Optimizing Game Loading Time: Sprite.CreateSprite() and Base64

The loading time of our game has been horrendous lately. I made an editor script where we just press Alt + Shift + Z to play the game in the Unity editor. My definition of loading time is the duration from pressing Alt + Shift + Z to the game actually being playable in the editor. The loading time scales with the game file being loaded: the bigger the school, the slower the load. I tested the starting map and even that takes around 40 seconds to load. That's horrible!

Reducing this load time would be a substantial win for productivity. When making games, we usually spend most of our time testing instead of writing code. Just imagine: you make a small change to your code, then you go to the editor. It compiles, which also takes time. Then you play the game, which takes at least another 40 seconds. After 10 edits, you've lost 400 seconds (6.67 minutes). At 50 edits, that amounts to 33.33 minutes. How many edits do you make in a day?

I spent some time today trying to reduce this unproductive time. I was able to shave off 20 seconds.


Sprite.CreateSprite() is slow

I don't know why, but this method is so slow. The way we maintain our sprite assets might have something to do with it. We keep all our sprites in the StreamingAssets folder. At runtime, we load them and pack all of them into a single big atlas. We usually use Sprite.CreateSprite() for our dynamically loaded UI; some buttons request icons that are in the atlas. I've fixed this by using RawImage instead of Image. Using RawImage requires extra handling, but we already have the data that it needs. Just avoiding Sprite.CreateSprite() saved 10 seconds of loading time.

public void InitIconImage() {
    PackedTextureEntry packedEntry = PackedTextureManager.GetPackedTextureEntry(this.buildSelectorObject.SpriteId);
    Assertion.AssertNotNull(packedEntry, this.buildSelectorObject.SpriteId, this.gameObject);

    // selectorRawImage is a RawImage
    // Instead of creating a Sprite, we point the RawImage directly
    // to the atlas texture and the UV rect of the packed entry
    this.selectorRawImage.texture = packedEntry.Atlas;
    this.selectorRawImage.uvRect = packedEntry.UvRect;
}

Too many string-to-int conversions

We use XML in our save files for readability purposes. It's good for us and for our players. As a tile-based game, a two-dimensional vector of integers is ubiquitous for us. In fact, we had a class for this even before Unity released theirs. We have a data model that is just a list of these vectors. In XML, it looks like this:

    <Entry x="93" y="7" />
    <Entry x="93" y="8" />
    <Entry x="93" y="9" />
    <Entry x="93" y="10" />
    <Entry x="93" y="11" />
    ... <!-- More entries -->

On deserialization, we convert the x and y values from these XML attributes, which are strings. The more entries there are, the more conversions are needed, and the slower the loading time.

I did some research and stumbled upon good old Base64. C# already has methods for converting byte arrays to Base64 strings and vice versa. So I thought of storing the coordinates in a byte array and using its Base64 string in the XML save file. This was easier than I thought.

private const string BASE64 = "Base64";
private const int INTEGER_SIZE = sizeof(int);

private readonly List<byte> bytes = new List<byte>();

private void WritePositions(XmlWriter writer, HashSet<IntVector2> positions) {
    // Write positions as a Base64 string of the positions as bytes
    this.bytes.Clear();
    this.bytes.Capacity = positions.Count * (INTEGER_SIZE * 2); // Multiply by two here because there are two integers per position

    foreach (IntVector2 position in positions) {
        this.bytes.AddRange(BitConverter.GetBytes(position.x));
        this.bytes.AddRange(BitConverter.GetBytes(position.y));
    }

    string base64 = Convert.ToBase64String(this.bytes.ToArray());
    writer.WriteAttributeString(BASE64, base64);
}

private void LoadPositions(SimpleXmlNode node, HashSet<IntVector2> positions) {
    // POSITIONS is a node-name constant defined elsewhere in the class
    SimpleXmlNode positionsNode = node.FindFirstNodeInChildren(POSITIONS);
    if (positionsNode != null) {
        byte[] bytes = Convert.FromBase64String(positionsNode.GetAttribute(BASE64));
        int positionCount = bytes.Length / (INTEGER_SIZE * 2);

        int index = 0;
        for (int i = 0; i < positionCount; ++i) {
            int x = BitConverter.ToInt32(bytes, index);
            index += INTEGER_SIZE;

            int y = BitConverter.ToInt32(bytes, index);
            index += INTEGER_SIZE;

            positions.Add(new IntVector2(x, y));
        }
    }
}
The classes BitConverter and Convert can be found in the System namespace. This shaved another 10 seconds (or more for bigger data), which is huge! Saving a list of integer vectors this way is much faster than converting individual integer attributes.
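For illustration, here's a minimal, self-contained sketch of the same round trip that runs outside of our codebase. It uses plain (x, y) tuples in place of our IntVector2 and skips the XML plumbing; PositionCodec is a made-up name for this example.

```csharp
using System;
using System.Collections.Generic;

public static class PositionCodec {
    private const int INTEGER_SIZE = sizeof(int);

    // Pack every (x, y) pair into a byte list, then Base64 the whole thing
    public static string Encode(IList<(int x, int y)> positions) {
        List<byte> bytes = new List<byte>(positions.Count * INTEGER_SIZE * 2);
        foreach ((int x, int y) position in positions) {
            bytes.AddRange(BitConverter.GetBytes(position.x));
            bytes.AddRange(BitConverter.GetBytes(position.y));
        }

        return Convert.ToBase64String(bytes.ToArray());
    }

    // Reverse the process: Base64 to bytes, bytes to (x, y) pairs
    public static List<(int x, int y)> Decode(string base64) {
        byte[] bytes = Convert.FromBase64String(base64);
        int positionCount = bytes.Length / (INTEGER_SIZE * 2);

        List<(int x, int y)> positions = new List<(int x, int y)>(positionCount);
        int index = 0;
        for (int i = 0; i < positionCount; ++i) {
            int x = BitConverter.ToInt32(bytes, index);
            index += INTEGER_SIZE;

            int y = BitConverter.ToInt32(bytes, index);
            index += INTEGER_SIZE;

            positions.Add((x, y));
        }

        return positions;
    }
}
```

The single Base64 attribute replaces hundreds of per-entry string parses, which is where the savings come from.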


I didn't think I would be able to reduce the loading time by at least 20 seconds with just a couple of changes. Shaving 5 seconds would have been fine by me. The work is far from over, though. I could still optimize other parts, but they are more complicated. It would take more effort and I might introduce bugs. I'm happy with my huge savings for now.


All Moving Sprites in One Draw Call!!?

I never thought it was possible, but it is. I've tried it and used it in Academia, with the help of somebody else's script of course. It's not without drawbacks.

Optimizing multiple moving sprites is, I think, the hardest one to pull off. Even if they're already sharing the same material, they still don't batch consistently. When they do batch, there are so many of them that the process of dynamic batching actually takes more CPU cycles and bogs down the frame rate. Most of the time, the batched sprites are scattered all around.

Moving sprites are batched like this

While browsing the internet for optimization ideas, I stumbled upon the good old Unify Community wiki. I hadn't visited the site for a while, and I saw that there are a lot of new shared scripts. So I went through each one and found SpriteManager by Brady Wright (thank you so much!). It's actually an old script. It claims it can draw multiple moving objects in one draw call. A bold claim, I thought. I went ahead and tried a simple test. The usage is a bit rough, but I was able to roll with it. To my amazement, the script works. I spent the next two days integrating this gem into our game. Now, all the moving agents in the game are rendered in a single draw call.


I took a good look at the code to understand how it works. It maintains a single mesh for all sprites that were requested from the manager. The manager transforms each vertex in LateUpdate() using the Transform of each sprite's owner. It then reassigns these transformed vertices to the mesh, thus updating the positions of the quads. The engine just renders this one big mesh. It's simple and clever.
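A stripped-down sketch of that idea might look like the following. This is not Brady Wright's actual code; the class, method, and field names are made up, and triangle/UV setup is omitted to keep the core trick visible.

```csharp
using System.Collections.Generic;
using UnityEngine;

// One shared mesh holds a quad per sprite; LateUpdate() re-transforms
// every quad's vertices from its owner's Transform each frame.
[RequireComponent(typeof(MeshFilter), typeof(MeshRenderer))]
public class BatchedSpriteManager : MonoBehaviour {
    // Local-space corners of a unit quad
    private static readonly Vector3[] CORNERS = {
        new Vector3(-0.5f, -0.5f), new Vector3(0.5f, -0.5f),
        new Vector3(0.5f, 0.5f), new Vector3(-0.5f, 0.5f)
    };

    private readonly List<Transform> owners = new List<Transform>();
    private Vector3[] vertices = new Vector3[0];
    private Mesh mesh;

    private void Awake() {
        this.mesh = new Mesh();
        GetComponent<MeshFilter>().mesh = this.mesh;
    }

    // Adds a quad for the specified owner (triangles/UVs omitted for brevity)
    public void AddSprite(Transform owner) {
        this.owners.Add(owner);
        System.Array.Resize(ref this.vertices, this.owners.Count * 4);
    }

    private void LateUpdate() {
        for (int i = 0; i < this.owners.Count; ++i) {
            Matrix4x4 localToWorld = this.owners[i].localToWorldMatrix;
            for (int corner = 0; corner < 4; ++corner) {
                this.vertices[(i * 4) + corner] = localToWorld.MultiplyPoint3x4(CORNERS[corner]);
            }
        }

        // Reassign the transformed vertices; the engine draws one big mesh
        this.mesh.vertices = this.vertices;
    }
}
```

Because there is only one mesh and one material, the whole set of moving sprites is submitted as a single draw call; the per-frame cost moves to the CPU-side vertex transforms instead.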

An obvious drawback is that it doesn't do culling. Objects outside the camera's view are still rendered, and the vertex transformation is still computed for sprites that should have been culled. This isn't a real issue for us because most players play with the camera zoomed out most of the time. We expect players to play this way because one would naturally want to see how their school is doing. Culling would not contribute much.

Eventually, of course, we'll probably hit a number of sprites where this system starts to bog down. We can always implement culling or optimize the code ourselves when we reach that point. Until then, we're letting it be. It's already good.

Static Batching for Dynamically Loaded Objects

From time to time, I've been trying to optimize the rendering of Academia. I've already employed many techniques (here, here, and here) to promote dynamic batching, and it seems they're not enough. My hypothesis is that dynamic batching hogs the CPU when there are more objects to batch (around 10k to 15k saved by batching). It's time to look elsewhere.

So I took a closer look at static batching. From what I've read, static batching is faster than dynamic batching, but at the cost of more memory. We're still below our memory budget, so static batching is worth a try. It's also quite smart. I tried a simple scene that was statically batched, then deactivated an object. The object became invisible, of course, but I was genuinely impressed. I had thought the engine would create one new, bigger mesh from all the combined meshes, a mesh you could no longer update per object. I'm glad I was wrong, since this means I don't have to code something like remembering each object's vertices and updating them when an object is deactivated. I don't have to change the existing game logic at all.

Majority of the objects in the game are non-moving

Static batching has been ticked in the Player Settings since forever, but when I ran the frame debugger, I didn't see any static batching at all, even though the majority of the objects in the game were already set to static. I've been using Unity since 2011 and there's still stuff I don't fully understand. I discovered that static batching does not work on dynamically loaded objects, which is how our game runs. It only works for objects that were baked into the scene and set to static. For dynamically loaded objects, you have to manually call StaticBatchingUtility.Combine().


When I implemented the manager that calls StaticBatchingUtility.Combine() strategically, the first issue I encountered was that you can't update the UVs of the combined mesh. This means that if an object manipulates UVs, like sprite animations do, it cannot be added to the meshes to be combined. The engine throws an error if you manipulate the UVs of a combined object. I had to filter out which objects can be combined. I used the StaticBatchingUtility.Combine() overload that accepts an array of GameObjects to combine.

When I finished the static batching manager class, I thought everything would just work. When I ran the game: still zero static batches. This brought on a lot more trial and error and so much headache that I almost gave up. When I was about to shut down, I read the documentation again and saw: "The GameObject in gos must have MeshFilter components attached for this to work." It was such a eureka moment. I usually structure my game elements so that the meshes or sprites are child GameObjects. I was combining the parent objects instead of the child objects that contain the MeshFilter. Don't make this mistake.
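A minimal sketch of that fix looks like the following. This is not our actual manager class (StaticBatcher and its names are made up for illustration); the point is collecting the GameObjects that actually carry the MeshFilter, which are often children rather than the prefab roots.

```csharp
using System.Collections.Generic;
using UnityEngine;

public static class StaticBatcher {
    // Combines the child GameObjects that carry MeshFilter components.
    // staticRoot becomes the parent under which the batch is anchored.
    public static void Combine(IEnumerable<GameObject> roots, GameObject staticRoot) {
        List<GameObject> combinables = new List<GameObject>();
        foreach (GameObject root in roots) {
            // GetComponentsInChildren() also includes the root itself if it has a MeshFilter
            foreach (MeshFilter filter in root.GetComponentsInChildren<MeshFilter>()) {
                // Objects with UV manipulation (like sprite animations) should be
                // filtered out here; how to detect them is game specific
                combinables.Add(filter.gameObject);
            }
        }

        // The documented requirement: each GameObject passed in
        // must have a MeshFilter attached
        StaticBatchingUtility.Combine(combinables.ToArray(), staticRoot);
    }
}
```

Passing the parents here would silently produce zero static batches, which is exactly the trap described above.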


In the end, I finally got it to work. It saved me 1-2 ms of rendering time. It's not much, but it proved that static batching is indeed faster. I've only implemented it in one part of the game so far. There are still other parts where I could apply it.

That’s it for now. If you have unique rendering optimization techniques, please do share them.


Thoughts About ECS

There's been a lot of talk about the Entity-Component-System (ECS) pattern lately. Even more interesting is that Unity is planning to go this route come Unity 2018. Part of a programmer's career is trying out different stuff, so I'm definitely going to try Unity's ECS. But I do have some concerns, and I'm hoping someone could shed some light.

But Unity already has components?

Currently, Unity employs the "Composition" design pattern to model its entities. Some people like to call it the "Entity-Component" pattern, but I don't like that term because it adds to the confusion; most people use the terms "pattern", "architecture", and "system" interchangeably. For simplicity, I'll just refer to it as the "Composition" pattern here. Some would say that Unity is not using "pure" ECS. I find that weird. Of course not. I don't think Unity's pattern is trying to be ECS. It's an entirely different design. They have similarities, but ECS is a completely different thing.

In the composition pattern, you create different component classes where each class handles a different domain. A game entity is then defined as a combination of these classes. For example, a game character might have Transform, SpriteRenderer, AI, CharacterController, and Rigidbody components. Another entity, say a static rock, may only have Transform, SpriteRenderer, and Collider components; it doesn't need the other character-related components. Basically, you assemble components to model your entities. This pattern is more flexible than using inheritance. It needs no further explanation here; there are lots of articles written about this problem.

How does ECS differ then?

In terms of flexibility and concept, composition and ECS are mostly the same. The defining difference, at least for me, is the layout of component data in memory, which in turn affects how it should be programmed. ECS was designed around how modern computers store data in memory, and it uses that knowledge to increase performance.

While CPU speeds keep increasing, memory access speed has not kept up. As a result, the performance of programs is dictated by how efficiently they access data from RAM. This is the reason CPUs have caches. The CPU fetches a chunk of data from RAM and hopes that the upcoming instructions use data found in that chunk. If not, it has to evict the current chunk and fetch another one from main memory where the target data resides. Remember that RAM access is slow; if the CPU frequently fetches data from RAM, it spends most of its time waiting for the fetch instead of executing instructions. This page from Game Programming Patterns explains this very well.

If you want high-performance software, you want more cache hits and fewer cache misses. Ideally, you want the bytes of your data laid out next to each other, as this increases the chance of cache hits. This is also referred to as "cache coherence". However, OOP languages like C# do not usually work like this. When you instantiate an object of a class, you're allocating memory on the heap, which does not guarantee ordered alignment. The actual block of bytes of that object can be allocated in different places; you have no control over this. The C# runtime does it for you automatically. Internally, what you get in your variable is just a pointer to the memory address where the data is stored. Say you have an array of Enemy class instances. What you actually have is an array of pointers; the blocks of bytes that represent each Enemy are not necessarily in order.

Fortunately, C# has structs. When you declare an array of a struct, the bytes of those instances are laid out next to each other. (There may be cases where they're not; I'm not a C# compiler expert.) This is where ECS comes in. Instead of components being classes with methods, components are reduced to data only. Since they're only data, components can no longer have polymorphic capabilities, so they can be declared as structs. Logic now resides in the "System" part of ECS. A system collects entities with the particular components it needs, then executes on this set of components, which in memory are laid out next to each other. This promotes more cache hits and, in theory, makes the program run more efficiently.

In code, composition looks like this (not necessarily Unity):

class Motion : Component {
    private Entity owner;
    private Transform transform;

    private float speed;
    private float acceleration;
    private Vector3 direction;

    public void Init() {
        this.transform = this.owner.GetComponent<Transform>();
    }

    public void Update() {
        this.speed += this.acceleration * Time.deltaTime;
        this.transform.position += this.direction * this.speed * Time.deltaTime;
    }
}

// Usage
Entity runner = new Entity();
// ... add Transform and Motion components to runner here

entitiesManager.Update(); // Motion.Update() is invoked here

In ECS, it would look like this (not necessarily based on an existing framework):

struct Motion : IComponent {
    public float speed;
    public float acceleration;
    public Vector3 direction;
}

// Framework or engine automatically injects entities
// that have both Transform and Motion components
class MovementSystem : System {
    private Container<Transform> transforms;

    private Container<Motion> motions;

    public void Update() {
        // This is faster because memory access doesn't jump around to different locations
        for(int i = 0; i < this.EntityCount; ++i) {
            Update(ref this.transforms[i], ref this.motions[i]);
        }
    }

    private void Update(ref Transform transform, ref Motion motion) {
        motion.speed += motion.acceleration * Time.deltaTime;
        transform.position += motion.direction * motion.speed * Time.deltaTime;
    }
}

// Usage
Entity runner = new Entity();
// ... add Transform and Motion components to runner here

Ecs.AddSystem(new MovementSystem());
// ... More systems

Ecs.Update(); // MovementSystem.Update() is invoked here
My concerns

I haven't really programmed a full game in ECS. I've made a prototype that's not really a game, just to get a grasp of how it works. I'm not an ECS expert. The concerns I'm listing here are purely subjective. I'm hoping someone who has worked on a full game using ECS can share their experience and bust these concerns.

Off the top of my head, my biggest concern is maintainability. I'm big on maintainability. The OOP way is my comfort zone; it's what I'm good at. I've developed patterns with it to answer most problems. Switching to ECS is already a maintenance problem by virtue of it not being something I'm used to.

In OOP, concepts like information hiding and encapsulation result in consistent invariants. These concepts have the largest influence on how I approach programming. I think of programming as making little inventions with as small a user interface as possible. ECS just throws all of that away. Every time I expose data, I cry a little inside.

Sure, I could still make some data private and provide methods for consistent mutation in some instances. But in ECS, I can no longer have a component that holds references to other components and provides methods for clean mutation among them. The data of these components has to be exposed, and the mutation logic is implemented in systems. This implies that any other system has access to that data and can therefore cause mutations that were never supposed to happen. This may be a non-problem; maybe it's just the curse of OOP. But still, I feel anxious when I think about it.

Another maintenance issue I see is the use of structs itself. I'm so used to reference types that I may forget that structs are value types. When you pass a class instance to a mutating function, the default expectation is that the passed instance will be changed. This is not the case for structs: they are value types, so they are copied when passed to a function. The struct instance you pass to a mutating function will not change unless the parameter has the "ref" qualifier, which means pass by reference. So if you forget to add "ref" to your functions, you may end up with buggy code. And this is just one struct issue. How many more will I discover?

struct TestStruct {
    public int x;

    public TestStruct(int x) {
        this.x = x;
    }
}

private void Mutate(TestStruct instance) {
    instance.x += 5;
}

TestStruct test = new TestStruct(1);
Mutate(test);
Debug.Log(test.x); // Does it display 1 or 6?
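(The answer: it displays 1, because Mutate receives a copy.) Here is a self-contained version you can run outside Unity that also shows the "ref" fix; StructDemo is just a name made up for this example.

```csharp
using System;
using System.Diagnostics;

public struct TestStruct {
    public int x;

    public TestStruct(int x) {
        this.x = x;
    }
}

public static class StructDemo {
    // Receives a copy; the caller's struct is untouched
    public static void Mutate(TestStruct instance) {
        instance.x += 5;
    }

    // Receives the caller's struct by reference; the mutation sticks
    public static void MutateRef(ref TestStruct instance) {
        instance.x += 5;
    }

    public static void Main() {
        TestStruct test = new TestStruct(1);

        Mutate(test);
        Console.WriteLine(test.x); // prints 1, not 6

        MutateRef(ref test);
        Console.WriteLine(test.x); // prints 6
    }
}
```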

Another problem I foresee is that I may not be able to reuse OOP-based libraries effectively. I have lots of utility classes that I've developed throughout my career. They're good because they're battle-tested. Most Asset Store products are OOP-based. Most open source/free libraries are OOP-based. I may not be able to use them with ECS, since any library feature that goes inside components needs to be a struct to maintain cache coherence. These classes would have to be ported to the data-oriented way. Either you break cache coherence or you rewrite libraries so they can be used the ECS way. If you break cache coherence just so you can use a library, by how much and how often can you do so? If you do it enough, you're contradicting the reason you're using ECS in the first place. And if you're not going to exploit cache coherence, you might as well use composition. It's a balancing act, and I don't have the right answers.

The programming language itself is a problem. C# has OOP features, and if you're used to OOP, the temptation is always there to fall back on OOP solutions. If Unity is going to introduce ECS, I wish they would introduce a new language for it, something that does not allow classes or encapsulation; Go or Rust would be a good fit. That way, developers would be forced to think in ECS, with no way to revert to OOP because the language doesn't allow it. It would not be strange for Unity to do this, since they initially shipped with three supported languages. Keep C#, but a more appropriate language could be introduced for their ECS.


These are my thoughts on ECS. It feels good to just let them out. I hope someone can shed some light and debunk my fears.

Even with these concerns, I'm still going to dive into Unity's ECS when it comes out. It's a new world to explore, a whole new paradigm. It will take some time to get the hang of, and the OOP solutions I know will need ECS translations.

Script to Play the Main Scene from Anywhere

This is related to the multiple-scene development setup I've described before. One of my problems was that whenever I was working on a scene and had to test, I had to locate the Main scene, load it, then hit Play. After testing, I had to load the scene I was working on again. Sometimes I forgot which scene that was. As you can imagine, this is really frustrating and eats up time.

Fortunately, Unity lets you make scripts that run in the editor. I made a script that loads the Main scene and runs it. When the game is stopped, the script loads the scene I was previously working on. The script can be executed via a menu item or by pressing Alt + Shift + Z.

Here it is:

// Requires: using UnityEditor; using UnityEditor.SceneManagement;
public static class RunUtils {

    static RunUtils() {
        EditorApplication.playmodeStateChanged += LoadLastOpenedScene;
    }

    // pref IDs
    private const string LAST_OPENED_SCENE = "Game.LastOpenedScene";
    private const string PLAYED_USING_RUN_UTILS = "Game.PlayedUsingRunUtils";

    // Adjust this to your project's Main scene path
    private const string MAIN_SCENE_PATH = "Assets/Main.unity";

    // bool states
    private static bool aboutToRun = false;

    [MenuItem("Game/Run Game #&z")]
    public static void Run() {
        SceneSetup[] setups = EditorSceneManager.GetSceneManagerSetup();
        if (setups.Length > 0) {
            EditorPrefs.SetString(LAST_OPENED_SCENE, setups[0].path);
        }

        EditorPrefs.SetBool(PLAYED_USING_RUN_UTILS, true);
        aboutToRun = true;

        // Refresh first to cause compilation and include new assets
        AssetDatabase.Refresh();

        EditorSceneManager.OpenScene(MAIN_SCENE_PATH);
        EditorApplication.isPlaying = true;
    }

    private static void LoadLastOpenedScene() {
        if (EditorApplication.isPlaying || EditorApplication.isCompiling) {
            // changed to playing or compiling
            // no need to do anything
            return;
        }

        if (!EditorPrefs.GetBool(PLAYED_USING_RUN_UTILS)) {
            // this means that normal play mode might have been used
            return;
        }

        // We added this check because this method is still invoked while EditorApplication.isPlaying is false
        // We only load the last opened scene when the aboutToRun flag is "consumed"
        if (aboutToRun) {
            aboutToRun = false;
            return;
        }

        // at this point, the scene has stopped playing
        // so we load the last opened scene
        string lastScene = EditorPrefs.GetString(LAST_OPENED_SCENE);
        if (!string.IsNullOrEmpty(lastScene)) {
            EditorSceneManager.OpenScene(lastScene);
        }

        EditorPrefs.SetBool(PLAYED_USING_RUN_UTILS, false); // reset flag
    }
}

This is a fun little script. The key here is to register a delegate with the event EditorApplication.playmodeStateChanged, which is called whenever the editor's play state changes (play, paused, or stopped). By using EditorPrefs, we can save values like LAST_OPENED_SCENE and PLAYED_USING_RUN_UTILS that we can access whenever the delegate is invoked. I used EditorPrefs rather than normal variables because I can't guarantee how the editor keeps variables; I'm not sure their values survive, say, a recompilation.

Note also that I call AssetDatabase.Refresh() before running the scene. This is because I prefer to turn off auto refresh (you can do this in Preferences); it's annoying when the editor keeps compiling once your game gets big. But without calling AssetDatabase.Refresh(), I might forget to refresh the editor before running the game. Doing it this way ensures recompilation whenever I run the game with updated code.


My Thresholds for Refactoring

I read the book Code Complete about a decade ago. The mantra of that book is "the goal of software is to manage complexity." That phrase is repeated throughout the book and has stuck with me ever since. The idea is that since we make software to manage complexity, we should strive to manage the complexity of our code as well. The book promotes maintainability as the primary metric to work on when making software: if your code is maintainable, other software metrics like robustness, correctness, and efficiency will follow. I don't want to elaborate on how and why this is; the book covers it in detail, and I have found it to be true over my years of software development.

This maintainability-first mindset has also served me well as a game programmer. Video games are in a somewhat precarious position because you're not managing real-world complexity; rather, the complexity is artificial, made entirely from imagination. It comes in the form of game design, which includes the graphical representation and game mechanics. It's complexity nonetheless, and often more complicated, too. Over years of programming, I've developed a brain personality that starts nagging me when I reach certain thresholds in my code. I don't want to write about how to make your code maintainable per se. Instead, I want to share some common thresholds in code that trigger refactoring for me. Applying them frequently, sometimes unconsciously, leads to maintainable code. In programming lingo, these are also referred to as "code smells". There are lots of them, but I'll only enumerate the ones I commonly encounter, and thus commonly refactor.

Cyclomatic Complexity

Cyclomatic complexity is simply the total number of control flow branches within a function, method, or routine, plus one. Control flow branches are caused by if, while, for, foreach, do-while, and catch clauses (including finally), and by each case in a switch statement. Each AND (&&) and OR (||) operator is also included in the count. I consider anything that breaks the linear flow of statements to be a branch; thinking about cyclomatic complexity this way makes it transferable to any programming language with special operators or constructs. My threshold is 10. When I see a function with a cyclomatic complexity of more than 10, that function needs to be refactored. This can often be fixed simply by introducing a new function and moving some of the control flow branching from the original function to the new one.

For example, the following method has a cyclomatic complexity of 4 (remember: plus one) and is safe from being ripped apart:

public static Transform FindTransformByName(Transform transformRoot, string name) {
    if (name.Equals(transformRoot.name)) {
        return transformRoot;
    }

    // search in children
    Transform searchResult = null;
    foreach (Transform childTransform in transformRoot) {
        searchResult = FindTransformByName(childTransform, name);
        if (searchResult != null) {
            return searchResult;
        }
    }

    return null;
}

Of course there are exceptions. If a switch statement has 10 or more case branches and there's no other logical way to write the function, I'll probably just leave it be. But this rarely happens; there are many ways to restructure code to avoid such a pattern.

Parameter Count

This is self-explanatory: it's the number of parameters that a function/method/routine has. My threshold for this is 7. Seven parameters is already a stretch for me, and I itch to refactor such a function. More than 7 is a real no-no.

There are many ways to fix this. Introducing a new function and moving some parameters there might do the trick. A set of parameters that logically go together might be bundled into a struct or another class. Functions with this many parameters are probably big, so turning the whole function into a separate class where the parameters become member variables could be viable. That way you can further refactor the big function into several smaller functions, all contained in the new class.
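As a quick sketch of the bundling option (all names here are made up for illustration), related parameters can travel together in one struct:

```csharp
using System;

// Before: something like
//   void Spawn(string name, int level, int x, int y, float speed, bool isHostile, ...)
// After: the logically related parameters become one struct
public struct SpawnParams {
    public string Name;
    public int Level;
    public int X;
    public int Y;
    public float Speed;
    public bool IsHostile;
}

public static class Spawner {
    // One parameter instead of six; adding a new field later
    // doesn't break every call site
    public static string Describe(SpawnParams spawnParams) {
        return spawnParams.Name + " (level " + spawnParams.Level + ") at ("
            + spawnParams.X + ", " + spawnParams.Y + ")";
    }
}
```

A side benefit: call sites that build a SpawnParams name each value explicitly, which reads better than a long positional argument list.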

Nesting Depth

Nesting is when you indent a block of code due to a control flow construct like if, for, or while. For example, the following code has a nesting depth of 3:

for(...) {
    if(...) {
        if(...) {
            // ...
        }
    }
}
Deep nesting is bad because it makes code hard to read, and thus less maintainable. My threshold is 3. Three is already an eyesore to me; any nesting I see beyond that gets refactored away into another function.
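One common way to flatten nesting is to use guard clauses and to combine conditions. A made-up before/after sketch:

```csharp
public static class NestingDemo {
    // Before: nesting depth of 3
    public static int SumOfPositiveEvens(int[] values) {
        int sum = 0;
        if (values != null) {
            foreach (int value in values) {
                if (value > 0) {
                    if (value % 2 == 0) {
                        sum += value;
                    }
                }
            }
        }

        return sum;
    }

    // After: a guard clause plus a combined condition keep the depth shallow
    public static int SumOfPositiveEvensFlat(int[] values) {
        if (values == null) {
            return 0; // guard clause instead of wrapping everything in an if
        }

        int sum = 0;
        foreach (int value in values) {
            if (value > 0 && value % 2 == 0) { // one level instead of two
                sum += value;
            }
        }

        return sum;
    }
}
```

Both versions compute the same result; only the shape of the code changes.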

Inheritance Depth

I'm glad that modern programming is moving away from inheritance. Game engines like Unity use components as building blocks instead of inheritance. In hindsight, this makes inheritance no longer the de facto way of making a game, unlike in the old days. I won't elaborate on why inheritance makes software less maintainable; you can search for that. Nonetheless, inheritance is still a tool that can be used properly. I do use it, albeit sparingly.

Inheritance depth is the number of ancestors in an inheritance chain. In the following example, GreatGrandChild has an inheritance depth of 3:

class Guy { ... }
class Child : Guy { ... }
class GrandChild : Child { ... }
class GreatGrandChild : GrandChild { ... }

My threshold for inheritance depth is 3. When that happens, I become very careful about deriving new classes in that hierarchy. An inheritance depth of more than 3 means I may have to restructure my code. Sometimes there's no better way, and I do let it go if adding another child class is the quickest and most logical way to do something; inside me, though, the alarm bells are already deafening. The key to managing inheritance is to use it infrequently and to design with as flat a structure as possible, ideally an inheritance depth of 1.

Callback Count

I want to be clear that I'm not talking about callback chain counts here; that's different. If you have a JavaScript or web background, you know what I'm talking about. Callback chaining is barely used in game programming, at least by me, because I avoid it. Callback count is simply the number of callbacks a class uses. If you have an exposed member variable that is an Action<> or Func<> that can be assigned from outside the class, that's probably a callback.

What usually happens is I introduce one callback, then later another, then much later a new one. Then I look at it all and mentally decide that it's ugly.

My threshold for callback count is 2; three is just too much for me. At that point, I'll refactor to an interface and manage instances of the interface instead. Maintaining separate callbacks becomes unwieldy and clunky as they multiply. For example, say you have this class with a callback count of 4:

class RouteAssignmentManager {
    private List<Action<RouteAssignment>> assignmentAddedActions = new List<Action<RouteAssignment>>();
    private List<Action<RouteAssignment>> assignmentRemovedActions = new List<Action<RouteAssignment>>();
    private List<Action<RouteAssignment>> assignmentIncreasedActions = new List<Action<RouteAssignment>>();
    private List<Action<RouteAssignment>> assignmentDecreasedActions = new List<Action<RouteAssignment>>();

    public void AddAssignmentAddedAction(Action<RouteAssignment> action) {
        this.assignmentAddedActions.Add(action);
    }

    public void AddAssignmentRemovedAction(Action<RouteAssignment> action) {
        this.assignmentRemovedActions.Add(action);
    }

    public void AddAssignmentIncreasedAction(Action<RouteAssignment> action) {
        this.assignmentIncreasedActions.Add(action);
    }

    public void AddAssignmentDecreasedAction(Action<RouteAssignment> action) {
        this.assignmentDecreasedActions.Add(action);
    }
}

Whenever you add one of these callbacks, you usually want to add the others as well because they are logically related. For example, if you register an added action but forget to implement its removed counterpart, you might introduce bugs.

In this case, it’s better to turn it into an interface and manage only a single list of instances, instead:

interface RouteAssignmentManagerObserver {
    void OnAssignmentAdded(RouteAssignment assignment);
    void OnAssignmentRemoved(RouteAssignment assignment);
    void OnAssignmentIncreased(RouteAssignment assignment);
    void OnAssignmentDecreased(RouteAssignment assignment);
}

class RouteAssignmentManager {
    private List<RouteAssignmentManagerObserver> observers = new List<RouteAssignmentManagerObserver>();

    public void AddObserver(RouteAssignmentManagerObserver observer) {
        this.observers.Add(observer);
    }
}

This is better because every time you want to listen to changes in RouteAssignmentManager, you are forced to implement all the related methods, so you can’t forget one. If you want to broadcast a new kind of change, you simply add a method to the interface. This breaks the implementing classes and forces you to look at each of them to see whether they need to handle the new change notification. You also reduce the number of variables that you need to manage. Now, it’s just one list.

There are exceptions of course. If the callbacks are totally unrelated, sometimes it’s not logical to introduce an interface to combine them. But still, I would try to limit callbacks to 2.

Duplicate Code

When I’m writing a function, sometimes I need a few lines of code that I wrote a while ago, so I copy and paste them into the function I’m writing. I’ve just introduced duplicate code. This first copy is usually ok. Another copy, however, is no longer good. It means I now have the same code in three different places, and that’s my threshold for duplicate code. Three strikes, and to a new function you go. For snippets of more than 5 lines, 2 copies should already send them to a function.
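As a contrived sketch of the rule (the names here are made up), the third copy of a snippet like this is what sends it to its own function:

```csharp
// This clamping logic used to be copy pasted into three methods.
// On the third copy, it moved into one place:
private static float ClampToRange(float value, float min, float max) {
    if (value < min) {
        return min;
    }

    if (value > max) {
        return max;
    }

    return value;
}
```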

Lines of Code

I’m referring to lines of code of a single class here. I usually write just one class for each file. My threshold is 500 lines of code. I don’t refactor right away if I see a class with this many lines of code. Rather, it signals to me that it’s probably no longer appropriate to add more logic or some kind of maintenance code to the class. Small functions like getter and setter are still ok to add. Small functions that are totally related to the class variables are also fine. 500 lines of code tells me that new features should probably be coded in a new class.


I’m not saying that these thresholds are the most optimal. They are just my preferences and have become a habit. You can decide on your own thresholds. For example, if you can handle a cyclomatic complexity of 20, good for you, because I can’t.

Lastly, if you want a bigger collection of code smells and how to refactor them, there are books for that. I’m not advertising them, and I’m in no way related to the authors. I’m simply saying that they’re good books and they are helpful.

How about you? What are your thresholds?

Simple Query System

While making games using multiple scene development, I always find the need to get some value or object that is provided by a system found in another scene. Like the signal system, I wanted a system where I could query values without knowing the type of the provider.

When I posted my signal system on Reddit, someone pointed me to a blog post describing a similar system that solves the same problem. I have to admit, I think it’s the better way. What I liked about it is that the events are type safe, down to their parameters. Refactoring code built on such a system would indeed be a lot easier.

I thought about this and realized that I could make a query system with the same goal in mind: type safety. I did make one and I think I’ll be using it in our game from now on. If you’re going to use this code, I have to warn you that it’s not as battle tested as the older query system that I’ve been using, so use it with caution.


I’ll start with how it would be used. There are three entities that interact with the system. These are the query request, query requester and the query provider. The requester and provider can be in the same scene or in different scenes. The query request is nothing more than a class that holds the parameters of the request. This is a sample request:

public class TestRequest : QueryRequest {

    private readonly int intParam;
    private readonly string stringParam;

    public TestRequest(int intParam, string stringParam) {
        this.intParam = intParam;
        this.stringParam = stringParam;
    }

    public int IntParam {
        get {
            return intParam;
        }
    }

    public string StringParam {
        get {
            return stringParam;
        }
    }
}

TestRequest here is an immutable class with two parameters. An actual game query request class may have more parameters.

Registration of the provider looks like this:

public class QueryManagerTestProvider : MonoBehaviour {

    private void Awake() {
        QueryManager.RegisterProvider<TestRequest, GameObject>(TestProvider);
    }

    private GameObject TestProvider(TestRequest request) {
        // Log the parameters just to show that they are passed
        Debug.Log("intParam: " + request.IntParam);
        Debug.Log("stringParam: " + request.StringParam);

        return this.gameObject;
    }
}

Providers are simply delegates so that they are easy to write. A single MonoBehaviour or class may register more than one provider.
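For example, a single MonoBehaviour could register providers for two different request types. (AnotherRequest here is a hypothetical request class, written the same way as TestRequest.)

```csharp
public class MultipleQueryProviders : MonoBehaviour {

    private void Awake() {
        QueryManager.RegisterProvider<TestRequest, GameObject>(TestProvider);
        QueryManager.RegisterProvider<AnotherRequest, int>(AnotherProvider);
    }

    private GameObject TestProvider(TestRequest request) {
        return this.gameObject;
    }

    private int AnotherProvider(AnotherRequest request) {
        return 42; // Some computed value
    }
}
```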

A sample query request will look like this:

public class QueryManagerTestRequester : MonoBehaviour {

    private void Start() {
        GameObject result = QueryManager.Query<TestRequest, GameObject>(new TestRequest(77, "Hello Query Manager"));
        Debug.Log("result: " + result.gameObject.name);
    }
}

Putting them together, the provider logs the request’s parameters and the requester logs the name of the returned GameObject.


In my test, QueryManagerTestProvider and QueryManagerTestRequester are placed in different scenes so that I could verify that the system works with such a setup. Essentially, I was able to retrieve a GameObject that lives in a different scene.

Framework Code

Let’s start with the base class QueryRequest:

public abstract class QueryRequest {
}

Yeah, that’s it. It’s just used as a common type for all request classes. You can see its usage later. Why is it not an interface? The intent is for the user to make separate, lightweight request classes. This prevents big classes or MonoBehaviours from also acting as query requests.

Next is the QueryManagerImplementation class, which is used internally by the static class QueryManager. It looks like this:

class QueryManagerImplementation {

    // The internal delegate that we manage
    private delegate object QueryProvider(QueryRequest request);

    private Dictionary<Type, QueryProvider> providerMap = new Dictionary<Type, QueryProvider>();

    public void RegisterProvider<R, V>(QueryManager.QueryProvider<R, V> provider) where R : QueryRequest {
        Type type = typeof(R);
        Assertion.Assert(!this.providerMap.ContainsKey(type)); // Should not contain a provider for this request type yet

        // Make the internal delegate which invokes the generic delegate
        QueryProvider internalProvider = delegate (QueryRequest request) {
            return provider((R)request);
        };

        this.providerMap[type] = internalProvider;
    }

    public bool HasProvider<R>() where R : QueryRequest {
        return this.providerMap.ContainsKey(typeof(R));
    }

    public V Query<R, V>(R request) where R : QueryRequest {
        Type type = typeof(R);

        // Invoke the provider
        // This will throw an error if a provider does not exist
        return (V)this.providerMap[type](request);
    }
}

Providers are simply maintained in a Dictionary where the key is the type of the request. The methods are self explanatory. The generic parameter R refers to the request type and V refers to the type of the result value. Notice how QueryRequest is used here as a constraint on R, which limits what class types can be passed.

Finally, the static QueryManager class looks like this:

public static class QueryManager {

    public delegate V QueryProvider<R, V>(R request) where R : QueryRequest;

    private static readonly QueryManagerImplementation INTERNAL_MANAGER = new QueryManagerImplementation();

    public static void RegisterProvider<R, V>(QueryProvider<R, V> provider) where R : QueryRequest {
        INTERNAL_MANAGER.RegisterProvider(provider);
    }

    public static bool HasProvider<R>() where R : QueryRequest {
        return INTERNAL_MANAGER.HasProvider<R>();
    }

    public static V Query<R, V>(R request) where R : QueryRequest {
        return INTERNAL_MANAGER.Query<R, V>(request);
    }
}


Like Signals, making a query request is slower than calling the provider method directly. Be careful not to use it inside Update() or in places where it is invoked repeatedly, like loops. Cache the results if you can.
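A small sketch of that caching advice, reusing the TestRequest class from earlier: query once in Start() and keep the result, instead of querying inside Update():

```csharp
public class CachedQueryUser : MonoBehaviour {

    private GameObject cachedResult;

    private void Start() {
        // Query once and cache the result
        this.cachedResult = QueryManager.Query<TestRequest, GameObject>(
            new TestRequest(77, "Hello Query Manager"));
    }

    private void Update() {
        // Use this.cachedResult here; no QueryManager call per frame
    }
}
```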

Another disadvantage of this system is garbage. Every time you want to query for a value, you may instantiate a request class, especially if it’s immutable. This can be mitigated by making a request class mutable, maintaining only one instance of it, and reusing that instance to make queries. Another way is to use the Factory design pattern for request instances.
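A minimal sketch of the mutable, single-instance mitigation (this trades away the immutability of the request class, so use it only where the garbage actually matters):

```csharp
public class ReusableTestRequest : QueryRequest {
    // One shared instance that is reused for every query
    public static readonly ReusableTestRequest Instance = new ReusableTestRequest();

    public int IntParam { get; set; }
    public string StringParam { get; set; }
}

// Usage: set the parameters, then query. No allocation per query.
ReusableTestRequest.Instance.IntParam = 77;
ReusableTestRequest.Instance.StringParam = "Hello";
GameObject result = QueryManager.Query<ReusableTestRequest, GameObject>(ReusableTestRequest.Instance);
```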

This is it for now. See you next time.

Simple Signal System

In the last post about multiple scene development, I briefly described a signal system that we use for inter scene communication. This post is about that system in full detail. I don’t claim that this is the best method. At the very least it works for us, but it can definitely be improved.

For objects in different scenes to communicate, I wanted something where objects don’t need a reference to objects in the other scene. As much as possible, I don’t want objects to hold direct references to objects in another scene (it can’t be completely avoided). The rationale is that I wanted to avoid missing references when we exclude certain scenes from loading at certain times. The game should still work even when some scenes don’t exist. I also wanted the system to be simple and straightforward to use, since it will be exposed for the modding support that we’re going to add later on.

Usage Sample

The gist of the system is that you have Signal instances which you can dispatch. Other parts of the game can then add listeners to a signal. When a signal is dispatched, the listeners will be executed. Parameters can also be added to a signal which can be queried by the listeners. Think of it as an observer pattern without a particular subject.

In our games, we maintain a static class where all game signals are collected:

public static class GameSignals {

    public static readonly Signal CLOSE_ALL_PANELS = new Signal("CloseAllPanels");

    public static readonly Signal HOUR_CHANGED = new Signal("HourChanged");
    public static readonly Signal DAY_CHANGED = new Signal("DayChanged");

    public static readonly Signal REQUEST_SCENE_TRANSITION = new Signal("RequestSceneTransition");

    ... // The rest of the other signals
}

Any component in any scene can add listeners to any of these signals. For example:

public class CashflowManager : MonoBehaviour {

    void Awake() {
        GameSignals.DAY_CHANGED.AddListener(ProcessCashflow);
    }

    void OnDestroy() {
        GameSignals.DAY_CHANGED.RemoveListener(ProcessCashflow);
    }

    private void ProcessCashflow(ISignalParameters parameters) {
        // Detailed cashflow processing implementation here
    }
}

During gameplay, one of the systems in the game (could be in any scene) will then dispatch the signal:

public class DayTimer : MonoBehaviour {
    void Update() {
        ... // Some code

        if(this.hour == 24) {
            // New day
            GameSignals.DAY_CHANGED.Dispatch();
        }

        ... // Some code
    }
}

Some signals require parameters. Dispatching signals with parameters looks like this:

Signal signal = GameSignals.ADD_FUNDS;
signal.ClearParameters();
signal.AddParameter(Params.VALUE, dailyCashflow);
signal.AddParameter(Params.ITEM, "CashFlow");
signal.Dispatch();

Let’s start with parameters. Support for parameters is a must. Most of the time they’re not used, but you will definitely need them at times. Parameters are stored as string and object pairs. The following is the interface for adding and getting parameters:

public interface ISignalParameters {

    void AddParameter(string key, object value);
    object GetParameter(string key);
    bool HasParameter(string key);
}

An implementation of this interface looks like this:

class ConcreteSignalParameters : ISignalParameters {

    private Stack<Dictionary<string, object>> parameterStack = new Stack<Dictionary<string, object>>();

    public void AddParameter(string key, object value) {
        this.parameterStack.Peek()[key] = value;
    }

    public object GetParameter(string key) {
        return this.parameterStack.Peek()[key];
    }

    public bool HasParameter(string key) {
        return this.parameterStack.Peek().ContainsKey(key);
    }

    public void PushParameters() {
        this.parameterStack.Push(NewParameterMap());
    }

    public void PopParameters() {
        Dictionary<string, object> map = this.parameterStack.Pop();
        map.Clear();
        POOL.Recycle(map); // Return the dictionary to the pool for reuse
    }

    public bool HasParameters {
        get {
            return this.parameterStack.Count > 0;
        }
    }

    // Pool of parameter dictionaries
    private static readonly Pool<Dictionary<string, object>> POOL = new Pool<Dictionary<string, object>>();

    private static Dictionary<string, object> NewParameterMap() {
        return POOL.Request();
    }
}

Parameters are stored in a Dictionary. Notice that we keep a pool of these dictionaries and that there’s stack manipulation involved. I’ll explain why later.

The Signal class looks like this:

public class Signal {

    private readonly string name;
    private ConcreteSignalParameters parameters;

    public delegate void SignalListener(ISignalParameters parameters);
    private List<SignalListener> listenerList = new List<SignalListener>();

    public Signal(string name) {
        this.name = name;
        this.listenerList = new List<SignalListener>();
    }

    public void ClearParameters() {
        // Lazy initialize because most signals don't have parameters
        if (this.parameters == null) {
            this.parameters = new ConcreteSignalParameters();
        }

        // Push a fresh parameter dictionary for the upcoming dispatch
        this.parameters.PushParameters();
    }

    public void AddParameter(string key, object value) {
        // This will throw an error if ClearParameters() is not invoked prior to calling this method
        this.parameters.AddParameter(key, value);
    }

    public void AddListener(SignalListener listener) {
        this.listenerList.Add(listener);
    }

    public void RemoveListener(SignalListener listener) {
        this.listenerList.Remove(listener);
    }

    public void Dispatch() {
        try {
            if (this.listenerList.Count == 0) {
                Debug.LogWarning("There are no listeners to the signal: " + this.name);
            }

            for (int i = 0; i < this.listenerList.Count; ++i) {
                // Invoke the listeners
                // Note that the parameters passed may be null if none were specified
                this.listenerList[i](this.parameters);
            }
        } finally {
            // Pop parameters on every Dispatch()
            // We check because some signals are dispatched without parameters
            if (this.parameters != null && this.parameters.HasParameters) {
                this.parameters.PopParameters();
            }
        }
    }

    public string Name {
        get {
            return name;
        }
    }
}

The Signal class keeps track of parameters and the list of listeners. Listeners are simply implemented as delegates that accept ISignalParameters. I specifically chose delegates so that listeners are easier to write. You can just write a method and use it right away. A component may also listen to multiple signals; the listeners can just be different methods within the component. If listeners were implemented as an interface, writing them would be more tedious.

The Dispatch() method simply invokes all listeners then pops the parameters.

Why the usage of stack in ConcreteSignalParameters?

The old implementation of this class maintained only one parameter dictionary. Unfortunately, this causes a bug when a listener of a signal dispatches the same signal again with a new set of parameters. When this happens, the remaining listeners will see the new parameters instead of the original ones. We needed a way to store the original parameters before dispatching the same signal, hence the stack that you see here. The assumption is that Signal.ClearParameters() should always be invoked prior to Signal.AddParameter(). A new parameter dictionary is pushed whenever ClearParameters() is called, and a parameter dictionary is popped in every Dispatch(). This is also why we pass parameters to listeners through an interface: listeners need not know about the stack manipulation.
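To make the bug concrete, here is a contrived listener that dispatches the same signal again (the parameter keys follow the earlier examples). With the stack, the inner dispatch pushes and later pops its own parameter dictionary, so listeners that run after this one still see the original parameters:

```csharp
private void OnAddFunds(ISignalParameters parameters) {
    int value = (int)parameters.GetParameter(Params.VALUE);

    if (value > 1000) {
        // Dispatching the same signal pushes a new parameter dictionary...
        Signal signal = GameSignals.ADD_FUNDS;
        signal.ClearParameters();
        signal.AddParameter(Params.VALUE, 1000);
        signal.Dispatch();
        // ...and Dispatch() pops it, so the listeners after this one
        // still read the original VALUE from the outer dictionary.
    }
}
```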

When to use?

This system is obviously less efficient than calling a method directly, so we only use it for things that are not required to run every frame. The need for it can also be mitigated by moving objects to their proper scene. For example, if an object requires a reference from another scene in its Update(), maybe that object should belong to that scene. If that’s not possible, then we need a reference to the object from the other scene. One could use GameObject.Find(), but I try to avoid that approach. We use another system for querying objects/values, which will be a topic for another day.

Multi Scene Development in Unity

Modern image processing software like Photoshop has a concept called “layers” that lets the user compartmentalize different parts of the whole image. Scenes in Unity can also be thought of as layers. Each can be developed separately, but they can be combined to form the whole game. Over the years of using Unity, I have been using this concept to develop my games.

Engine design wise, I think “scene” is the wrong term, because a scene denotes something visual, something you can see, something tangible. Intuitively, a user would think that a scene is for visible objects and their physics settings. Essentially, though, a scene is a container of GameObjects. In this sense, a scene could contain only non visible stuff like game managers and data managers. A scene could contain only cameras, or only UI. What this means is that you can make separate scenes for your game world, game logic, cameras, and various UI panels. When the game runs, you can additively load these scenes together, and collectively they make up the whole game.

The Basics

Let’s say you have two scenes, “GameWorld” and “GameHud”. It’s a good idea to make another scene that is responsible for loading the other scenes additively. Usually this is called the “Main” or “Startup” scene. Whenever you want to run the game, you play this Main scene instead. It has a component that looks like this:

using UnityEngine.SceneManagement;
... // Other using statements

class Main : MonoBehaviour {
    void Awake() {
        LoadSceneAdditively("GameWorld");
        LoadSceneAdditively("GameHud");
    }

    private static void LoadSceneAdditively(string sceneName) {
        SceneManager.LoadScene(sceneName, LoadSceneMode.Additive);
    }
}
Why do it this way?

I have three main reasons: better organization, better work delegation, and improved maintainability.

Better Organization

For example, I always make a separate scene for the system that handles audio. I have a separate scene for the system that handles persistence (saving and loading). Game static data management is contained in its own scene. I can manage the different domains of game elements in their respective scenes. In Academia, for example, the handling of each character type is its own scene.

Sometimes it’s reasonable to put the usage of third party software in a separate scene. If the game is level based (Level 1, Level 2, etc.), each level can be a separate scene. Each UI panel or screen can be its own scene. A game feature can be its own scene. This opens up many ways to organize your game.

Better Work Delegation

Now that the game is structured into multiple scenes, it becomes easier to assign work to teammates. You can assign someone to work on particular scenes, or tell them to add a new scene if it’s a new feature. An obvious example is to distribute work among different UI panels or screens. Doing it this way also reduces collaboration conflicts. You can design your development such that artists and designers edit only certain scenes while programmers work on their logic only scenes. It’s also easier to know whose fault it is when the game breaks. Just ask who owns the scene.

Improves Maintainability

Maintainability is very important to me. It’s the one thing that allows a software project to be shipped. By separating domains into different scenes, the areas to fix or update become clearer. A new feature now means a new scene. Scenes that work well can be forgotten, which means more space in your mind to focus on more important things. You can also think of scenes as another unit of refactoring: you can split up a big scene or merge smaller ones.

Just some of our scenes

How do you render different scenes into one display?

You use multiple cameras. You can check this awesome post. This is the reason why I maintain cameras in a separate scene. In every game, I maintain at least two cameras: one for the game world, the other for the UI.

How do scenes communicate?

There are lots of ways to do this. Personally, I prefer something that does not require a hard reference to some class or component. We use a signal system: one system signals something, and interested parties then do something. Kind of like a glorified observer pattern. We also use a query system. It’s like the signal system, but it returns a value. There are times when code in one scene requires a value from another scene. We use this query system to get that value without needing a reference to the provider. I’d like to delve into details, but this post is getting long, so maybe in another post.

Taking it Further

Instead of hardcoding the scenes to load like I showed in the Basics part, you can make something like an XML or JSON config for this. Every time there’s a new scene to load, you just add that scene to this file.

In our system, I designed something called “Load Profiles”. Basically, we maintain different profiles of loading scenes. For example, there’s a profile for loading the complete game, then there’s another profile that loads the game directly to gameplay which is used for testing. This can also be used to load different scenes for different platforms.
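A sketch of what such a config could look like. The file name, profile names, and classes here are hypothetical; JsonUtility and LoadSceneMode.Additive are standard Unity APIs:

```csharp
using System;
using System.IO;
using UnityEngine;
using UnityEngine.SceneManagement;

// Example StreamingAssets/sceneProfiles.json:
// {
//     "profiles": [
//         { "name": "Complete", "scenes": ["GameWorld", "GameHud", "Audio"] },
//         { "name": "GameplayOnly", "scenes": ["GameWorld", "GameHud"] }
//     ]
// }

[Serializable]
public class LoadProfile {
    public string name;
    public string[] scenes;
}

[Serializable]
public class LoadProfileList {
    public LoadProfile[] profiles;
}

public class ProfileLoader : MonoBehaviour {
    [SerializeField]
    private string profileName = "Complete";

    private void Awake() {
        string path = Path.Combine(Application.streamingAssetsPath, "sceneProfiles.json");
        LoadProfileList list = JsonUtility.FromJson<LoadProfileList>(File.ReadAllText(path));

        foreach (LoadProfile profile in list.profiles) {
            if (profile.name == this.profileName) {
                foreach (string sceneName in profile.scenes) {
                    SceneManager.LoadScene(sceneName, LoadSceneMode.Additive);
                }
            }
        }
    }
}
```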

You can design different patterns on how you load your scenes depending on your needs. This is it for now. See you next time.


Better C# Enums

I hate C#’s enum type. I avoid it if I can. Consider this example of an enum of planets:

public enum Planet {
    MERCURY,
    VENUS,
    EARTH,
    MARS,
    JUPITER,
    SATURN,
    URANUS,
    NEPTUNE,
    PLUTO // Pluto is a planet!!!
}

This is good enough to use for a while until a need arises to get the mass of a planet. So we do something like this:

// Returns the mass of the planet in 10^24 kg
public float GetMass(Planet planet) {
    switch(planet) {
        case Planet.MERCURY:
            return 0.330f;

        case Planet.VENUS:
            return 4.87f;

        case Planet.EARTH:
            return 5.97f;

        ... // The rest of the planets

        case Planet.PLUTO:
            return 0.0146f;
    }

    throw new Exception("Unrecognized planet: " + planet);
}

How about planet diameter? Another switch statement? What about density? Gravity? Escape velocity? Just think of the number of switch statements that you’d have to maintain. You could argue that you can use a Dictionary instead, but that’s still clunky. A Dictionary mapping per stat? No way.

There is a better way, and I’ll show you how. This may already be common knowledge among non Unity programmers, but I’d like to bring up this redundant topic on my blog for people who may not know it, especially beginners. I’d also like to keep it simple: there will be no need for reflection in my examples, and no inheritance either.

Basically, you can use a class as an enum. Why a class? It’s just better. You can store any amount of arbitrary data. You can even store a routine or a function. You can do a lot with it. The only requirement is that it should be immutable, meaning the state of an instance of the class cannot change throughout the duration of the program. Here’s a version of the Planet enum as a class:

public class Planet {
    // The different values
    public static readonly Planet MERCURY = new Planet(0, 0.330f, 4879, 5427, 3.7f);
    public static readonly Planet VENUS = new Planet(1, 4.87f, 12104, 5243, 8.9f);
    public static readonly Planet EARTH = new Planet(2, 5.97f, 12756, 5514, 9.8f);
    public static readonly Planet MARS = new Planet(3, 0.642f, 6792, 3933, 3.7f);
    public static readonly Planet JUPITER = new Planet(4, 1898.0f, 142984, 1326, 23.1f);
    public static readonly Planet SATURN = new Planet(5, 568.0f, 120536, 687, 9.0f);
    public static readonly Planet URANUS = new Planet(6, 86.8f, 51118, 1271, 8.7f);
    public static readonly Planet NEPTUNE = new Planet(7, 102.0f, 49528, 1638, 11.0f);
    public static readonly Planet PLUTO = new Planet(8, 0.0146f, 2370, 2095, 0.7f);

    // Use readonly to maintain immutability
    private readonly int id;
    private readonly float mass; // in 10^24 kg
    private readonly int diameter; // in km
    private readonly int density; // in kg/m^3
    private readonly float gravity; // in m/s^2

    // We use a private constructor because this should not be instantiated
    // anywhere else.
    private Planet(int id, float mass, int diameter, int density, float gravity) {
        this.id = id;
        this.mass = mass;
        this.diameter = diameter;
        this.density = density;
        this.gravity = gravity;
    }

    public int Id {
        get {
            return id;
        }
    }

    public float Mass {
        get {
            return mass;
        }
    }

    public int Diameter {
        get {
            return diameter;
        }
    }

    public int Density {
        get {
            return density;
        }
    }

    public float Gravity {
        get {
            return gravity;
        }
    }
}
To maintain immutability, all member variables should be readonly. Once assigned, they can no longer be changed. This is important because, as an enum, its internal values should not change. Each enum value is then implemented as a static readonly instance of the class.

How is this used? The same way as a normal enum, but there’s more:

// Use it like an enum
ship.TargetPlanet = Planet.NEPTUNE;

// Want to know the target planet's mass?
float mass = ship.TargetPlanet.Mass;

// Density?
int density = ship.TargetPlanet.Density;

We have eliminated the need for switch statements or dictionaries to maintain the different planets’ information. Want a new planet stat? Just add a new member variable and specify it on instantiation.
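For instance, adding an escape velocity stat is just one more readonly field, constructor parameter, and getter (the elided parts stay as in the class shown above; Mercury’s escape velocity is 4.3 km/s):

```csharp
public class Planet {
    public static readonly Planet MERCURY = new Planet(0, 0.330f, 4879, 5427, 3.7f, 4.3f);
    ... // The other planets get their own escape velocity value

    private readonly float escapeVelocity; // in km/s

    private Planet(int id, float mass, int diameter, int density, float gravity, float escapeVelocity) {
        ... // Assign the other fields as before
        this.escapeVelocity = escapeVelocity;
    }

    public float EscapeVelocity {
        get {
            return escapeVelocity;
        }
    }
}
```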

How about conversion from other data types, say from an int id to a Planet instance? That’s easy. I usually add a public static method for these conversions. For example:

public class Planet {

    // The different values
    public static readonly Planet MERCURY = new Planet(0, 0.330f, 4879, 5427, 3.7f);
    public static readonly Planet VENUS = new Planet(1, 4.87f, 12104, 5243, 8.9f);
    public static readonly Planet EARTH = new Planet(2, 5.97f, 12756, 5514, 9.8f);
    public static readonly Planet MARS = new Planet(3, 0.642f, 6792, 3933, 3.7f);
    public static readonly Planet JUPITER = new Planet(4, 1898.0f, 142984, 1326, 23.1f);
    public static readonly Planet SATURN = new Planet(5, 568.0f, 120536, 687, 9.0f);
    public static readonly Planet URANUS = new Planet(6, 86.8f, 51118, 1271, 8.7f);
    public static readonly Planet NEPTUNE = new Planet(7, 102.0f, 49528, 1638, 11.0f);
    public static readonly Planet PLUTO = new Planet(8, 0.0146f, 2370, 2095, 0.7f);

    // This can be used to loop through all planets
    public static readonly Planet[] ALL = new Planet[] {
        MERCURY, VENUS, EARTH, MARS, JUPITER, SATURN, URANUS, NEPTUNE, PLUTO
    };

    // Converts the specified id to a Planet instance
    public static Planet Convert(int id) {
        for(int i = 0; i < ALL.Length; ++i) {
            if(ALL[i].Id == id) {
                return ALL[i];
            }
        }

        // return ALL[id] could also work here but what if a non sequential id is used?
        throw new Exception("Cannot convert {0} to a Planet.".FormatWith(id));
    }

    ... // The rest of the class shown earlier
}

// Usage
Planet planet = Planet.Convert(someIntPlanet);

Want to convert from a string id? Add a string member variable (say, TextId) that holds this value. Instead of an array like ALL[], you can use a Dictionary like this:

private static Dictionary<string, Planet> ALL = new Dictionary<string, Planet>() {
    { MERCURY.TextId, MERCURY },
    { VENUS.TextId, VENUS },
    { EARTH.TextId, EARTH },
    ... // The rest of the planets
    { PLUTO.TextId, PLUTO },
};

// Converts the specified string to a Planet instance
public static Planet Convert(string id) {
    return ALL[id];
}

You can support any type of conversion that you like.

There’s so much more you can do. You can now add functions to the enum class and call them directly on a value. For example (IsHabitable() here is a hypothetical method you could add):

Planet currentPlanet = Planet.VENUS;
bool habitable = currentPlanet.IsHabitable(); // A hypothetical instance method

The coolest thing for me is that you can give different actions or behavior to the enum values. Something like this (it’s very contrived, but you get the idea):

public static readonly Planet EARTH = new Planet(2, 5.97f, 12756, 5514, 9.8f, delegate(Ship ship) {
    // Actions on landing of ship
});

public static readonly Planet MARS = new Planet(3, 0.642f, 6792, 3933, 3.7f, delegate(Ship ship) {
    // Actions on landing of ship
});
By simply turning your enum into a class, you’ve upgraded it into something more organized yet also more feature packed. You could also use advanced features like reflection and inheritance, but most of the time you don’t need to.

That’s it for now. Hope this helps.