Optimizing Game Loading Time: Sprite.CreateSprite() and Base64

The loading time of our game has been horrendous lately. I made an editor script where we just press Alt + Shift + Z to play the game in the Unity editor. My definition of loading time is the duration from pressing Alt + Shift + Z to the game actually being playable in the editor. The loading time scales with the save file being loaded: the bigger the school, the slower the load. I tested the starting map and even that takes around 40 seconds. That’s horrible!

Reducing this load time would be a substantial win. It will greatly improve productivity. When making games, we usually spend most of our time testing instead of writing code. Just imagine: you make a little change to your code, then you go to the editor. It compiles, which also takes time. Then you play the game, which takes at least another 40 seconds. When you test 10 edits, you lose 400 seconds (6.67 minutes). At 50 edits, that amounts to 33.33 minutes. How many edits do you make in a day?

I spent some time today trying to reduce this unproductive time. I was able to shave off 20 seconds.

Sprite.CreateSprite()

I don’t know why, but this method is so slow. The way we maintain our sprite assets might have something to do with it. We keep all our sprites in the StreamingAssets folder. At runtime, we load them and pack all of them into a single big atlas. We usually use Sprite.CreateSprite() for our dynamically loaded UI; some buttons request icons that live in the atlas. I’ve fixed this by using RawImage instead of Image. Using RawImage requires extra handling, but we already have the data that it needs. Just avoiding Sprite.CreateSprite() saved 10 seconds of loading time.

public void InitIconImage() {
    PackedTextureEntry packedEntry = PackedTextureManager.GetPackedTextureEntry(this.buildSelectorObject.SpriteId);
    Assertion.AssertNotNull(packedEntry, this.buildSelectorObject.SpriteId, this.gameObject);

    // selectorRawImage is a RawImage
    this.selectorRawImage.texture = packedEntry.Atlas;
    this.selectorRawImage.uvRect = packedEntry.UvRect;
}
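For contrast, here is a sketch of what the old Image-based path might have looked like (reconstructed for illustration; selectorImage and the pixel-rect math are assumptions, not the original code). Each icon request created a new Sprite from the atlas via Sprite.Create(), and that is the expensive call we are avoiding:

// Hypothetical old path: create a Sprite on demand from the packed atlas
PackedTextureEntry packedEntry = PackedTextureManager.GetPackedTextureEntry(this.buildSelectorObject.SpriteId);

// Convert the normalized UV rect into pixel coordinates
Rect pixelRect = new Rect(
    packedEntry.UvRect.x * packedEntry.Atlas.width,
    packedEntry.UvRect.y * packedEntry.Atlas.height,
    packedEntry.UvRect.width * packedEntry.Atlas.width,
    packedEntry.UvRect.height * packedEntry.Atlas.height);

// Sprite.Create() is the slow part; selectorImage here is a UnityEngine.UI.Image
this.selectorImage.sprite = Sprite.Create(packedEntry.Atlas, pixelRect, new Vector2(0.5f, 0.5f));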

Too many string to int conversions

We use XML in our save file for readability purposes. It’s good for us and for our players. Since this is a tile-based game, a two-dimensional vector of integers is ubiquitous. In fact, we had a class for this even before Unity released theirs. We have a data model which is just a list of these vectors. In XML, it looks like this:

<Positions>
    <Entry x="93" y="7" />
    <Entry x="93" y="8" />
    <Entry x="93" y="9" />
    <Entry x="93" y="10" />
    <Entry x="93" y="11" />
    ... <!-- More entries -->
</Positions>

On deserialization, we convert the x and y values from these XML attributes, which are strings. The more entries there are, the more conversions are needed, and the slower the loading time.

I did some research and I stumbled upon good old Base64. C# already has methods for converting byte arrays to Base64 strings and vice versa. So I thought of storing the coordinates in a byte array and using its Base64 string in the XML save file. This was easier than I thought.

private const string POSITIONS = "Positions";
private const string BASE64 = "Base64";
private const int INTEGER_SIZE = sizeof(int);

private readonly List<byte> bytes = new List<byte>();

private void WritePositions(XmlWriter writer, HashSet<IntVector2> positions) {
    writer.WriteStartElement(POSITIONS);

    // Write the positions as a Base64 string of their bytes
    this.bytes.Clear();
    this.bytes.Capacity = positions.Count * (INTEGER_SIZE * 2); // Multiply by two because there are two integers per position
    foreach (IntVector2 position in positions) {
        this.bytes.AddRange(BitConverter.GetBytes(position.x));
        this.bytes.AddRange(BitConverter.GetBytes(position.y));
    }

    string base64 = Convert.ToBase64String(this.bytes.ToArray());
    writer.WriteAttributeString(BASE64, base64);

    writer.WriteEndElement();
}

private void LoadPositions(SimpleXmlNode node, HashSet<IntVector2> positions) {
    SimpleXmlNode positionsNode = node.FindFirstNodeInChildren(POSITIONS);
    if(positionsNode != null) {
        byte[] bytes = Convert.FromBase64String(positionsNode.GetAttribute(BASE64));
        int positionCount = bytes.Length / (INTEGER_SIZE * 2);

        int index = 0;
        for (int i = 0; i < positionCount; ++i) {
            int x = BitConverter.ToInt32(bytes, index);
            index += INTEGER_SIZE;

            int y = BitConverter.ToInt32(bytes, index);
            index += INTEGER_SIZE;

            positions.Add(new IntVector2(x, y));
        }
    }
}
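For a sense of what this produces, the first two entries from the earlier XML sample, (93, 7) and (93, 8), serialize on a little-endian machine to an element like this:

<Positions Base64="XQAAAAcAAABdAAAACAAAAA==" />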

The classes BitConverter and Convert can be found in the System namespace. This shaved off another 10 seconds (or more if the data is big), which is huge! This method of saving a list of integer vectors is way faster than converting individual integer attributes.

Conclusion

I didn’t think I would be able to reduce loading time by at least 20 seconds with just a couple of changes. Shaving 5 seconds would have been fine by me. The work is far from over, though. I could still optimize other parts, but they are more complicated. It would take more effort and I might introduce bugs. I’m happy with my huge savings for now.


Reducing Compile Time in Unity Using Assembly Definition Files

Academia has gotten so big that its compile time has become horrendous, at around 60-100 seconds. I’ve done everything I could, like moving third party assets and our common framework to the Plugins folder. I’ve even moved the project to a solid state drive. This is why Unity’s new feature named Assembly Definition Files is music to my ears. You can create an asmdef file in any folder. Any C# code in that folder and its child folders will be compiled into its own dll. This means that the source files in the folder are skipped during recompilation if there are no changes in them or in any of their asmdef dependencies.

I’ve been using this feature for weeks now and I’m still not done; I’m still making asmdef files for the project’s many source files. It wasn’t as easy as I thought it would be. You can’t just create a definition file on a folder and expect everything to work. After lots of trial and error, I’ve developed some sort of a “process” for creating these files. I imagine making asmdef files like fighting zombies. The zombies here are the source files that have not been collected into an asmdef yet. By making asmdef files, you convert these zombies to fight for you. What I mean is that if you successfully make an asmdef file, you can use that file as a dependency of future asmdef files. As you make more of these asmdef files, it gets easier to convert more zombies to your side.

One at a time

When I first started using asmdef, I created these files all over the place whenever I could. When I saw a compile error, I made an asmdef on those files hoping that it would be fixed. Boy, was I wrong.

What worked for me is to create an asmdef file only after all compile errors are fixed. There are bound to be errors because you may not know the dependencies of the source files in the folder. Once you’ve “elevated” code into an asmdef, its references are expected to live in asmdefs as well. It’s good practice to create asmdef files starting from folders with zero dependencies and working toward ones with more.

Start with third party assets or frameworks

This is kind of obvious, as third party assets don’t have dependencies on your project (unless you hacked them and added a dependency on your code). These asmdef files shall be your army when you begin attacking your game code.

From generic/common code to specific code

When you start making asmdef files for your game code, identify the generic/common code that any of your game’s systems can access but that does not reference any such systems. Isolate this code into its own folder and create an asmdef for it. Add third party asmdef files as dependencies if necessary.
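For reference, an asmdef is just a small JSON file that you can also edit by hand. A common/generic assembly might look something like this (the assembly and reference names here are made up for illustration):

{
    "name": "Game.Common",
    "references": [
        "SomeThirdPartyAsset"
    ],
    "includePlatforms": [],
    "excludePlatforms": []
}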

From here, identify the next group of code with least dependencies and make the next asmdef on such folder. The existing asmdef files can be used as dependencies. Attack from least dependencies to ones with more.

Special case for editor scripts

You can create asmdef files for editor scripts. Just make sure you don’t mix non-editor scripts into the same asmdef. Also ensure that you alter its settings so that it’s only included in the Editor platform. This can be done by unticking Any Platform and ticking only Editor under Include Platforms.

EditorAsmdef
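On disk, that editor-only setting ends up as an Editor entry under includePlatforms in the asmdef’s JSON, something like this (names made up):

{
    "name": "Game.Editor",
    "references": [
        "Game.Common"
    ],
    "includePlatforms": [
        "Editor"
    ],
    "excludePlatforms": []
}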

Fast compilation? Not always

I have over 70 asmdef files now (more will come). Overkill? I don’t know. There are projects out there bigger than ours. There’s certainly an effect on compilation time, but it depends on the code. If the changed code is in a common/generic asmdef that is required by numerous other asmdef files, the compilation time is the same or even slower. Compilation is only faster when only specific code has changed. The more specific the code, the faster the compile time. Hopefully, we spend most of our time on such code. These are mostly new features and specific game systems. Generic code is supposed to be stable. It should not change much.

Side effects

One side effect of using asmdef that I like is enforced dependencies. Now you get compile errors when you try to reference code that was not declared as a dependency.

A side effect that I hate is slower Visual Studio loading time. Each asmdef is treated as a separate Visual Studio project. Whenever you move code to a different folder or create a new asmdef, the IDE reloads, which means it has to reload all of those 70+ Visual Studio projects that I have right now. Opening the IDE for the first time also does this. It’s annoying.

All Moving Sprites in One Draw Call!!?

I never thought it was possible, but yes, it is. I’ve tried it and used it in Academia, with the help of somebody else’s script of course. It’s not without drawbacks.

Optimizing multiple moving sprites is, I think, the hardest one to pull off. Even if they’re already sharing the same material, they still don’t batch consistently. When they do batch, there are so many of them that the process of dynamic batching actually takes more CPU cycles and bogs down the frame rate. Most of the time, the batched sprites are scattered all around.

CharacterBatch
Moving sprites are batched like this

While browsing the internet for ideas on how to optimize, I stumbled upon the great old Unify Community wiki. I hadn’t visited the site for a while and I saw that there were a lot of new shared scripts. So I went through each one and found this SpriteManager by Brady Wright (thank you so much!). It’s actually an old script. It claims that it can draw multiple moving objects in one draw call. A bold statement, I thought. I went ahead and tried a simple test. The usage is a bit rough, but I was able to roll with it. To my amazement, the script works. I spent the next two days integrating the gem into our game. Now, all the moving agents in the game are rendered in a single draw call.

OneDrawCall

I took a good look at the code to understand how it works. It works by maintaining a single mesh for all sprites that were requested from the manager. The manager transforms each vertex in LateUpdate() using the Transform of the owner of each sprite. It then reassigns these transformed vertices to the mesh thus updating the positions of the quads. The engine just renders this one single big mesh. It’s so simple and clever.
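To illustrate the idea, here is a minimal sketch of the concept (my own reconstruction, not Brady Wright’s actual SpriteManager; all names are made up): one mesh, one quad per registered Transform, and the vertices rewritten every LateUpdate().

using System.Collections.Generic;
using UnityEngine;

// Minimal single-mesh batcher: every registered Transform owns one quad inside one
// shared mesh, so everything renders in one draw call as long as the material is shared.
[RequireComponent(typeof(MeshFilter), typeof(MeshRenderer))]
public class SimpleSpriteBatch : MonoBehaviour {

    private readonly List<Transform> owners = new List<Transform>();

    private Mesh mesh;
    private Vector3[] vertices;

    // Corners of a unit quad in each owner's local space
    private static readonly Vector3[] QUAD = {
        new Vector3(-0.5f, -0.5f), new Vector3(-0.5f, 0.5f),
        new Vector3(0.5f, 0.5f), new Vector3(0.5f, -0.5f)
    };

    private void Awake() {
        this.mesh = new Mesh();
        this.mesh.MarkDynamic(); // hint that the vertices change every frame
        GetComponent<MeshFilter>().mesh = this.mesh;
    }

    public void Register(Transform owner) {
        this.owners.Add(owner);
        RebuildMesh();
    }

    // Rebuild the vertex, UV and index buffers whenever the set of sprites changes
    private void RebuildMesh() {
        int count = this.owners.Count;
        this.vertices = new Vector3[count * 4];
        Vector2[] uv = new Vector2[count * 4];
        int[] triangles = new int[count * 6];

        for (int i = 0; i < count; ++i) {
            // Full-texture UVs for simplicity; a real manager would use each sprite's atlas rect
            uv[i * 4] = new Vector2(0, 0);
            uv[(i * 4) + 1] = new Vector2(0, 1);
            uv[(i * 4) + 2] = new Vector2(1, 1);
            uv[(i * 4) + 3] = new Vector2(1, 0);

            triangles[i * 6] = i * 4;
            triangles[(i * 6) + 1] = (i * 4) + 1;
            triangles[(i * 6) + 2] = (i * 4) + 2;
            triangles[(i * 6) + 3] = i * 4;
            triangles[(i * 6) + 4] = (i * 4) + 2;
            triangles[(i * 6) + 5] = (i * 4) + 3;
        }

        this.mesh.Clear();
        this.mesh.vertices = this.vertices;
        this.mesh.uv = uv;
        this.mesh.triangles = triangles;
    }

    private void LateUpdate() {
        if (this.vertices == null) {
            return;
        }

        // Transform each owner's quad into world space and push the result back to the mesh
        for (int i = 0; i < this.owners.Count; ++i) {
            Matrix4x4 localToWorld = this.owners[i].localToWorldMatrix;
            for (int corner = 0; corner < 4; ++corner) {
                this.vertices[(i * 4) + corner] = localToWorld.MultiplyPoint3x4(QUAD[corner]);
            }
        }

        this.mesh.vertices = this.vertices; // the engine then renders this one big mesh
        this.mesh.RecalculateBounds();
    }
}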

An obvious drawback is that it doesn’t have culling. Objects that are outside the view of the camera are still rendered, and the transformation of vertices is still computed for sprites that should have been culled. This isn’t a real issue for us because most players play the game with the camera zoomed out most of the time. We expect players to play this way because one naturally wants to see how their school is doing. Culling would not contribute much.

Eventually of course, we’d probably hit a certain number of sprites where this system starts to bog down. We could always implement culling or try to optimize the code by ourselves when we hit that number. Until then, we’re just letting it be. It’s already good.

Static Batching for Dynamically Loaded Objects

From time to time, I try to optimize the rendering of Academia. I’ve already employed many techniques (here, here, and here) to promote dynamic batching and it seems like they’re not enough. My hypothesis is that dynamic batching hogs the CPU when there are more objects to batch (around 10k to 15k saved by batching). It’s time to look elsewhere.

So I took a closer look at static batching. Static batching, as I’ve read, is faster than dynamic batching, but at the cost of more memory. We’re still below our memory budget, so static batching is worth a try. It is also quite smart. I tried a simple scene that was statically batched, then deactivated an object. The object became invisible, of course, and I was genuinely impressed. I had assumed that the engine would create one new bigger mesh from all the combined meshes, which you could then no longer update per object. I’m glad I was wrong, since it means I don’t have to code something like remembering the vertices of each object and updating the UVs when an object is deactivated. I don’t have to change the existing game logic at all.

YuugeBatch
Majority of the objects in the game are non-moving

Static batching has been ticked in the Player Settings forever, but when I ran the frame debugger, I didn’t see any static batching at all, even though the majority of the objects in the game were already set to static. I’ve been using Unity since 2011 and there is still stuff that I don’t fully understand. I discovered that static batching does not work on dynamically loaded objects, which is how our game runs. It only works for objects that were baked into the scene and set to static. For dynamically loaded objects, you have to manually call StaticBatchingUtility.Combine().

Gotchas

When I implemented the manager that calls StaticBatchingUtility.Combine() strategically, the first issue I encountered was that you can’t update the UVs of the combined mesh. This means that if an object has UV manipulation, like sprite animations, it cannot be added to the meshes to be combined; the engine throws an error if you manipulate the UVs of a combined object. I had to filter which objects can be combined. I used the StaticBatchingUtility.Combine() overload that accepts an array of GameObjects to be combined.
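A stripped-down sketch of that manager could look like this (not our actual class; the names and the way objects register are made up for illustration):

using System.Collections.Generic;
using UnityEngine;

// Collects the objects that are allowed to be combined (they have a MeshFilter and
// their UVs are never animated), then combines them under one root after map loading.
public class StaticBatchCombiner : MonoBehaviour {

    private readonly List<GameObject> combinables = new List<GameObject>();

    // Called by objects after they are instantiated and positioned
    public void Add(GameObject objectWithMeshFilter) {
        this.combinables.Add(objectWithMeshFilter);
    }

    // Called once when all static objects are in place
    public void Combine() {
        // Pass the GameObjects that actually carry the MeshFilter, not their parents.
        // Note that combined objects can no longer be moved afterwards.
        StaticBatchingUtility.Combine(this.combinables.ToArray(), this.gameObject);
        this.combinables.Clear();
    }
}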

When I finished the static batching manager class, I thought everything would just work. When I ran the game: still zero static batches. This led to a lot more trial and error and so much headache that I almost gave up. When I was about to shut down, I read the documentation again and it said, "The GameObject in gos must have MeshFilter components attached for this to work." It was such a eureka moment. I usually set up my game elements with the meshes or sprites as child GameObjects. I was combining the parent objects instead of the child objects that contain the MeshFilter. Don’t make this mistake.

Conclusion

In the end, I finally made it work. It saved 1-2 ms of rendering time. It’s not much, but it proved that static batching is indeed faster. I’ve only implemented it in one part of the game so far. There are still other parts where I could apply it.

That’s it for now. If you have unique rendering optimization techniques, please do share them.

 

Script to Play the Main Scene from Anywhere

This is related to the multiple scene development that I’ve described before. One of the problems I had was that whenever I was working on a scene and had to test, I had to locate the Main scene, load it, then hit play. After testing, I had to load the scene that I was working on again. Sometimes I forgot which scene that was. As you can imagine, this is really frustrating and can eat up time.

Fortunately, with Unity you can make scripts that run in the editor. I made a script that loads the Main scene and plays it. When the game is stopped, the script loads the previous scene that I was working on. The script can be executed via a menu item or by pressing Alt + Shift + Z.

Here it is:

using UnityEditor;
using UnityEditor.SceneManagement;

[InitializeOnLoad]
public static class RunUtils {

    static RunUtils() {
        EditorApplication.playmodeStateChanged += LoadLastOpenedScene;
    }

    // pref IDs
    private const string LAST_OPENED_SCENE = "Game.LastOpenedScene";
    private const string PLAYED_USING_RUN_UTILS = "Game.PlayedUsingRunUtils";

    // bool states
    private static bool aboutToRun = false;

    [MenuItem("Game/Run Game #&z")]
    public static void Run() {
        SceneSetup[] setups = EditorSceneManager.GetSceneManagerSetup();
        if (setups.Length > 0) {
            EditorPrefs.SetString(LAST_OPENED_SCENE, setups[0].path);
        }

        EditorPrefs.SetBool(PLAYED_USING_RUN_UTILS, true);
        aboutToRun = true;

        // Refresh first to cause compilation and include new assets
        AssetDatabase.Refresh();

        EditorSceneManager.SaveCurrentModifiedScenesIfUserWantsTo();
        EditorSceneManager.OpenScene("Assets/Game/Scenes/Main.unity");
        EditorApplication.isPlaying = true;
    }

    private static void LoadLastOpenedScene() {
        if (EditorApplication.isPlaying || EditorApplication.isCompiling) {
            // changed to playing or compiling
            // no need to do anything
            return;
        }

        if (!EditorPrefs.GetBool(PLAYED_USING_RUN_UTILS)) {
            // this means that normal play mode might have been used
            return;
        }

        // We added this check because this method is still invoked while EditorApplication.isPlaying is false
        // We only load the last opened scene when the aboutToRun flag is "consumed"
        if (aboutToRun) {
            aboutToRun = false;
            return;
        }

        // at this point, the scene has stopped playing
        // so we load the last opened scene
        string lastScene = EditorPrefs.GetString(LAST_OPENED_SCENE);
        if (!string.IsNullOrEmpty(lastScene)) {
            EditorSceneManager.OpenScene(lastScene);
        }

        EditorPrefs.SetBool(PLAYED_USING_RUN_UTILS, false); // reset flag
    }

}

This is a fun little script. The key here is to register a delegate to the event EditorApplication.playmodeStateChanged, which is called whenever the editor’s play state changes (playing, paused, or stopped). By using EditorPrefs, we can save variables like LAST_OPENED_SCENE and PLAYED_USING_RUN_UTILS and access them whenever the delegate is invoked. I used EditorPrefs rather than normal variables because I can’t guarantee how the editor keeps variables; I’m not sure whether their values survive, say, a recompilation.

Note also that I call AssetDatabase.Refresh() before running the scene. This is because I prefer to turn off auto refresh (you can do this in Preferences); it’s annoying when the editor keeps compiling once your game gets big. But without calling AssetDatabase.Refresh(), I might forget to refresh the editor before running the game. Doing it this way ensures recompilation whenever I run the game and there is updated code.

Enjoy!

My Thresholds for Refactoring

I read a book called Code Complete about a decade ago. The mantra of that book is “the goal of software is to manage complexity.” That phrase is repeated throughout the book and it has stuck with me ever since. The idea is that since we make software to manage complexity, we should strive to manage the complexity of our code as well. The book promotes maintainability as the primary metric to work on when making software. If your code is maintainable, other software metrics like robustness, correctness, and efficiency will follow. I don’t want to elaborate on how and why this is; the book covers it in detail. In my years of software development, I have found it to be true.

This maintainability-first mindset has also served me well since I became a game programmer. Video games are in a somewhat peculiar position because you’re not managing real-world complexity. Rather, the complexity is artificial, made entirely from imagination. It comes in the form of game design, which includes the graphical representation and the game mechanics. It’s still complexity nonetheless, and oftentimes more complicated, too. Over the years of programming, I have developed a brain personality that starts nagging me when I reach certain thresholds in my code. I don’t want to write about how to make your code maintainable per se. Instead, I want to share some common thresholds in code that I keep in mind and that result in refactoring. Doing this frequently, and sometimes unconsciously, leads to maintainable code. In programming lingo, these are also referred to as “code smells”. There are lots of those, but I’ll only enumerate the ones that I commonly encounter, and thus also commonly refactor.

Cyclomatic Complexity

Cyclomatic complexity is simply the total number of control flow branches within a function, method or routine, plus one. Control flow branches are caused by if, while, for, foreach, do-while, catch clauses (including finally), and each case in a switch statement. Each AND (&&) and OR (||) operator is also included in the count. I consider anything that breaks the linear flow of statements a branch. Thinking about cyclomatic complexity like this makes it transferable to any programming language that may have special operators or constructs. My threshold is 10. This means that when I see a function with a cyclomatic complexity of more than 10, that function needs to be refactored. This can usually be fixed by introducing a new function and moving some of the control flow branching from the original function to the new one.

For example, the following method has a cyclomatic complexity of 4 (remember plus one) and is safe from being ripped apart:

public static Transform FindTransformByName(Transform transformRoot, string name) {
    if (name.Equals(transformRoot.name)) {
        return transformRoot;
    }

    // search in children
    Transform searchResult = null;
    foreach (Transform childTransform in transformRoot) {
        searchResult = FindTransformByName(childTransform, name);
        if (searchResult != null) {
           return searchResult;
        }
    }

    return null;
}

Of course, there are exceptions. If there’s a switch statement with 10 or more case branches and there’s no other logical way to write such a function, I’ll probably just leave it be. But this rarely happens. There are many ways to restructure code to prevent such a pattern.

Parameter Count

This is self explanatory. It’s the number of parameters that a function/method/routine has. My threshold for this is 7. Seven parameters is already a stretch to me and I itch to refactor such a function. More than 7 is really a no-no.

There are many ways to fix this. Introducing a new function and moving some parameters there might do the trick. A set of parameters that logically goes together might be bundled into a struct or another class. Functions with this amount of parameters are probably big. Turning the whole function into a separate class where the parameters become member variables could be viable. This way you can further refactor the big function to several smaller functions all contained in the new class.
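As a contrived sketch (all names here are made up), bundling related parameters into a struct might look like this:

// Related values travel together instead of being separate parameters
public struct SpawnSettings {
    public Vector3 Position;
    public Quaternion Rotation;
    public int Level;
    public float Health;
    public float Speed;
}

// Before: SpawnCharacter(prefab, position, rotation, level, health, speed, faction, name)
// After: only three parameters remain
public Character SpawnCharacter(GameObject prefab, SpawnSettings settings, string name) {
    ...
}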

Nesting Depth

Nesting is when you indent a block of code due to a control flow construct like if, for, or while. For example, the following code has a nesting depth of 3:

for(...) {
    if(...) {
        if(...) {
        }
    }
}

Deep nesting is bad because it makes code hard to read, thus making it less maintainable. My threshold is 3. Three is already an eyesore to me. Any nesting I see beyond that is refactored away into another function.
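Using the same contrived structure as above, the fix is usually to move the innermost block into its own function (still pseudocode; the function name is made up):

for(...) {
    if(...) {
        HandleSpecialCase(...);
    }
}

void HandleSpecialCase(...) {
    if(...) {
    }
}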

Inheritance Depth

I’m glad that modern programming is moving away from inheritance. Game engines like Unity use components as building blocks instead of inheritance. This means inheritance is no longer the de facto way of making a game, unlike in the old days. I won’t elaborate on why inheritance makes software less maintainable; you can search for that. Nonetheless, inheritance is still a tool that can be used properly. I do use it, albeit sparingly.

Inheritance depth is the number of ancestors in an inheritance chain. In the following example, GreatGrandChild has an inheritance depth of 3:

class Guy { ... }
class Child : Guy { ... }
class GrandChild : Child { ... }
class GreatGrandChild : GrandChild { ... }

My threshold for inheritance depth is 3. When it’s reached, I become very careful about deriving another class in that hierarchy. An inheritance depth of more than 3 means that I may have to restructure my code. Sometimes there’s no better way, and I do let it go if adding another child class is the quickest and most logical way to do something. But inside me, the alarm bells are already loud when that happens. The key to managing inheritance is to not use it frequently and to keep the structure as flat as possible, ideally an inheritance depth of 1.

Callback Count

I want to be clear that I’m not talking about callback chain count here. That’s different. If you have a JavaScript or web background, you know what I’m talking about. Callback chaining is barely used in game programming, at least by me, because I avoid it. Callback count is simply the number of callbacks that you use in a class. If you have an exposed member variable that is an Action<> or Func<> that can be assigned from outside the class, that’s probably a callback.

What usually happens is I introduce one callback. Then later on I introduce another one. Then much later, I add a new one. Then I see this and mentally decide that it’s ugly.

My threshold for callback count is 2. Three is just too much for me. At that point, I’ll refactor that into an interface and manage instances of the interface instead. Maintaining separate callbacks becomes unwieldy and clunky when there are more of them. For example, say you have this class that has a callback count of 4:

class RouteAssignmentManager {
    private List<Action<RouteAssignment>> assignmentAddedActions = …;
    private List<Action<RouteAssignment>> assignmentRemovedActions = …;
    private List<Action<RouteAssignment>> assignmentIncreasedActions = …;
    private List<Action<RouteAssignment>> assignmentDecreasedActions = …;

    public void AddAssignmentAddedAction(Action<RouteAssignment> action) {
        ...
    }

    public void AddAssignmentRemovedAction(Action<RouteAssignment> action) {
        ...
    }

    public void AddAssignmentIncreasedAction(Action<RouteAssignment> action) {
        ...
    }

    public void AddAssignmentDecreasedAction(Action<RouteAssignment> action) {
        ...
    }
}

Whenever you want to add one of these callbacks, you may want to add the others as well because they are logically related. If you forget to implement, say, the removed-action counterpart of an added action, you might introduce bugs.

In this case, it’s better to turn it into an interface and manage only a single list of instances, instead:

interface RouteAssignmentManagerObserver {
    void OnAssignmentAdded(RouteAssignment assignment);
    void OnAssignmentRemoved(RouteAssignment assignment);
    void OnAssignmentIncreased(RouteAssignment assignment);
    void OnAssignmentDecreased(RouteAssignment assignment);
}

class RouteAssignmentManager {
    private List<RouteAssignmentManagerObserver> observers = …;

    public void AddObserver(RouteAssignmentManagerObserver observer) {
        ...
    }
}

This is better because every time you want to listen to changes in RouteAssignmentManager, you are ensured not to forget the other related methods. If you want to broadcast a new kind of change, you can simply add a method to the interface. This will break the implementing classes and force you to look at them to see whether they need to handle the new change notification. You also reduce the number of variables that you need to manage. Now it’s just one list.

There are exceptions of course. If the callbacks are totally unrelated, sometimes it’s not logical to introduce an interface to combine them. But still, I would try to limit callbacks to 2.

Duplicate Code

When I’m writing a function, sometimes I need a few lines of code that I wrote a while ago. I’ll then copy and paste them into the function I’m writing. I just introduced duplicate code. This first copy is usually okay. Another copy, however, is no longer good; it means I now have the same code in three different places. That is my threshold for duplicate code: three strikes, and to a new function you go. For snippets with more than 5 lines of code, 2 copies should already send them to a function.

Lines of Code

I’m referring to the lines of code of a single class here. I usually write just one class per file. My threshold is 500 lines of code. I don’t refactor right away if I see a class with this many lines. Rather, it signals to me that it’s probably no longer appropriate to add more logic or maintenance code to the class. Small functions like getters and setters are still okay to add. Small functions that are totally related to the class variables are also fine. 500 lines of code tells me that new features should probably be coded in a new class.

Conclusion

I’m not saying that these thresholds are optimal. They are just my preference and have become a habit. You can decide on your own thresholds. For example, if you can handle a cyclomatic complexity of 20, good for you, because I can’t.

Lastly, if you want a bigger collection of code smells and how to refactor them, there’s also a book for that. I’m not advertising these books and I’m not related to the authors in any way. I’m just saying that they’re good books and they are helpful.

How about you? What are your thresholds?

Simple Query System

While making games using multiple scene development, I often find the need to get some value or object that is provided by a system in another scene. Like the signal system, I wanted a similar system where I could query values without knowing the type of the provider.

When I posted my signal system on Reddit, someone pointed me to this blog post which describes a similar system that solves the same problem. I have to admit, I think it’s the better way. What I liked about it is that the events are type safe, down to their parameters. Refactoring with such a system would indeed be a lot easier.

I thought about this and realized that I could make a query system with the same goal in mind – type safety. I did make one and I think I’ll be using it in our game from now on. If you’re going to use this code, I have to warn you that it’s not as battle tested as the older query system that I’ve been using. So use it with caution.

Usage

I’ll start with how it would be used. There are three entities that interact with the system. These are the query request, query requester and the query provider. The requester and provider can be in the same scene or in different scenes. The query request is nothing more than a class that holds the parameters of the request. This is a sample request:

public class TestRequest : QueryRequest {

    private readonly int intParam;
    private readonly string stringParam;

    public TestRequest(int intParam, string stringParam) {
        this.intParam = intParam;
        this.stringParam = stringParam;
    }

    public int IntParam {
        get {
            return intParam;
        }
    }

    public string StringParam {
        get {
            return stringParam;
        }
    }

}

TestRequest here is an immutable class with two parameters. An actual game query request class may have more parameters.

Registration of the provider looks like this:

public class QueryManagerTestProvider : MonoBehaviour {

    private void Awake() {
        QueryManager.RegisterProvider<TestRequest, GameObject>(TestProvider);
    }

    private GameObject TestProvider(TestRequest request) {
        // Log the parameters just to show that they are passed
        Debug.Log("intParam: " + request.IntParam);
        Debug.Log("stringParam: " + request.StringParam);

        return this.gameObject;
    }

}

Providers are simply delegates so that they can be written easily. A single MonoBehaviour or class may register more than one provider.

A sample query request will look like this:

public class QueryManagerTestRequester : MonoBehaviour {

    private void Start() {
        GameObject result = QueryManager.Query<TestRequest, GameObject>(new TestRequest(77, "Hello Query Manager"));
        Debug.Log("result: " + result.gameObject.name);
    }

}

Putting them together will output this result:

QueryManagerResult

In my test, QueryManagerTestProvider and QueryManagerTestRequester are placed in different scenes so that I could verify that it works with such a setup. Essentially, what I have done here is retrieve a GameObject from a different scene.

Framework Code

Let’s start with the base class QueryRequest:

public abstract class QueryRequest {
}

Yeah, that’s it. It’s just used as a common base type for all request classes. You can see its usage later. Why is it not an interface? The intent is for the user to make separate, lightweight request classes. This prevents big classes or MonoBehaviour classes from also acting as query requests.

Next is the QueryManagerImplementation class, which is used internally by the static class QueryManager. It looks like this:

class QueryManagerImplementation {

    private delegate object QueryProvider(QueryRequest request); // The internal delegate that we manage

    private Dictionary<Type, QueryProvider> providerMap = new Dictionary<Type, QueryProvider>();

    public QueryManagerImplementation() {
    }

    public void RegisterProvider<R, V>(QueryManager.QueryProvider<R, V> provider) where R : QueryRequest {
        Type type = typeof(R);
        Assertion.Assert(!this.providerMap.ContainsKey(type)); // Should not contain the provider for a certain request yet

        // Make the internal delegate which invokes the generic delegate
        QueryProvider internalProvider = delegate (QueryRequest request) {
            return provider((R)request);
        };
        this.providerMap[type] = internalProvider;
    }

    public bool HasProvider<R>() where R : QueryRequest {
        return this.providerMap.ContainsKey(typeof(R));
    }

    public V Query<R, V>(R request) where R : QueryRequest {
        Type type = typeof(R);

        // Invoke the provider
        // This will throw an error if a provider does not exist
        return (V)this.providerMap[type](request);
    }

}

Providers are simply maintained in a Dictionary where the key is the type of the request. The methods are self explanatory. The generic parameter R refers to the request type and V refers to the type of the result value. Notice how QueryRequest is used here as a constraint on R, which limits what class types can be passed.

Finally, the static QueryManager class looks like this:

public static class QueryManager {

    public delegate V QueryProvider<R, V>(R request) where R : QueryRequest;

    private static readonly QueryManagerImplementation INTERNAL_MANAGER = new QueryManagerImplementation();

    public static void RegisterProvider<R, V>(QueryProvider<R, V> provider) where R : QueryRequest {
        INTERNAL_MANAGER.RegisterProvider(provider);
    }

    public static bool HasProvider<R>() where R : QueryRequest {
        return INTERNAL_MANAGER.HasProvider<R>();
    }

    public static V Query<R, V>(R request) where R : QueryRequest {
        return INTERNAL_MANAGER.Query<R, V>(request);
    }

}

Caveats

Like Signals, querying is slower compared to just calling the provider method directly. Be careful not to use it inside Update() or in places where it is invoked repeatedly, like in loops. Cache the results if you can.

Another disadvantage of this system is garbage. Every time you want to query for a value, you may instantiate a request object, especially if the request class is immutable. This can be mitigated by making the request class mutable, maintaining only one instance of it, and reusing that instance to make queries. Another way is to use the Factory design pattern for request instances.
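A sketch of that reuse idea (the TileQuery request and its fields are made up for illustration): keep one mutable request instance around and just refresh its values before each query.

using UnityEngine;

// A mutable request that is reused to avoid allocating garbage per query
public class TileQuery : QueryRequest {
    public int X { get; set; }
    public int Y { get; set; }
}

public static class TileQueries {

    private static readonly TileQuery REUSED_REQUEST = new TileQuery();

    public static GameObject GetTileObject(int x, int y) {
        REUSED_REQUEST.X = x;
        REUSED_REQUEST.Y = y;
        return QueryManager.Query<TileQuery, GameObject>(REUSED_REQUEST);
    }
}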

This is it for now. See you next time.

Simple Signal System

In the last post about multiple scene development, I briefly described a signal system that we use for inter-scene communication. This post will be about that system in full detail. I don’t claim that this is the best method. It works for us at least, but it can definitely be improved.

For objects in different scenes to communicate, I wanted something where an object doesn’t need a reference to the object in the other scene. As much as possible, I don’t want objects to hold direct references to objects in another scene (it can’t be completely avoided). The rationale is that I wanted to avoid missing references when we skip loading certain scenes at certain times. The game should still work even when some scenes don’t exist. I also wanted it to be simple and straightforward to use. This system will also be exposed for the modding support that we’re going to add later on.

Usage Sample

The gist of the system is that you have Signal instances which you can dispatch. Other parts of the game can add listeners to a signal. When a signal is dispatched, the listeners are executed. Parameters can also be added to a signal, which the listeners can then query. Think of it as an observer pattern without a particular subject.

In our games, we maintain a static class where all game signals are collected:

public static class GameSignals {

    public static readonly Signal CLOSE_ALL_PANELS = new Signal("CloseAllPanels");

    public static readonly Signal HOUR_CHANGED = new Signal("HourChanged");
    public static readonly Signal DAY_CHANGED = new Signal("DayChanged");

    public static readonly Signal REQUEST_SCENE_TRANSITION = new Signal("RequestSceneTransition");

    ... // The rest of the other signals
}

Any component in any scene can add listeners to any of these signals. For example:

public class CashflowManager : MonoBehaviour {

    void Awake() {
        GameSignals.DAY_CHANGED.AddListener(ProcessCashflow);
    }

    void OnDestroy() {
        GameSignals.DAY_CHANGED.RemoveListener(ProcessCashflow);
    }

    private void ProcessCashflow(ISignalParameters parameters) {
        // Detailed cashflow processing implementation here
    }

}

During gameplay, one of the systems in the game (could be in any scene) will then dispatch the signal:

public class DayTimer : MonoBehaviour {
    void Update() {
        ... // Some code

        if(this.hour == 24) {
            // New day
            GameSignals.DAY_CHANGED.Dispatch();
        }

        ... // Some code
    }
}

Some signals require parameters. Dispatching signals with parameters looks like this:

Signal signal = GameSignals.ADD_FUNDS;
signal.ClearParameters();
signal.AddParameter(Params.VALUE, dailyCashflow);
signal.AddParameter(Params.ITEM, "CashFlow");
signal.Dispatch();
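On the listening side, the listener reads those values back from the ISignalParameters using the same keys (OnAddFunds here is just an illustrative listener):

private void OnAddFunds(ISignalParameters parameters) {
    // Use the same keys that the dispatcher used when adding the parameters
    int value = (int)parameters.GetParameter(Params.VALUE);
    string item = (string)parameters.GetParameter(Params.ITEM);
    Debug.Log("Added " + value + " funds from " + item);
}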

Implementation

Let’s start with parameters. Support for parameters is a must. Most of the time they’re not used, but you will definitely need them at times. Parameters are stored as string and object pairs. The following is the interface for adding and getting parameters:

public interface ISignalParameters {

    void AddParameter(string key, object value);
    object GetParameter(string key);
    bool HasParameter(string key);

}

An implementation of this interface looks like this:

class ConcreteSignalParameters : ISignalParameters {

    private Stack<Dictionary<string, object>> parameterStack = new Stack<Dictionary<string, object>>();

    public ConcreteSignalParameters() {
    }

    public void AddParameter(string key, object value) {
        this.parameterStack.Peek()[key] = value;
    }
		
    public object GetParameter(string key) {
        return this.parameterStack.Peek()[key];
    }

    public bool HasParameter(string key) {
        return this.parameterStack.Peek().ContainsKey(key);
    }

    public void PushParameters() {
        this.parameterStack.Push(NewParameterMap());
    }

    public void PopParameters() {
        POOL.Recycle(this.parameterStack.Peek());
        this.parameterStack.Pop();
    }

    public bool HasParameters {
        get {
            return this.parameterStack.Count > 0;
        }
    }

    // Pool of parameter dictionaries
    private static readonly Pool<Dictionary<string, object>> POOL = new Pool<Dictionary<string, object>>();

    private static Dictionary<string, object> NewParameterMap() {
        Dictionary<string, object> newInstance = POOL.Request();
        newInstance.Clear();
        return newInstance;
    }

}

Parameters are stored in a Dictionary. Notice that we keep a pool of instances of these and there’s a stack manipulation involved. I’ll explain later.

The Signal class looks like this:

public class Signal {

    private readonly string name;
    private ConcreteSignalParameters parameters;

    public delegate void SignalListener(ISignalParameters parameters);
    private List<SignalListener> listenerList = new List<SignalListener>();

    public Signal(string name) {
        this.name = name;
        this.listenerList = new List<SignalListener>();
    }

    public void ClearParameters() {
        // Lazy initialize because most signals don't have parameters
        if (this.parameters == null) {
            this.parameters = new ConcreteSignalParameters();
        }

        this.parameters.PushParameters();
    }

    public void AddParameter(string key, object value) {
        // This will throw an error if ClearParameters() is not invoked prior to calling this method
        this.parameters.AddParameter(key, value);
    }

    public void AddListener(SignalListener listener) {
        this.listenerList.Add(listener);
    }

    public void RemoveListener(SignalListener listener) {
        this.listenerList.Remove(listener);
    }

    public void Dispatch() {
        try {
            if (this.listenerList.Count == 0) {
                Debug.LogWarning("There are no listeners to the signal: " + this.name);
            }

            for (int i = 0; i < this.listenerList.Count; ++i) {
                // invoke the listeners
                this.listenerList[i](this.parameters); // note that the parameters passed may be null if there was none specified
            }
        } finally {
            // Pop parameters for every Dispatch
            // We check if there was indeed parameters because there are signals that are dispatched without parameters
            if (this.parameters != null && this.parameters.HasParameters) {
                this.parameters.PopParameters();
            }
        }
    }

    public string Name {
        get {
            return name;
        }
    }

}

The Signal class keeps track of parameters and the list of listeners. Listeners are simply implemented as delegates that accept ISignalParameters. I specifically chose delegates so that listeners are easier to write. One could just write a method and that can be used already. A component may also listen to multiple signals. The listeners could just be different methods within the component. If it were implemented as an interface, writing listeners would be more tedious.

The Dispatch() method simply invokes all listeners then pops the parameters.

Why the usage of stack in ConcreteSignalParameters?

The old implementation of this class maintained only one parameter dictionary. Unfortunately, this caused a bug when a listener of a signal dispatched the same signal again with a new set of parameters: the remaining listeners would then use the new parameters instead of the original ones. We needed a way to preserve the current parameters before dispatching the same signal, thus the stack you see here. The assumption is that Signal.ClearParameters() should always be invoked prior to Signal.AddParameter(). A new parameter dictionary is pushed whenever ClearParameters() is called, and a parameter dictionary is popped in every Dispatch(). This is also the reason why we use an interface when passing parameters to listeners: listeners need not know about the stack manipulation of parameters.

When to use?

This system is obviously less efficient than calling a method directly. We only use it for things that are not required to run every frame. This can be mitigated by moving objects to their proper scene. For example, if an object requires a reference from another scene in its Update(), maybe that object should belong to that scene. If that is not possible, then we may indeed need a reference to the object from another scene. One can use GameObject.Find(), but I try to avoid that approach. We use another system for querying objects/values, which will be a topic for another day.

Multi Scene Development in Unity

Modern image processing software like Photoshop has this concept called “layers”, in which the user can compartmentalize different parts of the whole image. Scenes in Unity can also be thought of as layers. Each can be developed separately, but they can be combined to form the whole game. Over my years of using Unity, I have been using this concept to develop my games.

Engine design wise, I think “scene” is the wrong term, because a scene denotes something visual, something you can see, something tangible. Intuitively, a user would think that a scene is for visible objects and their physics settings. Essentially though, a scene is a container of GameObjects. In this sense, a scene could contain only non-visible stuff like game managers and data managers. A scene could contain only cameras, or only UI. What this means is that you can make separate scenes for your game world, game logic, cameras, and various UI panels. When the game runs, you can additively load these scenes together, and collectively they make up the whole game.

The Basics

Let’s say you have two scenes – “GameWorld” and “GameHud”. It’s a good idea to make another scene which is responsible for loading the other scenes additively. Usually this is called the “Main” or “Startup” scene. Whenever you want to run the game, you play this Main scene instead. This scene has a component that looks like this:

using UnityEngine.SceneManagement;
... // Other using statements

class Main : MonoBehaviour {
    void Awake() {
        LoadSceneAdditively("GameWorld");
        LoadSceneAdditively("GameHud");
    }

    private static void LoadSceneAdditively(string sceneName) {
        SceneManager.LoadScene(sceneName, LoadSceneMode.Additive);
    }
}

Why do it this way?

I have three main reasons – better organization, better work delegation, and improved maintainability.

Better Organization

For example, I always make a separate scene for the system that handles audio. I have a separate scene for the system that handles persistence (saving and loading). Static game data management is contained in its own scene. I can manage the different domains of game elements in their respective scenes. In Academia, for instance, the handling of each character type is its own scene.

Sometimes it’s reasonable to put the usage of third party software in a separate scene. If the game is level based (Level 1, Level 2, etc.), each level can be a separate scene. Each UI panel or screen can be its own scene. A game feature can be its own scene. This opens up many ways to organize your game.

Better Work Delegation

Now that the game is structured into multiple scenes, it becomes easier to assign work to teammates. You can assign someone to work on particular scenes or tell them to add a new scene (if it’s a new feature). An obvious example is to distribute work among different UI panels or screens. Doing it this way also reduces collaboration conflicts. You can design your development such that artists and designers edit only certain scenes while programmers work on their logic-only scenes. It’s also easier to know whose fault it is when the game breaks. Just ask who owns the scene.

Improves Maintainability

Maintainability is very important to me. It’s the one thing a software project needs in order to be shipped. By separating domains into different scenes, the areas of what to fix or update become clearer. A new feature now means a new scene. Scenes that work well can be forgotten, which means more space for your mind to focus on more important things. You can also think of scenes as another way to refactor your game. You can split up a big scene or merge smaller scenes.

SomeScenes
Just some of our scenes

How do you render different scenes into one display?

You use multiple cameras. You can check this awesome post. This is the reason why I maintain cameras in a separate scene. In every game, I maintain at least two cameras, one for the game world, the other for UI.

How do scenes communicate?

There are lots of ways to do this. Personally, I prefer something that does not require a hard reference to some class or component. We use a signal system: something signals, and interested parties then do something. Kind of like a glorified observer pattern. We also use a query system. It’s like the signal system, but it returns a value. There are times when code in one scene requires a value from another scene. We use this query system to get that value without requiring a reference to the provider. I’d like to delve into details, but this post is getting long. So maybe in another post.

Taking it Further

Instead of hardcoding the scenes to load as I showed in the Basics part, you can make something like an XML or JSON config for this. Every time there’s a new scene to load, you just add that scene to the file.
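A minimal sketch of that idea (the file name, its location under Resources, and the one-scene-per-line format are all assumptions, not our actual setup):

using UnityEngine;
using UnityEngine.SceneManagement;

class Main : MonoBehaviour {
    void Awake() {
        // Assets/Resources/ScenesToLoad.txt lists one scene name per line, e.g. "GameWorld"
        TextAsset config = Resources.Load<TextAsset>("ScenesToLoad");
        foreach (string line in config.text.Split('\n')) {
            string sceneName = line.Trim();
            if (sceneName.Length > 0) {
                SceneManager.LoadScene(sceneName, LoadSceneMode.Additive);
            }
        }
    }
}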

In our system, I designed something called “Load Profiles”. Basically, we maintain different profiles of scenes to load. For example, there’s a profile for loading the complete game, and another profile that loads the game directly into gameplay, which is used for testing. This can also be used to load different scenes for different platforms.

You can design different patterns on how you load your scenes depending on your needs. This is it for now. See you next time.

 

Better C# Enums

I hate C#’s enum type. I avoid it if I can. Consider this example of an enum of planets:

public enum Planet {
    MERCURY,
    VENUS,
    EARTH,
    MARS,
    JUPITER,
    SATURN,
    URANUS,
    NEPTUNE,
    PLUTO // Pluto is a planet!!!
}

This is good enough to use for a while until a need arises to get the mass of a planet. So we do something like this:

// Returns the mass of the planet in 10^24 kg
public float GetMass(Planet planet) {
    switch(planet) {
        case Planet.MERCURY:
            return 0.330f;

        case Planet.VENUS:
            return 4.87f;

        case Planet.EARTH:
            return 5.97f;

        ...

        case Planet.PLUTO:
            return 0.0146f;
    }
}

How about planet diameter? Another switch statement? What about density? Gravity? Escape velocity? Just think about the number of switch statements that you’re going to maintain. You could argue for using a Dictionary instead, but that is still clunky. A Dictionary mapping per piece of data? No way.

There is a better way and I’ll show you how. This may already be common knowledge among non-Unity programmers, but I’d like to bring up this well-worn topic again on my blog for those who may not know it, especially beginners. I would also like to keep it simple. There will be no need for reflection in my examples. There’s also no inheritance.

Basically, you can use a class as an enum. Why a class? It’s just better. You can store any amount of arbitrary data. You can even store a routine or a function. You can do a lot with it. The only requirement is that it should be immutable, which means that the state of an instance of the class cannot change throughout the duration of the program. Here’s a version of the Planet enum as a class:

    public class Planet {
        // The different values
        public static readonly Planet MERCURY = new Planet(0, 0.330f, 4879, 5427, 3.7f);
        public static readonly Planet VENUS = new Planet(1, 4.87f, 12104, 5243, 8.9f);
        public static readonly Planet EARTH = new Planet(2, 5.97f, 12756, 5514, 9.8f);
        public static readonly Planet MARS = new Planet(3, 0.642f, 6792, 3933, 3.7f);
        public static readonly Planet JUPITER = new Planet(4, 1898.0f, 142984, 1326, 23.1f);
        public static readonly Planet SATURN = new Planet(5, 568.0f, 120536, 687, 9.0f);
        public static readonly Planet URANUS = new Planet(6, 86.8f, 51118, 1271, 8.7f);
        public static readonly Planet NEPTUNE = new Planet(7, 102.0f, 49528, 1638, 11.0f);
        public static readonly Planet PLUTO = new Planet(8, 0.0146f, 2370, 2095, 0.7f);

        // Use readonly to maintain immutability
        private readonly int id;
        private readonly float mass; // in 10^24 kg
        private readonly int diameter; // in km
        private readonly int density; // in kg/m^3
        private readonly float gravity; // in m/s^2

        // We use a private constructor because this should not be instantiated
        // anywhere else.
        private Planet(int id, float mass, int diameter, int density, float gravity) {
            this.id = id;
            this.mass = mass;
            this.diameter = diameter;
            this.density = density;
            this.gravity = gravity;
        }

        public int Id {
            get {
                return id;
            }
        }

        public float Mass {
            get {
                return mass;
            }
        }

        public int Diameter {
            get {
                return diameter;
            }
        }

        public int Density {
            get {
                return density;
            }
        }

        public float Gravity {
            get {
                return gravity;
            }
        }
    }

To maintain immutability, all member variables should be readonly. Once they are assigned, they can no longer be changed. This is important because, as an enum, its internal values should not change. Each enum value is then implemented as a static readonly instance of the class.

How is this used? It’s the same as a normal enum, but there’s more:

// Use it like an enum
ship.TargetPlanet = Planet.NEPTUNE;

// Want to know the target planet's mass?
float mass = ship.TargetPlanet.Mass;

// Density?
int density = ship.TargetPlanet.Density;

We have eliminated the need for switch statements or dictionaries to maintain the different planets’ information. Want a new planet stat? Just add a new member variable and specify it on instantiation.

How about conversion from other data types, like, say, converting from an int id to a Planet instance? This is easy. Usually I add a public static method for these conversions. For example:

public class Planet {

    // The different values
    public static readonly Planet MERCURY = new Planet(0, 0.330f, 4879, 5427, 3.7f);
    public static readonly Planet VENUS = new Planet(1, 4.87f, 12104, 5243, 8.9f);
    public static readonly Planet EARTH = new Planet(2, 5.97f, 12756, 5514, 9.8f);
    public static readonly Planet MARS = new Planet(3, 0.642f, 6792, 3933, 3.7f);
    public static readonly Planet JUPITER = new Planet(4, 1898.0f, 142984, 1326, 23.1f);
    public static readonly Planet SATURN = new Planet(5, 568.0f, 120536, 687, 9.0f);
    public static readonly Planet URANUS = new Planet(6, 86.8f, 51118, 1271, 8.7f);
    public static readonly Planet NEPTUNE = new Planet(7, 102.0f, 49528, 1638, 11.0f);
    public static readonly Planet PLUTO = new Planet(8, 0.0146f, 2370, 2095, 0.7f);

    // This can be used to loop through all planets
    public static Planet[] ALL = new Planet[] {
        MERCURY, VENUS, EARTH, MARS, JUPITER, SATURN, URANUS, NEPTUNE, PLUTO
    };

    // Converts the specified id to a Planet instance
    public static Planet Convert(int id) {
        for(int i = 0; i < ALL.Length; ++i) {
            if(ALL[i].Id == id) {
                return ALL[i];
            }
        }

        // return ALL[id] could also work here but what if a non sequential id is used?

        throw new Exception("Cannot convert {0} to a Planet.".FormatWith(id));
    }

    ...
}

// Usage
Planet planet = Planet.Convert(someIntPlanet);

Want to convert from a string id? Add a string member variable that will hold this value. Instead of using an array such as ALL[], you can use a Dictionary like this:

private static Dictionary<string, Planet> ALL = new Dictionary<string, Planet>() {
    { MERCURY.TextId, MERCURY },
    { VENUS.TextId, VENUS },
    { EARTH.TextId, EARTH },
    ...
    { PLUTO.TextId, PLUTO },
};

// Converts the specified string to a Planet instance
public static Planet Convert(string id) {
    return ALL[id];
}

You can support any type of conversion that you like.

There’s so much more you can do. You can now add functions. You can do something like this:

Planet currentPlanet = Planet.VENUS;
currentPlanet.ApplyGravity(ship);

The coolest thing for me is that you can give different actions or behaviors to the enum values. Something like this (it’s very contrived, but you get the idea):

public static readonly Planet EARTH = new Planet(2, 5.97f, 12756, 5514, 9.8f, delegate(Ship ship) {
    // Actions on land of ship
    ship.AddFood(1000);
    ship.RetireCrew();
    ship.RecruitNewCrew();
});

public static readonly Planet MARS = new Planet(3, 0.642f, 6792, 3933, 3.7f, delegate(Ship ship) {
    // Actions on land of ship
    ship.DeductFood(50);
    ship.Research();
    ship.Mine();
});
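For that to work, the Planet class would need to store and expose the delegate somehow. A sketch of how that could look (the delegate type and member names here are my own, not from the code above):

public delegate void LandAction(Ship ship);

public class Planet {
    private readonly LandAction onLand;

    // The extra constructor parameter holds the per-planet behavior
    private Planet(int id, float mass, int diameter, int density, float gravity, LandAction onLand) {
        // ... assign the other readonly fields as before ...
        this.onLand = onLand;
    }

    // Invoked when a ship lands on this planet
    public void Land(Ship ship) {
        this.onLand(ship);
    }
}

// Usage
Planet.MARS.Land(ship);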

By simply turning your enum into a class, you’ve upgraded it into something more organized and more feature packed. You could also use advanced features like reflection and inheritance, but most of the time you don’t need to.

That’s it for now. Hope this helps.