A Better A* Interface

I made my own A* framework around 2012, and I still use the same code today. It has been improved many times over the years, but one thing that hasn’t changed much is the interface. What I mean by ‘interface’ here is the collection of public classes, methods, and interfaces used to access the framework. In this post, I’m showing how I did it. First, I’ll show you the method that executes the A* search:

public void ResolvePath(AStarPath<T> path, NodeHandle<T> start, NodeHandle<T> goal, HeuristicCostCalculator<T> calculator, Reachability<T> reachability = null) {
    ...
}

I’m not going to show the code for how A* works; there are better resources about that elsewhere. What I’m focusing on here is the interface and why I designed it that way.

The first thing you’ll notice is that I’m using a generic type parameter. This is important because each game or problem domain has its own way of representing positions, be it tiles or waypoints. It also implies that the algorithm should work for any type of position.

Let’s discuss the parameters:

AStarPath<T> path – This is simply a custom class that contains the resulting sequence of T items. It also stores whether the path is reachable or not. The method stores the A* search result in this object. It was done this way to avoid instantiating this class every time an A* search is needed; the client code is required to maintain an instance of it. The caller then also uses this class to traverse the path.

NodeHandle<T> start – The start node (self-explanatory). NodeHandle has methods for adding neighbors or connected nodes. Underneath, it is implemented as a graph node.

NodeHandle<T> goal – The goal node (self-explanatory).

HeuristicCostCalculator<T> calculator – The cost calculator to use during the search. I’ll explain the details later. I made this an interface because different problem domains implement this calculation differently.

Reachability<T> reachability – Reachability is an interface that tells whether a tile is reachable. I’ll explain the details later. I made this an interface because reachability can differ on a case-to-case basis. It is optional because it may not be needed at all times; most use cases for A* really just need the shortest path.

HeuristicCostCalculator

HeuristicCostCalculator is implemented as an interface. It looks like this:

public interface HeuristicCostCalculator<T> {
    // Computes the heuristic cost from the specified starting position and the goal.
    float ComputeCost(T start, T goal);
}

Basically, it just computes the heuristic cost given two arbitrary positions. As you know, the heuristic cost is central to the algorithm; without it, it’s no longer A*.
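As a concrete sketch, here’s what an implementation might look like for a square tile map using Manhattan distance, which is admissible for 4-directional movement. The `TilePos` type and its fields are hypothetical, not part of the framework; the interface is repeated so the sketch compiles on its own:

```csharp
using System;

// Repeated from above so the sketch is self-contained
public interface HeuristicCostCalculator<T> {
    float ComputeCost(T start, T goal);
}

// Hypothetical position type with integer grid coordinates
public struct TilePos {
    public int X;
    public int Y;

    public TilePos(int x, int y) {
        this.X = x;
        this.Y = y;
    }
}

// Manhattan distance: an admissible heuristic for 4-directional tile movement
public class ManhattanCalculator : HeuristicCostCalculator<TilePos> {
    public float ComputeCost(TilePos start, TilePos goal) {
        return Math.Abs(start.X - goal.X) + Math.Abs(start.Y - goal.Y);
    }
}
```

For 8-directional maps with diagonal movement, an octile or Euclidean distance would be the equivalent drop-in.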

Reachability

Reachability is also implemented as an interface:

public interface Reachability<T> {
    // A reachability check on a single position
    // This is used to check if a goal is reachable at all
    // If not, the search ends abruptly
    // This is to avoid useless search when the position can't be reached at all
    bool IsReachable(T position);

    // Returns whether or not movement between the two specified nodes is possible.
    bool IsReachable(T from, T to);
}

There are two methods. The first, with only one position parameter, is used as a quick check of whether the position is reachable at all. For example, in a tile-based game, if the goal tile has a blocker and is thus unreachable, the A* search can be dropped. In another case, a tile may be free but all its neighbors blocked. If the position is not reachable, the A* search may not be needed at all.

The second, with two parameters, is used while the algorithm is executing. The parameters are not necessarily the start and goal nodes; they are nodes being considered during the A* search. There are times when reachability changes for a certain pair of nodes.

For example, in our game Academia, when a tile is blocked, an agent can’t move diagonally between that tile’s diagonal neighbors, such as moving from the bottom neighbor to the right neighbor. The agent has to go through the right neighbor of the bottom tile first, then move up to the right neighbor of the newly blocked tile. The bottom neighbor and the right neighbor are connected in the graph node setup (we allow diagonal movement), but by using the Reachability interface, we can override this rule and mark the pair as unreachable because of the blocked tile.

Reachability
Moving from chair to water fountain is not allowed because of the wall. Agent has to go from chair to trash can then to water fountain.

Different games may also have rules that override the graph node setup. A simple example is a land tile connected to a water tile: only amphibian units can pass from land to water, while land-only units cannot. The Reachability interface can be used to express these rules.
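A minimal sketch of that land/water rule might look like this. The `Tile` type with `HasBlocker` and `IsWater` flags is hypothetical, and the interface is repeated so the sketch compiles on its own:

```csharp
// Repeated from above so the sketch is self-contained
public interface Reachability<T> {
    bool IsReachable(T position);
    bool IsReachable(T from, T to);
}

// Hypothetical tile type for the example
public class Tile {
    public bool HasBlocker;
    public bool IsWater;
}

// Land-only units can never step onto water, even if the graph connects the tiles
public class LandReachability : Reachability<Tile> {
    public bool IsReachable(Tile position) {
        return !position.HasBlocker && !position.IsWater;
    }

    public bool IsReachable(Tile from, Tile to) {
        return IsReachable(to);
    }
}

// Amphibian units only care about blockers
public class AmphibianReachability : Reachability<Tile> {
    public bool IsReachable(Tile position) {
        return !position.HasBlocker;
    }

    public bool IsReachable(Tile from, Tile to) {
        return IsReachable(to);
    }
}
```

The same A* code then serves both unit classes; only the Reachability instance passed to ResolvePath changes.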

Usage

This is how our A* class is then used:

// Say Tile is our "position" class
AStar<Tile> astar = new AStar<Tile>();

// Prepare node handles
foreach(Tile tile in tileMap) {
    tile.NodeHandle = astar.Add(tile);
}

// Prepare neighbors
foreach(Tile tile in tileMap) {
    foreach(Tile neighbor in tile.Neighbors) {
        // TileLink here stores the weight for moving from the tile to the neighbor
        tile.NodeHandle.AddPath(neighbor.NodeHandle, new TileLink(tile, neighbor));
    }
}

// To use A* search
AStarPath<Tile> path = new AStarPath<Tile>();
Tile start = ResolveStartTile(); // Let's just say this is how we resolve the starting tile
Tile goal = ResolveGoalTile();

// Say MyHeuristicCalculator and MyReachability are implemented as singletons
astar.ResolvePath(path, start.NodeHandle, goal.NodeHandle, MyHeuristicCalculator.Instance, MyReachability.Instance);

// Use the path if reachable or do something else if not
if(path.Reachable) {
    foreach(Tile tile in path) {
        MoveToTile(tile); // It's not really like this in an actual game but you get the idea
    }
}

GOAP Extensions

We use GOAP for our AI in Academia. It’s working well for us so far. New behaviour can be easily added and AI glitches can be easily fixed. But I’ve had problems with it as well. One of them which needed fixing is duplicate actions.

We have different classes of characters in the game: students, teachers, workers, cooks, nurses, and janitors (more are coming). Each of them has a different set of actions, but most of the time they also share a common set, for example eating. If I fix the eating behavior for students, I have to apply the same fix to the other classes. This is a maintenance nightmare: a character class could break if I happen to forget a certain fix, and applying a fix to each class is tedious, too.

GOAP Actions Refactoring

I needed a way to refactor GOAP actions such that I could edit one set of actions and have the change applied to all the character classes. Thus, I introduced “extensions” to our GOAP framework.

GoapExtensions

An extension is basically a reference to another GOAP data set. While parsing the data, the system adds all the actions found in the extension. An extension can also have a set of preconditions, which are added to all of the actions in the extension. For example, in the image above, the NeedsBehaviour extension will only be executed if HasRequestedTreatment = false.
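To give an idea of what that parse step could look like, here’s a sketch of flattening extension actions into the owning data set. All class and member names here are hypothetical (our real framework is editor-data driven), and preconditions are simplified to strings:

```csharp
using System.Collections.Generic;

public class GoapAction {
    public string Name;
    public List<string> Preconditions = new List<string>();
}

// A reference to another data set plus the preconditions to impose on its actions
public class GoapExtension {
    public GoapDataSet DataSet;
    public List<string> Preconditions = new List<string>();
}

public class GoapDataSet {
    public List<GoapAction> Actions = new List<GoapAction>();
    public List<GoapExtension> Extensions = new List<GoapExtension>();

    // Collects this set's own actions plus all extension actions,
    // adding each extension's preconditions to its actions
    public List<GoapAction> ResolveActions() {
        List<GoapAction> resolved = new List<GoapAction>(this.Actions);

        foreach (GoapExtension extension in this.Extensions) {
            foreach (GoapAction action in extension.DataSet.ResolveActions()) {
                GoapAction copy = new GoapAction { Name = action.Name };
                copy.Preconditions.AddRange(action.Preconditions);
                copy.Preconditions.AddRange(extension.Preconditions);
                resolved.Add(copy);
            }
        }

        return resolved;
    }
}
```

Copying the actions (rather than mutating the shared data set) keeps the extension reusable across character classes with different preconditions.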

The refactored actions that pertain to “needs” are now placed in their own GOAP data set:

NeedsBehaviour

The specific GOAP data for each character class can simply reference this “needs” data to be able to execute those actions. I only need to fix the “needs” GOAP data when a “needs”-related behavior has to be fixed. No need to apply separate fixes to each character class.

This feature turned out to be very useful. Every time there’s a new set of behaviours that could be reused, I put them in a separate GOAP data set. A character class that requires it can just add this data as an extension. For example, for now only students can use computers, so I made a separate GOAP data set called “ComputerLabBehaviour” and added it as an extension to the students’ GOAP data. Later on, if we decide that teachers can also use computers, I can simply add the “ComputerLabBehaviour” data as an extension to the teachers’ GOAP data.

Behaviours
Our current set of behaviours

Avoiding Expensive A*

The most expensive A* searches are the ones where the goal cannot be reached. It’s a paradox when using A* based path-finding algorithms: you want to know the most optimal path, but at the same time, you also want to know if a certain position is reachable before you execute the search. Fortunately, there is a way.

I don’t know the exact term for this; it goes by many names: “flood fill”, “seed fill”, or “connection labeling”. The concept is to “label” connected tiles, where labeling simply means assigning a unique int value to a set of connected tiles. Before invoking A*, you can simply compare the labels of the start and destination tiles. If they are the same, the tiles are connected and A* can proceed. If not, they are unreachable from each other and you can skip the expensive search entirely. That’s pretty neat for something as simple as comparing integers.
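The pre-check itself is tiny. Assuming tiles carry the int label assigned by the flood fill, with zero reserved for blockers, it could look like this (a hypothetical helper, not framework code):

```csharp
public static class PathPreCheck {
    // Cheap check before running the full A* search.
    // Label 0 is reserved for blocker tiles.
    public static bool MightBeReachable(int startLabel, int goalLabel) {
        if (startLabel == 0 || goalLabel == 0) {
            // One of the tiles is itself a blocker
            return false;
        }

        // Same label means the tiles belong to the same connected region
        return startLabel == goalLabel;
    }
}
```

Only when this returns true would the game go on to call ResolvePath().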

ConnectionLabeling
Here’s an example from our current game Academia. 2 tiles can’t reach the 4 tiles.

This is the flood fill algorithm that I’ve used, cleaned up here into valid C#. Tiles are assumed to start with their Label set to an UNLABELED sentinel:

Queue<Tile> disconnectedQueue = new Queue<Tile>();
Queue<Tile> connectedQueue = new Queue<Tile>();

const int UNLABELED = -1; // Sentinel: every tile starts with this label
int currentLabel = 1;

// startingTile could be any tile: a random one,
// or a tile that has recently been updated. Any tile in the map works.
connectedQueue.Enqueue(startingTile);

while(connectedQueue.Count > 0 || disconnectedQueue.Count > 0) {
    ProcessConnectedQueue();
    ProcessDisconnectedQueue();
}

void ProcessConnectedQueue() {
    while(connectedQueue.Count > 0) {
        Tile tile = connectedQueue.Dequeue();
        if(tile.Label != UNLABELED) {
            // Already labeled
            continue;
        }

        bool hasBlocker = tile.HasBlocker; // Whether or not it blocks movement

        // Label the tile (zero is reserved for blockers)
        tile.Label = hasBlocker ? 0 : currentLabel;

        // Add each neighbor
        foreach(Tile neighbor in tile.Neighbors) {
            AddToQueue(neighbor, hasBlocker);
        }
    }
}

// Identify an unlabeled tile and add it to the connectedQueue for processing
void ProcessDisconnectedQueue() {
    while(disconnectedQueue.Count > 0) {
        Tile tile = disconnectedQueue.Dequeue();
        if(tile.Label != UNLABELED) {
            // We are not interested in labeled tiles
            continue;
        }

        if(!tile.HasBlocker) {
            // This tile starts a new region, so move to a new label
            ++currentLabel;
        }

        connectedQueue.Enqueue(tile);
        break;
    }
}

void AddToQueue(Tile tile, bool hasBlocker) {
    if(tile.Label != UNLABELED) {
        // No need to add to queue since it was already labeled
        return;
    }

    if(tile.HasBlocker == hasBlocker) {
        // Same blocker value: they are connected
        connectedQueue.Enqueue(tile);
    } else {
        // Different blocker value: they are disjoint
        disconnectedQueue.Enqueue(tile);
    }
}

The tile labels are updated every time a tile becomes a blocker, such as when a wall is built on it. Our implementation of the algorithm runs across multiple frames so that the game doesn’t slow down whenever something is built. We process X number of tiles per frame; I’m still tuning the right number.
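The multi-frame part can be sketched as a simple work budget: each frame processes at most a fixed number of queued items, then stops until the next frame. This is only an illustration of the idea (the class and its names are hypothetical), shown with ints standing in for tiles so it is self-contained:

```csharp
using System;
using System.Collections.Generic;

// Time-sliced processing: each Tick() does at most tilesPerFrame
// units of work, then returns until the next frame calls Tick() again.
public class SlicedWorkQueue {
    private readonly Queue<int> workQueue = new Queue<int>();
    private readonly int tilesPerFrame;

    public SlicedWorkQueue(int tilesPerFrame) {
        this.tilesPerFrame = tilesPerFrame;
    }

    public void Enqueue(int tile) {
        this.workQueue.Enqueue(tile);
    }

    // Invoked once per frame (e.g. from Update() in Unity)
    public void Tick(Action<int> processTile) {
        for (int i = 0; i < this.tilesPerFrame && this.workQueue.Count > 0; ++i) {
            processTile(this.workQueue.Dequeue());
        }
    }

    public bool IsDone {
        get {
            return this.workQueue.Count == 0;
        }
    }
}
```

The `tilesPerFrame` value is the knob mentioned above; raising it finishes the relabeling sooner at the cost of a bigger per-frame spike.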

What do you think? Let me know how you can make the flood algorithm run faster.

Our Tile Class

The foundation of every tile-based game is the structure of its tile model. I think our game Academia has come a long way in the evolution of its tile structure; it’s robust enough to be shown to the public. If you’re making a tile-based game, hopefully this helps you.

I’ll show you the member variables first:

public class Tile {

    private readonly TileGrid grid; // The parent grid
    private readonly Cell cell;

    private NodeHandle handle;

    // Note here that this is a map of layer and TileItem pair
    private Dictionary<string, TileItem> itemMap = new Dictionary<string, TileItem>();

    // Note here that task items are added to a separate map which uses its BuildItem as the key
    private Dictionary<string, TileItem> taskItemMap = new Dictionary<string, TileItem>();

    private readonly IntVector2 position;

    private int itemFlags;

    // May be null if tile is not in any zone
    private Zone zone;

    ...
}

We included the parent TileGrid so we can easily find other tiles whenever we have a reference to one tile. This helps a lot when neighbor tiles are needed. cell contains information such as the world tile position, cell width and height, bottom-left position, top-right position, etc. NodeHandle handle acts as the node in our A* framework. position is the tile position in integer coordinates.

The dictionaries itemMap and taskItemMap are used to store TileItem instances per layer. A TileItem contains information about a tile; for example, a TileItem instance for a table object means that a table occupies this tile. It also contains information such as whether the tile is blocked (for example, by a wall). A Tile can have multiple TileItem instances for cases like a tile having a floor, dirt, and an object on top of it; each of these is in a different layer. The use of a dictionary also implies that there can be only one TileItem per layer. This helps in checking whether a tile already contains an item in a certain layer, which is usually used to prevent players from building objects on tiles that already have existing objects.

We differentiate between normal items and task items. Normal built items are added to itemMap while task items go into taskItemMap. This differentiation is needed so that items in multiple layers can be built on the tile, for example, build a floor then build a table on top of it. Tasks used to be stored in only one layer, but we found this inadequate, hence the current implementation.

itemFlags is a bitmask containing a bunch of information: whether the tile is blocked, whether it blocks students, whether it contains a “character slot”, and the orientation of the character if one uses the slot. We specifically used a bitmask for faster “reachable” checks during the A* search.

zone is the Zone instance where the tile is located. We added this for optimization purposes: in the old implementation we had to query the zone at a certain tile position, which was slow and not very ideal.

Here’s the full class (I removed the function comments because “<text>” like these mess up WordPress’ formatting):

public class Tile {

        private readonly TileGrid grid; // The parent grid
        private readonly Cell cell;

        private NodeHandle handle;

        // Note here that this is a map of layer and TileItem pair
        private Dictionary<string, TileItem> itemMap = new Dictionary<string, TileItem>();

        // Note here that task items are added to a separate map which uses its BuildItem as the key
        private Dictionary<string, TileItem> taskItemMap = new Dictionary<string, TileItem>();

        private readonly IntVector2 position;

        private int itemFlags;

        // May be null if tile is not in any zone
        private Zone zone;

        public Tile(TileGrid grid, Cell cell) {
            this.grid = grid;
            this.cell = cell;
            this.position = new IntVector2(cell.x, cell.y);
        }

        public void Add(TileItem tileItem) {
            if (tileItem.Data.WorkerTask) {
                Assertion.Assert(!this.taskItemMap.ContainsKey(tileItem.BuildItemData.TileLayer));
                this.taskItemMap[tileItem.BuildItemData.TileLayer] = tileItem;
            } else {
                Assertion.Assert(!this.itemMap.ContainsKey(tileItem.Layer)); // Should not contain the item yet
                this.itemMap[tileItem.Layer] = tileItem;

                // Add the flag as well
                this.itemFlags |= tileItem.Flags;
            }
        }

        public void Remove(string tileLayer, string tileItemId) {
            // Can't remove task items through this method
            // Use RemoveTask() instead
            Assertion.Assert(!TileLayers.TASKS.EqualsString(tileLayer));

            TileItem item = null;
            Assertion.Assert(this.itemMap.TryGetValue(tileLayer, out item)); // Item should exist

            Assertion.Assert(item.Data.ItemId.Equals(tileItemId));
            this.itemMap.Remove(tileLayer);
            Assertion.Assert(!Contains(tileLayer));

            // Recreate the flags from the existing items
            this.itemFlags = 0;
            foreach (KeyValuePair<string, TileItem> entry in this.itemMap) {
                this.itemFlags |= entry.Value.Flags;
            }
        }

        public void RemoveTask(string buildItemLayer, string tileItemId) {
            // Must be the layer of the built item
            Assertion.Assert(!TileLayers.TASKS.EqualsString(buildItemLayer));

            TileItem taskItem = this.taskItemMap.Find(buildItemLayer);
            Assertion.AssertNotNull(taskItem);

            Assertion.Assert(taskItem.Data.ItemId.Equals(tileItemId));
            this.taskItemMap.Remove(buildItemLayer);
        }

        public bool Contains(string layerName) {
            // Note that a task layer is no longer just one item
            // It's a layer of items by itself
            Assertion.Assert(!TileLayers.TASKS.Equals(layerName));
            return this.itemMap.ContainsKey(layerName);
        }

        public bool ContainsTask(string layerName) {
            // Note that a task layer is no longer just one item
            // It's a layer of items by itself
            Assertion.Assert(!TileLayers.TASKS.Equals(layerName));
            return this.taskItemMap.ContainsKey(layerName);
        }

        public TileItem GetItem(string layerName) {
            // Note that a task layer is no longer just one item
            // It's a layer of items by itself
            Assertion.Assert(!TileLayers.TASKS.Equals(layerName));

            TileItem item = null;
            this.itemMap.TryGetValue(layerName, out item);

            // This may return null
            // Client code should check for this
            return item;
        }

        public TileItem GetTaskItem(string layerName) {
            // Note that a task layer is no longer just one item
            // It's a layer of items by itself
            Assertion.Assert(!TileLayers.TASKS.Equals(layerName));
            return this.taskItemMap.Find(layerName);
        }

        public bool HasTaskItems {
            get {
                return this.taskItemMap.Count > 0;
            }
        }

        public TileItem GetTopTaskItem() {
            if(!HasTaskItems) {
                return null;
            }

            for(int i = 0; i < TileLayers.ORDERED_LAYERS.Length; ++i) {
                TileItem taskItem = GetTaskItem(TileLayers.ORDERED_LAYERS[i]);
                if(taskItem != null) {
                    return taskItem;
                }
            }

            return null;
        }

        public bool HasCharacterSlot {
            get {
                return (this.itemFlags & TileItemLayout.CHARACTER_SLOT) > 0;
            }
        }

        public bool HasPhysicalBlocker {
            get {
                return (this.itemFlags & TileItemLayout.PHYSICAL_BLOCKER) > 0;
            }
        }

        public bool HasStudentBlocker {
            get {
                return (this.itemFlags & TileItemLayout.STUDENT_BLOCKER) > 0;
            }
        }

        internal NodeHandle Handle {
            get {
                return handle;
            }

            set {
                this.handle = value;
            }
        }

        public Cell Cell {
            get {
                return cell;
            }
        }

        public TileGrid Grid {
            get {
                return grid;
            }
        }

        public IntVector2 Position {
            get {
                return position;
            }
        }

        public Zone Zone {
            get {
                return zone;
            }

            set {
                zone = value;
            }
        }

        public bool Contains(Vector3 worldPosition) {
            return this.cell.Contains(worldPosition);
        }

    }

More writing, less complaining

I’ve been slacking on writing for this blog for months now. To be honest, I’m not very good at writing and it takes me a while to compose my thoughts; I spend around 3-4 hours to finish one post. In addition, English is not my strongest language.

To remedy this, I’ve decided to allow myself to post short entries. It’s probably not as bad as I think it would be. The content won’t be as thorough as before, but at least I won’t slack off on posting here at least once a week.

A lot has happened since my last post. I’ve been working hard on our next game, Academia: School Simulator. We’ve made so much progress, it’s unbelievable! This is how it looks now.

If you like this type of game, do follow us. The links are on the game website. Our company website is here.

GOAP For Our New Game

I’m excited that we’re making a builder type of game in the likes of Prison Architect, Banished, and Rimworld. I love playing such games. Ours is a school management game where you can design classrooms and offices, hire teachers, design the curriculum, and guide students to their educational success.

currentgamescreenshot

For every new game, it’s always my aim to implement a new algorithm or system and learn something new. I’ve always been fascinated with an AI planning system called Goal Oriented Action Planning, or GOAP. If you’re not familiar with it, here’s a simple tutorial. I haven’t developed such a system myself, as the games that I’ve made so far had no use for it. I think it’s the perfect AI system for builder games. I hope I’m right.

Why GOAP

The primary reason is that I’m lazy. I don’t want to wire and connect things like you do with Finite State Machines and Behaviour Trees; I just want to provide a new action and have my agents use it when needed. Another main reason is that I reckoned there are going to be a lot of action order combinations in the game. I don’t want to enumerate all of those combinations; I want the game agents to just discover them and surprise the player.

Another important reason is that the AI system itself is an aid for development. There are going to be lots of objects in the game that agents may interact with. While I add them one by one, I’ll just add the actions that can be done with each object and the agents will do the rest. I don’t have to reconfigure them much every time there’s a new action available; just add the action and it’s done.

Some Tweaks

While making the system, I had some ideas that would make the generic GOAP system better. They have certainly paid off.

Multiple Sequenced Actions

For each GOAP action, instead of performing only one action, our custom GOAP action contains a set of modular atomic actions, executed in sequence. This is what it looks like in the editor:

multipleactions

By doing it this way, I can make reusable atomic actions that can be used by any agent. A GOAP action then is just a named object that contains preconditions, effects, and a set of atomic actions.

GoapResult

I incorporated the concept of action results, like in Behaviour Trees. An atomic action execution returns SUCCESS, FAILED, or RUNNING. This is what the atomic action base class looks like:

public abstract class GoapAtomAction {

    public virtual void ResetForPlanning(GoapAgent agent) {
    }

    public virtual bool CanExecute(GoapAgent agent) {
        return true;
    }

    public virtual GoapResult Start(GoapAgent agent) {
        return GoapResult.SUCCESS;
    }

    public virtual GoapResult Update(GoapAgent agent) {
        return GoapResult.SUCCESS;
    }

    public virtual void OnFail(GoapAgent agent) {
    }

}

When an atom action returns FAILED, the whole current plan fails and the agent plans again. A RUNNING result means the current action is still running, which also means the current plan is still ongoing. A SUCCESS result means the action has finished executing and the agent can proceed to the next atomic action. When all of the atomic actions have returned SUCCESS, the whole GOAP action is a success and the next GOAP action in the plan is executed.

This concept makes it easy for me to add failure conditions while an action is being executed. Whenever one action fails, the agent automatically replans and proceeds to execute its new set of actions.
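The sequencing described above can be sketched like this. It’s a simplification: to stay self-contained, atoms are plain `Func<GoapResult>` delegates instead of our GoapAtomAction subclasses, and all names besides GoapResult are hypothetical:

```csharp
using System;
using System.Collections.Generic;

public enum GoapResult {
    SUCCESS,
    FAILED,
    RUNNING
}

// A simplified GOAP action that steps through its atomic actions in order
public class SequencedAction {
    private readonly List<Func<GoapResult>> atoms;
    private int currentIndex;

    public SequencedAction(List<Func<GoapResult>> atoms) {
        this.atoms = atoms;
        this.currentIndex = 0;
    }

    public GoapResult Update() {
        while (this.currentIndex < this.atoms.Count) {
            GoapResult result = this.atoms[this.currentIndex]();

            if (result == GoapResult.RUNNING) {
                // Current atom is still executing; the plan is still ongoing
                return GoapResult.RUNNING;
            }

            if (result == GoapResult.FAILED) {
                // The whole plan fails; the agent should replan
                return GoapResult.FAILED;
            }

            // SUCCESS: move on to the next atomic action
            ++this.currentIndex;
        }

        // All atoms succeeded, so the whole GOAP action is a success
        return GoapResult.SUCCESS;
    }
}
```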

Condition Resolver

Condition Resolvers are objects that query the current world conditions needed during planning. I implemented this as another base class in our system; the concrete classes can then be selected in the editor. This is what the base class looks like:

public abstract class ConditionResolver {

    private bool resolved;
    private bool conditionMet;

    public ConditionResolver() {
        Reset();
    }

    public void Reset() {
        this.resolved = false;
        this.conditionMet = false;
    }

    public bool IsMet(GoapAgent agent) {
        if(!this.resolved) {
            // Not yet resolved
            this.conditionMet = Resolve(agent);
            this.resolved = true;
        }

        return this.conditionMet;
    }

    protected abstract bool Resolve(GoapAgent agent);

}

Note that it has logic so that Resolve() is only invoked once; concrete subclasses only need to override this method. Such a method may execute complex calculations, so we need to make sure it’s called at most once during planning.

This is what it looks like in editor:

conditionresolvers

All conditions default to false unless they have a resolver, which is used to query the actual state of the condition.
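A concrete resolver then only overrides Resolve(). Here’s a hypothetical one for the HasRequestedTreatment condition mentioned earlier; the base class is condensed and repeated, and the GoapAgent field is made up so the sketch stands on its own:

```csharp
// Condensed from the base class above so the sketch is self-contained
public abstract class ConditionResolver {
    private bool resolved;
    private bool conditionMet;

    public bool IsMet(GoapAgent agent) {
        if (!this.resolved) {
            // Resolve() is invoked at most once per planning pass
            this.conditionMet = Resolve(agent);
            this.resolved = true;
        }

        return this.conditionMet;
    }

    protected abstract bool Resolve(GoapAgent agent);
}

// Hypothetical agent state for the example
public class GoapAgent {
    public bool RequestedTreatment;
}

// Resolves the HasRequestedTreatment condition from the agent's state
public class HasRequestedTreatmentResolver : ConditionResolver {
    protected override bool Resolve(GoapAgent agent) {
        return agent.RequestedTreatment;
    }
}
```

Note how the second IsMet() call returns the cached value even if the world changed mid-planning; that’s exactly the once-per-planning behavior described above.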

Usage

Once the conditions, resolvers, and actions have been set up, all that’s left to do is to add goal conditions and invoke Replan().

void Start() {
    this.agent = GetComponent<GoapAgent>();
    Assertion.AssertNotNull(this.agent);

    // Start the AI
    this.agent.ClearGoals();
    this.agent.AddGoal("StudentBehaviour", true);
    this.agent.Replan();
}

If there are new goals to satisfy, the same calls can be invoked to change the goal(s) and have a new plan executed.

So Far So Good

Our custom GOAP system is working well for us… for now. I now have working worker agents and student agents, and more will be added. Here’s hoping we don’t need to revamp the system, as we’re already so deep into it.

I Created a Politically Corrupt AI

(This was originally posted in Squeaky Wheel’s website.)

I’ve been using Genetic Algorithm as an aide for game design and development. It fills me with excitement that I can simulate natural selection to help me look for the best solutions to problems. Now I’ll tell you the story of how I used it to improve our AI.

My GA knowledge is still somewhat limited. I learned how to write code for it using this website (go read it, it’s awesome! So unlike smarty academic papers). To give you an idea in casual speak, GA basically simulates the theory of evolution to look for the most “fit” solution. In this simulation, the species or “individuals” are the solutions to the problem. At the start, a certain number of individuals are spawned with random configurations; as such, most of them are dumb solutions at the beginning. Each of them is then assessed and given a fitness score: the ones with a higher score are closer to the solution. Based on this initial population, we spawn the next generation. To do that, we either mutate individuals or let them breed (yes, solutions can mate), with the rule that those with a higher fitness score have a higher chance of being selected for mutation or breeding. With this new generation, we repeat the process of assessment and spawning until we find the one individual that solves our problem.

When coding a GA, you need some important pieces: the individual representation, the fitness assessment function, the mutation function, and the crossover function. If you have these four, you can run a GA. The individual representation is a bit tricky: how do you represent a solution that can also be mutated and bred? One of the most common representations is a list of bits, which can be implemented as a list of booleans, or as integers using bit manipulation. Mutation is then just flipping a random number of bits; breeding, or crossover, is simply exchanging a certain number of bits between two individuals.
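As a minimal sketch of that bit representation (using a bool array for the genes; names are mine, not from any GA library), mutation and one-point crossover might look like this:

```csharp
using System;

public static class BitGa {
    // Flip each bit independently with probability mutationRate
    public static void Mutate(bool[] genes, double mutationRate, Random random) {
        for (int i = 0; i < genes.Length; ++i) {
            if (random.NextDouble() < mutationRate) {
                genes[i] = !genes[i];
            }
        }
    }

    // One-point crossover: the two individuals swap their tails
    // after a random cut point
    public static void Crossover(bool[] a, bool[] b, Random random) {
        int point = random.Next(1, a.Length);
        for (int i = point; i < a.Length; ++i) {
            bool temp = a[i];
            a[i] = b[i];
            b[i] = temp;
        }
    }
}
```

Since crossover only swaps bits between the parents, the total number of set bits across the pair is preserved, which is a handy property for sanity checks.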

Representation by bits was the only representation I knew of; it’s what AI Junkie taught me and I stuck with it. That was until I read a book called “Essentials of Metaheuristics”, which I highly recommend. The contents are written in an informal way, not in an academic bullshit way. It’s a primer on the different algorithms in the field of metaheuristics, though most of it is about GA. From there, I learned that you can represent an individual with anything: lists, trees, graphs, your own data structure. Mutation and crossover can be any made-up alteration of your representation: adding a child, removing a child, changing a value, swapping nodes and edges. Anything! I realized how dumb I was for never arriving at that thought myself.

That gave me an aha moment: what if I automated the creation of our AI using GA? Our AI configuration is very simple. At the same time, the AI is also the most neglected part of our game; we haven’t touched it for a long time. We have a working AI that I configured by hand, but our mechanics have changed so much that we don’t know if it’s still competitive, and configuring a new AI by hand would take time.

My team gave me a week to work on this, May 2-8, 2016. I wasn’t sure it would work. What if looking for a better AI takes too long, like days of running the simulation? I certainly thought it might, because the assessment function is to pit two AI players against each other; the one who wins gets the bigger fitness score. A single playthrough takes time, even sped up. The point is, making the GA could have been a waste of time.

The first thing I did was make a fast mode for our game: no animations, movement becomes teleportation, no standby/wait times, etc. It wasn’t easy. I didn’t have time to write another version of the game solely for the GA; instead, I used what we have now and provided a mode where it can be played extremely fast. In the end, I had a mode where an AI vs AI game takes around 1 minute to complete 15 turns. Still not fast enough, but quite good already.

Then I made something I call a “multi frame” GA. Basically, it’s a GA that is spread across multiple frames. Remember that the assessment function is for the AIs to play the game, so the GA must wait for each game to end before it can move on to the rest of the algorithm. In fact, if there are 10 individuals in a generation, 10 games must be played before the next generation can be spawned.
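Here’s a rough sketch of the idea in Python, using generators in place of the game loop callbacks (the real code is C#, and `play_match` is a hypothetical stand-in for a full AI-vs-AI game). Each `next()` call advances everything by exactly one frame, and the GA only moves past an individual once that individual’s match has finished:

```python
import random

def play_match(ai_a, ai_b, turns=15):
    """Stand-in for one AI-vs-AI game. Yields once per simulated
    frame; its return value is the first AI's fitness."""
    score = 0.0
    for _ in range(turns):
        score += random.random()  # placeholder for one turn of game logic
        yield                     # hand control back for this frame
    return score

def multi_frame_ga(population):
    """One GA generation spread across frames: every individual's
    fitness needs a whole match, so we 'yield from' each match and
    resume where we left off on the next frame."""
    fitnesses = []
    for individual in population:
        # Pit the individual against a reference opponent.
        fitness = yield from play_match(individual, population[0])
        fitnesses.append(fitness)
    # Selection, crossover, and mutation would spawn the next generation here.
    return fitnesses

# Driving it one frame at a time, the way a game loop would:
ga = multi_frame_ga(["AI-1", "AI-2", "AI-3"])
frames = 0
try:
    while True:
        next(ga)   # advance exactly one frame
        frames += 1
except StopIteration as done:
    fitnesses = done.value
```

With 3 individuals and 15 turns per match, the driver ticks 45 frames before the generation’s fitness list comes back.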

Our AI configuration is all code. We use a utility-based AI, by the way. It is represented as a set of “task scorers”. Each scorer has an arbitrary set of “considerations”. These considerations are classes that help compute the score for a task. The AI generates all the possible tasks, scores them using the scorers, and picks the one with the highest value.
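In Python pseudocode, the structure looks something like this. The class and field names are illustrative, and I’m assuming the consideration scores are multiplied together, which is one common way utility AIs combine them:

```python
class Consideration:
    """Base class: each consideration contributes a factor to a task's score."""
    def score(self, task, game_state):
        raise NotImplementedError

class ReachableConsideration(Consideration):
    """Illustrative consideration: unreachable targets get a multiplier."""
    def __init__(self, multiplier_if_unreachable=0.0):
        self.multiplier_if_unreachable = multiplier_if_unreachable

    def score(self, task, game_state):
        reachable = game_state.get("reachable", True)
        return 1.0 if reachable else self.multiplier_if_unreachable

class Scorer:
    """One scorer per task type, holding an arbitrary set of considerations."""
    def __init__(self, task_id, considerations):
        self.task_id = task_id
        self.considerations = considerations

    def score(self, task, game_state):
        # Assumption: factors are multiplied, so any single
        # zero-scoring consideration can veto the task entirely.
        total = 1.0
        for c in self.considerations:
            total *= c.score(task, game_state)
        return total

def pick_task(tasks, scorers, game_state):
    """Generate all possible tasks, score each, pick the highest."""
    return max(tasks, key=lambda t: scorers[t["id"]].score(t, game_state))
```

The multiply-and-veto behavior is a design choice: it lets a single hard constraint (like unreachability) kill a task no matter how attractive the other considerations find it.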

My plan was to use GA to generate different combinations of these scorers and considerations until we get one that beats the current best configuration. Before anything else, I needed the configuration to be saved to a file: every time the GA finds a better AI, it should write the configuration out. So I turned the AI configuration into XML, using the class names and variables of the consideration classes, and I load them back using reflection. It looks like this now:

<ScorerSet id="IndividualZero" fitness="1" timestamp="">

 <Scorer taskId="Bribe">
   <Consideration name="BlockadedDistrictConsideration" />
   <Consideration name="TerrorizedConsideration" />

   <Consideration name="MinimumBudgetConsideration">
     <Variable type="NamedInt" name="minimumBudget" value="1000" />
   </Consideration>

   <Consideration name="ReachableConsideration">
     <Variable type="NamedFloat" name="multiplierIfUnreachable" value="0" />
   </Consideration>

   <Consideration name="MustHaveMatchingPlatformConsideration" />

   <Consideration name="ReachReputationConsideration">
     <Variable type="NamedFloat" name="populationPercentage" value="0.85" />
   </Consideration>

   <Consideration name="BonusTuningConsideration">
     <Variable type="NamedFloat" name="bonusToSet" value="1.0" />
   </Consideration>

   <Consideration name="CommandPointsConsideration" />

   <Consideration name="NeighborCountConsideration">
     <Variable type="NamedInt" name="desirableCount" value="4" />
   </Consideration>

   <Consideration name="OpponentStaffCountConsideration" />

   <Consideration name="BribeBetterThanCampaignConsideration">
     <Variable type="NamedInt" name="minReputationGainDifference" value="1000" />
     <Variable type="NamedInt" name="rankIfMet" value="17" />
     <Variable type="NamedInt" name="bonusIfMet" value="10" />
     <Variable type="NamedFloat" name="multiplierIfMet" value="5" />
   </Consideration>

   <Consideration name="ScandalCountReachedConsideration">
     <Variable type="NamedInt" name="scandalCount" value="4" />
     <Variable type="NamedFloat" name="multiplierIfMet" value="0" />
   </Consideration>
 </Scorer>

 <Scorer taskId="RaiseFunds">
   <Consideration name="BlockadedDistrictConsideration" />
   <Consideration name="TerrorizedConsideration" />

   <Consideration name="ReachableConsideration">
     <Variable type="NamedFloat" name="multiplierIfUnreachable" value="0" />
   </Consideration>

   <Consideration name="HasSignificantFundsConsideration">
     <Variable type="NamedInt" name="preferredAmountToRaise" value="1000" />
   </Consideration>

   <Consideration name="BonusTuningConsideration">
     <Variable type="NamedFloat" name="bonusToSet" value="1.0" />
   </Consideration>

   <Consideration name="LowFundsConsideration">
     <Variable type="NamedFloat" name="fundsThreshold" value="0.3" />
   </Consideration>

   <Consideration name="CommandPointsConsideration" />
   <Consideration name="HigherReputationConsideration" />

   <Consideration name="NeighborCountConsideration">
     <Variable type="NamedInt" name="desirableCount" value="4" />
   </Consideration>

   <Consideration name="NearBailiwickConsideration" />
 </Scorer>

 <!-- ... Rest is omitted -->
</ScorerSet>
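The actual loading code is C# reflection, but the idea can be sketched in Python: look up each consideration class by its `name` attribute, instantiate it, and set its variables by name, with `NamedInt`/`NamedFloat` telling us how to parse each value. The `MinimumBudgetConsideration` stub here stands in for the real class:

```python
import xml.etree.ElementTree as ET

class MinimumBudgetConsideration:
    """Stub standing in for the real consideration class."""
    def __init__(self):
        self.minimumBudget = 0  # overwritten via "reflection" from the XML

# Map class names found in the XML to actual classes.
REGISTRY = {cls.__name__: cls for cls in (MinimumBudgetConsideration,)}

def load_considerations(xml_text):
    """Rebuild consideration instances from the saved XML by looking
    up each class name and setting its variables by name."""
    root = ET.fromstring(xml_text)
    considerations = []
    for elem in root.iter("Consideration"):
        instance = REGISTRY[elem.get("name")]()
        for var in elem.findall("Variable"):
            raw = var.get("value")
            # The type attribute says how to parse the stored value.
            parsed = int(raw) if var.get("type") == "NamedInt" else float(raw)
            setattr(instance, var.get("name"), parsed)
        considerations.append(instance)
    return considerations
```

C# would use `Type.GetType` and `Activator.CreateInstance` plus field lookup where this sketch uses a registry dictionary and `setattr`, but the shape is the same.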

The mutation function then is just a matter of:

  • Add random consideration
  • Remove random consideration
  • Change some random variables

Crossover between two individuals is simply swapping a random number of considerations.
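As a sketch (in Python, with a scorer reduced to plain data and a made-up consideration pool), those operators might look like this:

```python
import random

# A scorer as plain data: a list of (consideration name, variables dict).
# This pool of candidate considerations is invented for the example.
CONSIDERATION_POOL = [
    ("ReachableConsideration", {"multiplierIfUnreachable": 0.0}),
    ("NeighborCountConsideration", {"desirableCount": 4}),
    ("LowFundsConsideration", {"fundsThreshold": 0.3}),
]

def mutate(scorer):
    """Add a random consideration, remove one, or change some variables."""
    scorer = list(scorer)
    op = random.choice(["add", "remove", "change"])
    if op == "add":
        name, variables = random.choice(CONSIDERATION_POOL)
        scorer.append((name, dict(variables)))
    elif op == "remove" and scorer:
        scorer.pop(random.randrange(len(scorer)))
    elif scorer:
        _, variables = scorer[random.randrange(len(scorer))]
        for key in variables:
            variables[key] *= random.uniform(0.5, 1.5)
    return scorer

def crossover(a, b):
    """Swap a random number of considerations between two parents."""
    a, b = list(a), list(b)
    if not a or not b:
        return a, b
    for _ in range(random.randint(1, min(len(a), len(b)))):
        i, j = random.randrange(len(a)), random.randrange(len(b))
        a[i], b[j] = b[j], a[i]
    return a, b
```

Nothing here knows how the considerations actually score tasks; the operators only shuffle structure, and the fitness games decide what survives.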

For the initial population, I used our existing AI as a starting point. I called it “Individual Zero”. The first population consists of individuals that are mutated versions of him. With all the pieces together, I had a GA that looks for a better AI.

First Runs

When I was about to finish, I couldn’t wait to run it. I knew it would take time, so the plan was to run the simulation overnight while I slept. It was a Saturday night. I was sleepy and tired. I fixed some errors that could potentially halt the simulation and set up the fast mode: every game lasts only 15 turns, and the population size is set to only 10. Then finally I let the baby run. I watched the first few AI matches. After a while, with nothing going wrong, I locked the laptop and went to bed.

When I woke up, I checked it immediately. The simulation had hung. The Task Manager showed 90% RAM usage, so I closed it. But lo and behold, it had generated 10 AI configurations, each one better than the one before it. So I was like “that went well”. I pushed my work and restarted my laptop. Then my laptop wouldn’t start; it only said “Diagnosing Your PC”. But I wasn’t worried. I knew I hadn’t done anything stupid; the simulation had just probably messed up the memory. An hour later, my laptop was alive again, and I immediately fixed the memory leak. Lesson learned when writing a resource-intensive GA like this one.

After coffee, I decided to play a game against the new AI to see if it was really better. Maybe it was only better against other AIs, not against humans. So I used my usual strategy: bribe supporters in the first few turns, acquire districts as soon as I can. I was winning. Every 5 turns, the game shows a graph of how many supporters each candidate has, and I beat the AI’s numbers every time. I even had more acquired districts than him. Was I really playing against a better AI? On the last turn, I made my moves. I knew I was going to win; I controlled more districts than him. That dread that I had probably just wasted my time returned. Election came… and the AI won by an unbelievable landslide. My jaw dropped. It turns out he didn’t care about the voters and had been befriending more patrons than me. Well, I guess I created a monster.