GOAP Extensions

We use GOAP for our AI in Academia. It’s working well for us so far. New behaviour can be easily added and AI glitches can be easily fixed. But I’ve had problems with it as well. One of them which needed fixing is duplicate actions.



We have different classes of characters in the game: students, teachers, workers, cooks, nurses, and janitors (more are coming). Each of them has a different set of actions, but most of the time they also share a common set. For example, eating. If I fix the eating behavior in students, I have to apply the same fix to all the other classes. This is a maintenance nightmare. A character class could become broken if I happen to forget a certain fix, and applying a fix to each of the classes is tedious, too.

GOAP Actions Refactoring

I needed a way to refactor GOAP actions such that I could edit one set of actions and have the change apply to all the character classes. Thus, I introduced “extensions” to our GOAP framework.


An extension is basically a reference to another GOAP data set. While parsing this data, the system adds all the actions found in the extension. An extension can also have a set of preconditions, which are added to all the actions in the extension. For example, from the image above, the NeedsBehaviour extension will only be executed if HasRequestedTreatment = false.
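A minimal sketch of how such extension resolution might work (the class and member names here are my own simplifications, not the actual Academia code): when a data set references an extension, the extension’s actions are pulled in and the extension-level preconditions are merged into each of them.

```csharp
using System.Collections.Generic;

// Hypothetical, simplified data model for illustration.
public class GoapActionData {
    public string Name;
    public Dictionary<string, bool> Preconditions = new Dictionary<string, bool>();
}

public class GoapExtension {
    public GoapDataSet DataSet;
    public Dictionary<string, bool> Preconditions = new Dictionary<string, bool>();
}

public class GoapDataSet {
    public List<GoapActionData> Actions = new List<GoapActionData>();
    public List<GoapExtension> Extensions = new List<GoapExtension>();

    // Resolve extensions: copy in their actions and stamp the
    // extension-level preconditions onto each copied action.
    public List<GoapActionData> ResolveActions() {
        var resolved = new List<GoapActionData>(this.Actions);
        foreach (GoapExtension ext in this.Extensions) {
            foreach (GoapActionData action in ext.DataSet.ResolveActions()) {
                var copy = new GoapActionData { Name = action.Name };
                foreach (var pair in action.Preconditions) {
                    copy.Preconditions[pair.Key] = pair.Value;
                }
                // Extension preconditions apply to every action in the extension.
                foreach (var pair in ext.Preconditions) {
                    copy.Preconditions[pair.Key] = pair.Value;
                }
                resolved.Add(copy);
            }
        }
        return resolved;
    }
}
```

Resolving recursively also lets extensions reference other extensions, which falls out of the sketch for free.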

The refactored actions that pertain to “needs” are now placed in their own separate GOAP data set:


The specific GOAP data for each character class can simply reference this “needs” data to be able to execute those actions. If I have to fix a “needs” related behavior, I only need to fix the “needs” GOAP data. No need to apply separate fixes to each character class.

This feature turned out to be very useful. Every time there’s a new set of behaviour that could be reused, I put it in a separate GOAP data. The character class that requires it can just add this new data as an extension. For example, for now, only students can use computers. So I made a separate GOAP data called “ComputerLabBehaviour” and added it as an extension to the student’s GOAP data. Later on, if we decide that teachers can also use computers, I can simply add the “ComputerLabBehaviour” data as an extension to the teacher’s GOAP data.

Our current set of behaviours

GOAP For Our New Game

I’m excited that we’re making a builder type of game in the likes of Prison Architect, Banished, and Rimworld. I love playing such games. Ours is a school management game where you can design classrooms and offices, hire teachers, design the curriculum, and guide students to their educational success.



For every new game, it’s always my aim to implement a new algorithm or system and learn something new. I’ve always been fascinated with an AI planning system called Goal Oriented Action Planning, or GOAP. If you’re not familiar with it, here’s a simple tutorial. I haven’t developed such a system myself, as the games I’ve made so far have had no use for it. I think it’s the perfect AI system for builder games. I hope I’m right.


The primary reason is that I’m lazy. I don’t want to wire and connect stuff like you do with Finite State Machines and Behaviour Trees. I just want to provide a new action and have my agents use it when needed. Another main reason is that I reckoned there are going to be a lot of action order combinations in the game. I don’t want to enumerate all of those combinations; I want the game agents to just discover them and surprise the player.

Another important reason is that the AI system itself is an aid for development. There are going to be lots of objects in the game that the agents may interact with. While I’m adding them one by one, I’ll just add the actions that can be done with each object and the agents will do the rest. I don’t have to reconfigure them much every time a new action becomes available. Just add the action and it’s done.

Some Tweaks

While making the system, I had some ideas that I thought would make the generic GOAP system better. They have certainly paid off.

Multiple Sequenced Actions

Instead of doing only one thing, each of our custom GOAP actions contains a set of modular atomic actions, executed in sequence. This is what it looks like in the editor:


By doing it this way, I can make reusable atomic actions that can be used by any agent. A GOAP action, then, is just a named object that contains preconditions, effects, and a set of atomic actions.
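The shape of such a container might look like this (a hypothetical sketch; the class names are mine, and the real atomic action base class is shown further below):

```csharp
using System.Collections.Generic;

// Stand-in for the atomic action type; the real GoapAtomAction
// base class is shown later in this post.
public class AtomAction {
}

// Hypothetical shape of a GOAP action as described: a named object
// bundling preconditions, effects, and a sequence of atomic actions.
public class GoapActionEntry {
    public string Name;
    public Dictionary<string, bool> Preconditions = new Dictionary<string, bool>();
    public Dictionary<string, bool> Effects = new Dictionary<string, bool>();
    public List<AtomAction> AtomActions = new List<AtomAction>();
}
```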


I incorporated the concept of action results, like in Behaviour Trees. An atomic action execution returns SUCCESS, FAILED, or RUNNING. This is what the atomic action base class looks like:

public abstract class GoapAtomAction {

    public virtual void ResetForPlanning(GoapAgent agent) {
    }

    public virtual bool CanExecute(GoapAgent agent) {
        return true;
    }

    public virtual GoapResult Start(GoapAgent agent) {
        return GoapResult.SUCCESS;
    }

    public virtual GoapResult Update(GoapAgent agent) {
        return GoapResult.SUCCESS;
    }

    public virtual void OnFail(GoapAgent agent) {
    }
}

When an atomic action returns FAILED, the whole current plan fails and the agent plans again. A RUNNING result means that the current action is still running, which also means that the current plan is still ongoing. A SUCCESS result means that the action has finished its execution and the agent can proceed to the next atomic action. When all of the atomic actions have returned SUCCESS, the whole GOAP action is a success and the next GOAP action in the plan is executed.
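These semantics can be sketched with a small stand-alone model (simplified: the real atomic actions take a GoapAgent and have the fuller lifecycle shown in the base class above; all names here are my own):

```csharp
using System.Collections.Generic;

public enum GoapResult { SUCCESS, FAILED, RUNNING }

// Simplified atomic action for this sketch.
public abstract class AtomAction {
    public virtual GoapResult Start() { return GoapResult.SUCCESS; }
    public virtual GoapResult Update() { return GoapResult.SUCCESS; }
}

// Example atom: keeps running for a fixed number of frames, then succeeds.
public class WaitFrames : AtomAction {
    private int remaining;
    public WaitFrames(int frames) { this.remaining = frames; }
    public override GoapResult Start() { return Step(); }
    public override GoapResult Update() { return Step(); }
    private GoapResult Step() {
        this.remaining -= 1;
        return this.remaining > 0 ? GoapResult.RUNNING : GoapResult.SUCCESS;
    }
}

// Steps a GOAP action through its atomic actions.
// Call ExecuteStep() once per frame until it stops returning RUNNING.
public class AtomSequence {
    private readonly List<AtomAction> atoms;
    private int index;
    private bool started;

    public AtomSequence(params AtomAction[] atoms) {
        this.atoms = new List<AtomAction>(atoms);
    }

    public GoapResult ExecuteStep() {
        while (this.index < this.atoms.Count) {
            AtomAction current = this.atoms[this.index];
            GoapResult result = this.started ? current.Update() : current.Start();
            this.started = true;

            if (result == GoapResult.FAILED) {
                return GoapResult.FAILED; // whole plan fails; the agent replans
            }
            if (result == GoapResult.RUNNING) {
                return GoapResult.RUNNING; // still working; try again next frame
            }

            // SUCCESS: move on to the next atomic action.
            this.index += 1;
            this.started = false;
        }
        return GoapResult.SUCCESS; // all atoms succeeded; the GOAP action is done
    }
}
```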

This concept makes it easy for me to add failure conditions while an action is being executed. Whenever one action fails, the agent automatically replans and proceeds to execute its new set of actions.

Condition Resolver

Condition Resolvers are objects that can query the current world conditions needed during planning. I implemented this as another base class in our system; the concrete classes can then be selected in the editor. This is what the base class looks like:

public abstract class ConditionResolver {

    private bool resolved;
    private bool conditionMet;

    public ConditionResolver() {
    }

    public void Reset() {
        this.resolved = false;
        this.conditionMet = false;
    }

    public bool IsMet(GoapAgent agent) {
        if(!this.resolved) {
            // Not yet resolved
            this.conditionMet = Resolve(agent);
            this.resolved = true;
        }

        return this.conditionMet;
    }

    protected abstract bool Resolve(GoapAgent agent);
}


Note that it has logic ensuring Resolve() is only invoked once. Concrete subclasses only need to override this method. It may execute complex calculations, so we make sure it’s called at most once, and only when needed during planning.
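As an illustration, a concrete resolver might look like the one below. Everything here is a made-up example: the HasFreeComputerResolver, the FreeComputerCount field, and the stub GoapAgent are mine, included only so the caching behaviour can be seen in isolation.

```csharp
// Minimal stub so the example is self-contained; the real GoapAgent
// lives in the GOAP framework described above.
public class GoapAgent {
    public int FreeComputerCount;
}

public abstract class ConditionResolver {
    private bool resolved;
    private bool conditionMet;

    public void Reset() {
        this.resolved = false;
        this.conditionMet = false;
    }

    public bool IsMet(GoapAgent agent) {
        if (!this.resolved) {
            this.conditionMet = Resolve(agent); // expensive work happens once
            this.resolved = true;
        }
        return this.conditionMet;
    }

    protected abstract bool Resolve(GoapAgent agent);
}

// Hypothetical concrete resolver: asks the world if a computer is free.
public class HasFreeComputerResolver : ConditionResolver {
    protected override bool Resolve(GoapAgent agent) {
        // In the real game this would be a more involved world query.
        return agent.FreeComputerCount > 0;
    }
}
```

Until Reset() is called, repeated IsMet() queries during planning reuse the first result instead of re-running the query.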

This is what it looks like in the editor:


All conditions default to false unless they have a resolver, which is used to query the actual state of the condition.


Once the conditions, resolvers, and actions have been set up, all that’s left to do is to add goal conditions and invoke Replan().

void Start() {
    this.agent = GetComponent<GoapAgent>();

    // Start the AI
    this.agent.AddGoal("StudentBehaviour", true);
    this.agent.Replan();
}

If there are new goals to satisfy, the same calls can be invoked to change the goal(s) and have a new plan executed.

So Far So Good

Our custom GOAP system is working well for us… for now. I now have working worker agents and student agents, and more will be added. Here’s hoping we don’t need to revamp the system, as we’re already deep into it.

I Created a Politically Corrupt AI

(This was originally posted in Squeaky Wheel’s website.)


I’ve been using the Genetic Algorithm (GA) as an aid for game design and development. It fills me with excitement that I can simulate natural selection to help me look for the best solutions to problems. Now I’ll tell you the story of how I used it to improve our AI.

My GA knowledge is still somewhat limited. I learned how to write code for it using this website (Go read it. It’s awesome! So unlike smarty academic papers.) To give you an idea in casual speak: GA basically simulates the theory of evolution to look for the most “fit” solution. In this simulation, the species or “individuals” are candidate solutions to the problem. At the start, a certain number of individuals are spawned with random configurations, so most of them are dumb solutions at the beginning. Each of them is then assessed and given a fitness score; a higher score means the individual is closer to a solution. From this initial population, we spawn the next generation. To do that, we either mutate individuals or let them breed (yes, solutions can mate), with the rule that those with higher fitness scores have a higher chance of being selected for mutation or breeding. With this new generation, we repeat the process of assessment and spawning until we find that one individual that solves our problem.

When coding a GA, you need some important pieces: the individual representation, the fitness assessment function, the mutation function, and the crossover function. If you have these four, you can run a GA. Individual representation is a bit tricky: how do you represent a solution that can also be mutated and bred? One of the most common representations is a list of bits, stored either as a list of booleans or as integers manipulated bitwise. Mutation is then just flipping a random number of bits. Breeding, or crossover, is simply exchanging a certain number of bits between two individuals.
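Those bit operations are tiny in code. Here is a toy sketch (my own illustration, not the game’s code), using per-bit mutation and one-point crossover:

```csharp
using System;

// Toy bit-string operators for a GA individual represented as bool[].
public static class BitGa {
    // Mutation: flip each bit with the given probability.
    public static bool[] Mutate(bool[] genes, double flipChance, Random rng) {
        var result = (bool[])genes.Clone();
        for (int i = 0; i < result.Length; i++) {
            if (rng.NextDouble() < flipChance) {
                result[i] = !result[i];
            }
        }
        return result;
    }

    // One-point crossover: the child takes the head of one parent
    // and the tail of the other.
    public static bool[] Crossover(bool[] a, bool[] b, int point) {
        var child = new bool[a.Length];
        for (int i = 0; i < a.Length; i++) {
            child[i] = i < point ? a[i] : b[i];
        }
        return child;
    }
}
```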

Representation by bits was the only representation I knew of. It’s what AI Junkie taught me and I stuck with it. That is, until I read a book called “Essentials of Metaheuristics”, which I highly recommend. The contents are written in an informal way, not in an academic bullshit way. It’s a primer on the different algorithms in the field of metaheuristics, though most of it is about GA. From there, I learned that you can represent an individual with anything: lists, trees, graphs, your own data structure. Mutation and crossover can be any made-up alteration of your representation: adding a child, removing a child, changing a value, swapping nodes and edges. Anything! I realized how dumb I was for never arriving at that thought.

That gave me an aha moment: what if I automate the creation of our AI using GA? Our AI configuration is very simple. At the same time, the AI is also the most neglected part of our game; we haven’t touched it for a long time. We have a working AI that I configured by hand, but our mechanics have already changed so much that we don’t know if it’s still competitive, and configuring a new AI by hand would take time.

My team gave me a week to work on this, May 2-8, 2016. I wasn’t sure it would work. What if looking for a better AI takes too long, like days of running the simulation? I certainly thought it might, because the assessment function pits two AI players against each other, and the one who wins gets the bigger fitness score. A single playthrough takes time, even sped up. The point is, making the GA could have been a waste of time.

The first thing I did was make a fast mode for our game: no animations, movement becomes teleportation, standby/wait times removed, etc. It wasn’t easy. I didn’t have time to write another version of the game solely for the GA; instead, I used what we have now and added a mode where it can be played extremely fast. In the end, I had a mode where an AI vs AI match takes around one minute to complete 15 turns. Still not fast enough, but quite good already.

Then I made something I call a “multi frame” GA: basically, a GA that is spread across multiple frames. Remember that the assessment function is for the AI to play the game, so the GA must wait for that game to end before it can move on to the rest of the algorithm. In fact, if there are 10 individuals in a generation, 10 games must be played before moving on to spawn the next generation.
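The idea can be sketched with a plain C# iterator standing in for a Unity coroutine (every yield is one frame; the class and method names are mine, and the fitness function is a placeholder):

```csharp
using System.Collections;
using System.Collections.Generic;

// Sketch of a GA spread across frames: evaluating each individual
// yields until its game finishes, so the rest of the engine keeps running.
public class MultiFrameGa {
    public List<float> Fitness = new List<float>();

    // framesPerGame simulates "an assessment game takes many frames to finish".
    public IEnumerator RunGeneration(int populationSize, int framesPerGame) {
        for (int i = 0; i < populationSize; i++) {
            // Play the assessment game for this individual.
            for (int frame = 0; frame < framesPerGame; frame++) {
                yield return null; // wait a frame; the game advances meanwhile
            }
            this.Fitness.Add(Assess(i));
        }
        // All games done; a real GA would now spawn the next generation.
    }

    private float Assess(int individualIndex) {
        return individualIndex; // placeholder fitness score
    }
}
```

In Unity, RunGeneration would be handed to StartCoroutine; here, each MoveNext() call plays the role of one frame.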

Our AI configuration is all code. We use a utility-based AI, by the way. It is represented as a set of “task scorers”. Each scorer has an arbitrary set of “considerations”, classes that help compute the score for a task. The AI generates all the possible tasks, scores them using the scorers, and picks the one with the highest value.
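In sketch form, that scoring loop might look like this (hypothetical shapes and names; real utility systems often combine consideration scores differently, e.g. by multiplying instead of summing):

```csharp
using System.Collections.Generic;

// Hypothetical shapes for the utility AI described above.
public class Task {
    public string Id;
}

public interface IConsideration {
    float Score(Task task);
}

// Example consideration that contributes a fixed score (for illustration).
public class FixedConsideration : IConsideration {
    private readonly float value;
    public FixedConsideration(float value) { this.value = value; }
    public float Score(Task task) { return this.value; }
}

public class TaskScorer {
    public string TaskId;
    public List<IConsideration> Considerations = new List<IConsideration>();

    // Sum the consideration scores for this task.
    public float Score(Task task) {
        float total = 0f;
        foreach (IConsideration c in this.Considerations) {
            total += c.Score(task);
        }
        return total;
    }
}

public static class UtilityAi {
    // Score every candidate task and pick the highest-valued one.
    public static Task PickBest(List<Task> tasks, Dictionary<string, TaskScorer> scorers) {
        Task best = null;
        float bestScore = float.MinValue;
        foreach (Task task in tasks) {
            float score = scorers[task.Id].Score(task);
            if (score > bestScore) {
                bestScore = score;
                best = task;
            }
        }
        return best;
    }
}
```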

My plan was to use the GA to generate different combinations of these scorers and considerations until we got one that beats the current best configuration. Before anything else, I needed the configuration to be saved to a file: every time the GA finds a better AI, it should save that configuration. So I turned the AI configuration into XML, using the class names and variables of the consideration classes, and I load them back using reflection. It looks like this now:

<ScorerSet id="IndividualZero" fitness="1" timestamp="">

  <Scorer taskId="Bribe">
    <Consideration name="BlockadedDistrictConsideration" />
    <Consideration name="TerrorizedConsideration" />

    <Consideration name="MinimumBudgetConsideration">
      <Variable type="NamedInt" name="minimumBudget" value="1000" />
    </Consideration>

    <Consideration name="ReachableConsideration">
      <Variable type="NamedFloat" name="multiplierIfUnreachable" value="0" />
    </Consideration>

    <Consideration name="MustHaveMatchingPlatformConsideration" />

    <Consideration name="ReachReputationConsideration">
      <Variable type="NamedFloat" name="populationPercentage" value="0.85" />
    </Consideration>

    <Consideration name="BonusTuningConsideration">
      <Variable type="NamedFloat" name="bonusToSet" value="1.0" />
    </Consideration>

    <Consideration name="CommandPointsConsideration" />

    <Consideration name="NeighborCountConsideration">
      <Variable type="NamedInt" name="desirableCount" value="4" />
    </Consideration>

    <Consideration name="OpponentStaffCountConsideration" />

    <Consideration name="BribeBetterThanCampaignConsideration">
      <Variable type="NamedInt" name="minReputationGainDifference" value="1000" />
      <Variable type="NamedInt" name="rankIfMet" value="17" />
      <Variable type="NamedInt" name="bonusIfMet" value="10" />
      <Variable type="NamedFloat" name="multiplierIfMet" value="5" />
    </Consideration>

    <Consideration name="ScandalCountReachedConsideration">
      <Variable type="NamedInt" name="scandalCount" value="4" />
      <Variable type="NamedFloat" name="multiplierIfMet" value="0" />
    </Consideration>
  </Scorer>

  <Scorer taskId="RaiseFunds">
    <Consideration name="BlockadedDistrictConsideration" />
    <Consideration name="TerrorizedConsideration" />

    <Consideration name="ReachableConsideration">
      <Variable type="NamedFloat" name="multiplierIfUnreachable" value="0" />
    </Consideration>

    <Consideration name="HasSignificantFundsConsideration">
      <Variable type="NamedInt" name="preferredAmountToRaise" value="1000" />
    </Consideration>

    <Consideration name="BonusTuningConsideration">
      <Variable type="NamedFloat" name="bonusToSet" value="1.0" />
    </Consideration>

    <Consideration name="LowFundsConsideration">
      <Variable type="NamedFloat" name="fundsThreshold" value="0.3" />
    </Consideration>

    <Consideration name="CommandPointsConsideration" />
    <Consideration name="HigherReputationConsideration" />

    <Consideration name="NeighborCountConsideration">
      <Variable type="NamedInt" name="desirableCount" value="4" />
    </Consideration>

    <Consideration name="NearBailiwickConsideration" />
  </Scorer>

  <!-- ... Rest is omitted -->
</ScorerSet>
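The reflection step might look something like this (a sketch only: the loader and its helpers are mine, and the simplified consideration classes exist just to mirror the XML above):

```csharp
using System;

// Simplified stand-ins for consideration classes named in the XML.
public class TerrorizedConsideration {
}

public class MinimumBudgetConsideration {
    public int minimumBudget;
}

public static class ConsiderationLoader {
    // Instantiate a consideration from the class name read out of the XML.
    public static object Create(string className) {
        foreach (Type type in typeof(ConsiderationLoader).Assembly.GetTypes()) {
            if (type.Name == className) {
                return Activator.CreateInstance(type);
            }
        }
        throw new InvalidOperationException("Unknown consideration: " + className);
    }

    // Set a NamedInt variable from the XML onto the consideration instance.
    public static void SetInt(object consideration, string fieldName, int value) {
        consideration.GetType().GetField(fieldName).SetValue(consideration, value);
    }
}
```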

The mutation function is then just a matter of:

  • Add random consideration
  • Remove random consideration
  • Change some random variables

Crossover between two individuals is simply swapping a random number of considerations.
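Those operators can be sketched on a toy model of the scorer set (in the real system the individual is the XML structure above; the classes and probabilities here are mine):

```csharp
using System;
using System.Collections.Generic;

// Toy individual: each scorer set is just a list of consideration names.
public class ScorerSetIndividual {
    public List<string> Considerations = new List<string>();

    public ScorerSetIndividual Clone() {
        var copy = new ScorerSetIndividual();
        copy.Considerations.AddRange(this.Considerations);
        return copy;
    }
}

public static class ScorerSetGa {
    // Mutation: either remove a random consideration or add one from a pool.
    public static ScorerSetIndividual Mutate(ScorerSetIndividual parent, List<string> pool, Random rng) {
        ScorerSetIndividual child = parent.Clone();
        if (child.Considerations.Count > 0 && rng.NextDouble() < 0.5) {
            child.Considerations.RemoveAt(rng.Next(child.Considerations.Count));
        } else {
            child.Considerations.Add(pool[rng.Next(pool.Count)]);
        }
        return child;
    }

    // Crossover: swap a random subset of considerations between two parents.
    public static void Crossover(ScorerSetIndividual a, ScorerSetIndividual b, Random rng) {
        int swaps = Math.Min(a.Considerations.Count, b.Considerations.Count);
        for (int i = 0; i < swaps; i++) {
            if (rng.NextDouble() < 0.5) {
                string tmp = a.Considerations[i];
                a.Considerations[i] = b.Considerations[i];
                b.Considerations[i] = tmp;
            }
        }
    }
}
```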

For the initial population, I used our existing AI as a starting point. I called it “Individual Zero”. The first population consists of mutated versions of it. With all the pieces together, I had a GA that looks for a better AI.

First Runs

When I was about to finish, I was too excited to run it. I knew it would take time, so I planned to run the simulation overnight while I slept. It was a Saturday night, and I was sleepy and tired. I fixed some errors that could potentially halt the simulation and set the settings for fast mode: every game lasts only 15 turns, and the population size is only 10. Then I finally let the baby run. I watched the first few AI matches. After a while, with nothing going wrong, I locked the laptop and went to bed.

When I woke up, I checked it immediately. The simulation had hung. I looked at the Task Manager: 90% RAM usage. So I closed it. But lo and behold, it had generated 10 AI configurations, each one better than the one before it. So I was like, “that went well”. I pushed my work and restarted my laptop. Then my laptop wouldn’t start; it only said “Diagnosing Your PC”. But I wasn’t worried. I knew I hadn’t done anything stupid; the simulation had probably just messed up the memory. An hour later, my laptop was alive again, and I immediately fixed the memory leak. Lesson learned when writing a resource-intensive GA like this one.

After coffee, I decided to play a game against the new AI to see if it really is better. Maybe it’s only better against other AIs, not humans. So I used my usual strategy: bribe supporters in the first few turns and acquire districts as soon as I could. I was winning. Every 5 turns, the game shows a graph of how many supporters each candidate has, and I beat his numbers every time. I even had more acquired districts than him. Was I really playing against a better AI? On the last turn, I made my moves knowing I was going to win; I controlled more districts than him. That dread that I had probably just wasted my time returned. Election came… and the AI won in an unbelievable landslide. My jaw dropped. It turned out he didn’t care about the voters and had been befriending more patrons than me. Well, I guess I created a monster.

Finite State Machine vs Behaviour Tree, A True Story

I needed behaviour trees because I wanted to manage the complexity of the unit AI in Warrior Defense, which originally used finite state machines (FSMs). FSMs are great because they are so simple and intuitive. When they get big, however, they become complicated. It got to the point where I was afraid to change the configuration of the FSM because the working AI breaks so easily. In other words, it’s brittle. I don’t like that, and I still have lots of features to add later on that deal with unit interactions.

A part of the Defender unit FSM. Looks like spaghetti, yum!

I decided to make my own behaviour tree tool because I couldn’t find a decent one in the Asset Store, and I’m one of those lazy guys that buys a lot of tools. I tried RAIN, which is free; I hated it, because changes to the tree did not reflect in the game right away, which just frustrated me. I bought React while it was on sale, but I didn’t like how it worked: adding actions was clunky, since it uses functions and runs them in a coroutine, and it doesn’t support parameters, either.

Thus, the journey of developing my own behaviour tree tool began. I call it Banana Tree. I completed the core behaviour tree framework (nodes, sequences, decorators) some months ago, but that was pure code; there was no visual editor, and the AI behaviours I made were hardcoded in classes. Production-wise, this is not very efficient. AI behaviours change a lot, and something that changes a lot should not be code; it should be data. I thought I should probably make a data-driven behaviour tree tool. Looking at the FSM above, I would be writing a lot of code anyway using the current framework.
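For a sense of what that core framework looks like, here is a minimal sequence node. This is my own simplified sketch under assumed names, not Banana Tree’s actual code:

```csharp
using System.Collections.Generic;

public enum BtStatus { SUCCESS, FAILURE, RUNNING }

public abstract class BtNode {
    public abstract BtStatus Tick();
}

// Simple leaf that always returns a fixed status (for illustration).
public class FixedLeaf : BtNode {
    private readonly BtStatus status;
    public FixedLeaf(BtStatus status) { this.status = status; }
    public override BtStatus Tick() { return this.status; }
}

// Sequence: tick children in order; fail fast, succeed only if all succeed.
public class Sequence : BtNode {
    private readonly List<BtNode> children;
    private int current;

    public Sequence(params BtNode[] children) {
        this.children = new List<BtNode>(children);
    }

    public override BtStatus Tick() {
        while (this.current < this.children.Count) {
            BtStatus status = this.children[this.current].Tick();
            if (status == BtStatus.RUNNING) {
                return BtStatus.RUNNING; // resume from this child next tick
            }
            if (status == BtStatus.FAILURE) {
                this.current = 0; // reset for the next run
                return BtStatus.FAILURE;
            }
            this.current += 1; // SUCCESS: advance to the next child
        }
        this.current = 0;
        return BtStatus.SUCCESS;
    }
}
```

Selectors, parallels, and decorators follow the same Tick() pattern, which is what makes each node feel like a small function.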

So I did make a data-driven behaviour tree with its own visual editor. I have just finished the bare essentials that can create a working behaviour out of data, and I’ve successfully ported the complex behaviour above to its Banana Tree equivalent. The high-level behaviour looks like this:

High level behaviour nodes. It looks very neat.

Don’t be fooled, though, because it looks like this underneath those parent nodes:

Worms under the stone

What I’d like to point out is that it’s still complicated, but it’s now more manageable. I can complete one branch of the behaviour, forget about it once it works, and begin working on another one. My brain isn’t strained as much by the number of visual objects because I can just fold a node and stop thinking about the things beneath it. It’s so unlike FSMs, where you can’t help but see all the states and their wiring.

Using behaviour trees feels more like programming: each node feels like a function. Working with an FSM feels like wiring; it is as static as “wire one event to one transition state”. I now understand what articles about behaviour trees mean when they say FSMs are constricting (I didn’t get it before). Behaviour trees don’t have wires; they execute according to how the tree is structured. What to do next is not explicitly stated in a behaviour tree, which can select among many possible action branches to execute. In FSMs, you have to explicitly specify which state to go to next. This can be limiting because there may be a lot of possible states to go to. It’s possible to make an FSM work like a behaviour tree, but it takes more work and maintenance, and you’ll most probably end up with an FSM like the one I showed.

I can’t say that behaviour trees are better, either. They have a major disadvantage: a steep learning curve. You have to think like a compiler when working with BTs, and you have to know what elements like Sequence, Selector, Parallel, and Decorator mean. Like every skilled programmer says, “Know your tools. Know where they’re best used.” Behaviour trees have been proven to work for complex behaviours, but I probably couldn’t hand this to a non-programmer and let them figure it out on their own. FSMs are much easier to understand and far more intuitive. They are probably the way to go if you only need to model simple behaviours.