How to run jobs using ComponentDataFromEntity in parallel

ComponentDataFromEntity<T> is one of the most commonly used APIs in Unity's Entities package (ECS), at least in our case. It's basically like a Dictionary<Entity, T> where T is an IComponentData. You can look up any component as long as you have its owner entity. You can get a ComponentDataFromEntity<T> for any T in any system.
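
To make that concrete, here is a minimal sketch of the lookup inside some SystemBase-derived system. Translation comes from Unity.Transforms; the method and its targetEntity parameter are made up for illustration:

private void NudgeTarget(Entity targetEntity) {
    ComponentDataFromEntity<Translation> allTranslation = GetComponentDataFromEntity<Translation>();
    if (allTranslation.HasComponent(targetEntity)) {
        // Read and write through the entity key, just like a dictionary lookup
        Translation translation = allTranslation[targetEntity];
        translation.Value.y += 1.0f;
        allTranslation[targetEntity] = translation;
    }
}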

The most obvious use case is to treat it as a reference or pointer to a component owned by another entity, for example in parent-child relationships. We also use it to reduce archetype size so that more entities can fit in a chunk. We do this by moving heavy components to another entity, which the original owner entity then references.
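
As a rough illustration of that second use case (the component and field names here are hypothetical, not from our actual code):

// Heavy data lives on its own entity so it doesn't bloat the owner's archetype
public struct HeavyStats : IComponentData {
    // Imagine many large fields here
    public float4x4 someMatrix;
}

// The original owner keeps only a small reference component
public struct HeavyStatsReference : IComponentData {
    // Resolved later through ComponentDataFromEntity<HeavyStats>
    public Entity heavyStatsEntity;
}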

The main drawback is that once you use it in a job, you can't run that job in parallel as is. The Unity runtime won't even allow it; it will just throw an error. You can still make it run in parallel, but there are rules, and I'm going to show you how in this post.

It is bad, they say

You may run across documentation recommending that you avoid ComponentDataFromEntity or use it sparingly. That advice is correct… to some extent. Using it means accessing data at scattered locations instead of accessing it contiguously, which causes more cache misses.

However, this is not so bad when used in an actual game. A naively written, Burst-compiled job using ComponentDataFromEntity still runs significantly faster than its equivalent normal OOP code, even when the job runs in a single thread. So don't feel ashamed if you have to use it. You are going to, in one way or another. If a single-threaded job is not good enough for you, this post may serve as a guide on how to make these jobs multithreaded.

Helper System

I'm going to use this simple base system to make writing systems with jobs easier:

public abstract class JobSystemBase : SystemBase {
    protected override void OnUpdate() {
        this.Dependency = OnUpdate(this.Dependency);
    }

    protected abstract JobHandle OnUpdate(JobHandle inputDeps);
}

With this, instead of overriding OnUpdate(), you override the OnUpdate() version that accepts a dependency and returns a handle. This was the old way of making systems with jobs and I just like it better. I'm also going to write mostly struct jobs in this post. No Entities.ForEach() here. Struct jobs are just better in my opinion: they're more flexible and more future proof. Entities.ForEach() is translated into a struct job in the background, so struct jobs have more staying power as they are the fundamental building blocks. Entities.ForEach() is just magical sugar that can still change.

So if you plan to use Unity’s ECS, learn how to write struct jobs and use them more often (especially when dealing with ComponentDataFromEntity). Use Entities.ForEach() only for trivial cases.
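
For comparison, a trivial case where Entities.ForEach() is perfectly fine could look something like this (written as a plain SystemBase, purely for illustration):

public class TrivialMoveSystem : SystemBase {
    protected override void OnUpdate() {
        // No cross-entity lookups, no CDFE needed: Entities.ForEach() is enough here
        Entities.ForEach((ref Translation translation) => {
            translation.Value.y += 1f;
        }).ScheduleParallel();
    }
}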

What are we aiming for?

Every time I mention multithreading or running in parallel, I really only mean that ScheduleParallel() can be called on the job instead of just Schedule(). So ScheduleParallel() = multithreaded. That's it, at least in this article.
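
In code, the difference is simply which scheduling call we end up being able to make from OnUpdate() (these are alternatives, not meant to be combined):

// Multithreaded: batches are distributed across worker threads
return job.ScheduleParallel(this.query, 1, inputDeps);

// Single threaded: the whole query is processed by one thread
return job.Schedule(this.query, inputDeps);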

We are aiming for this:

Running in parallel

Not this:

Running in a single thread.

Here’s a sample system with a job using ComponentDataFromEntity that can’t be run in parallel:

// Look! This is using JobSystemBase *wink
public class CantRunInParallel : JobSystemBase {
    private EntityQuery query;
    
    protected override void OnCreate() {
        base.OnCreate();
        this.query = GetEntityQuery(typeof(Alter));
        
        // Create sample entities
        Entity target = this.EntityManager.CreateEntity(typeof(Translation));

        Entity alter = this.EntityManager.CreateEntity(typeof(Alter));
        this.EntityManager.SetComponentData(alter, new Alter(target));
    }

    protected override JobHandle OnUpdate(JobHandle inputDeps) {
        Job job = new Job() {
            alterType = GetComponentTypeHandle<Alter>(), 
            allTranslation = GetComponentDataFromEntity<Translation>()
        };

        // Unity will complain about this
        // Can't write to allTranslation in parallel
        return job.ScheduleParallel(this.query, 1, inputDeps);
    }

    private readonly struct Alter : IComponentData {
        public readonly Entity target;

        public Alter(Entity target) {
            this.target = target;
        }
    }
    
    private struct Job : IJobEntityBatch {
        public ComponentTypeHandle<Alter> alterType;

        public ComponentDataFromEntity<Translation> allTranslation;
        
        public void Execute(ArchetypeChunk batchInChunk, int batchIndex) {
            NativeArray<Alter> alters = batchInChunk.GetNativeArray(this.alterType);
            // Unity.Mathematics.Random needs a non-zero seed; the default constructor leaves it at zero
            Random random = new Random((uint)batchIndex + 1);

            for (int i = 0; i < alters.Length; i++) {
                Alter alter = alters[i];

                // Write to allTranslation
                this.allTranslation[alter.target] = new Translation() {
                    Value = random.NextFloat3()
                };
            }
        }
    }
}

It's a contrived example. An entity with an Alter component has a reference to a target, which is another entity. The job here traverses all entities with Alter, uses the target to reference the Translation component of that other entity, and changes it to a random value.

If you copy this script into a project with the Entities package and run even an empty scene, you will encounter this error:

InvalidOperationException: Job.JobData.allTranslation is not declared [ReadOnly] in a IJobParallelFor job. The container does not support parallel writing. Please use a more suitable container type.

What can we do?

ReadOnly

If you read the error message carefully, one of the hints is already there. You can run a job with CDFEs (ComponentDataFromEntity) in parallel if you mark them as ReadOnly. This makes sense because there are no race conditions when you're just reading data; it's safe to access the data in a CDFE even from different concurrent threads. However, this only works for jobs that only read from CDFEs. When you mark a CDFE as ReadOnly but then try to write to it, the Unity editor will punch you in the face and call you a liar.

Here's an example. Say you have a model of Characters and their Abilities. Characters are entities and each Ability is also its own entity. Characters own a set of Abilities. Let's say there's an ability that is only enabled when the owner Character is dead, say "Revive". The system for that ability might look like this:

public class EnableOrDisableReviveSystem : JobSystemBase {
    private EntityQuery query;

    protected override void OnCreate() {
        this.query = GetEntityQuery(typeof(Ability), typeof(Revive));
    }

    protected override JobHandle OnUpdate(JobHandle inputDeps) {
        Job job = new Job() {
            abilityType = GetComponentTypeHandle<Ability>(),
            reviveType = GetComponentTypeHandle<Revive>(),
            allHealth = GetComponentDataFromEntity<Health>()
        };
    
        return job.ScheduleParallel(this.query, 1, inputDeps);
    }
    
    [BurstCompile]
    private struct Job : IJobEntityBatch {
        public ComponentTypeHandle<Ability> abilityType;
        public ComponentTypeHandle<Revive> reviveType;

        [ReadOnly]
        public ComponentDataFromEntity<Health> allHealth;
    
        public void Execute(ArchetypeChunk batchInChunk, int batchIndex) {
            NativeArray<Ability> abilities = batchInChunk.GetNativeArray(this.abilityType);
            NativeArray<Revive> reviveList = batchInChunk.GetNativeArray(this.reviveType);

            for (int i = 0; i < batchInChunk.Count; ++i) {
                Ability ability = abilities[i];
                
                // Set enabled if dead. Disabled otherwise.
                Revive revive = reviveList[i];
                revive.enabled = this.allHealth[ability.ownerEntity].IsDead;
                
                // Modify
                reviveList[i] = revive;
            }
        }
    }
}

The thing to look at here is the Job. Note that allHealth has the ReadOnly attribute. We just need Health to see if the owner of the ability is dead. We don't write anything to Health. This job can be run in parallel, as you can see in OnUpdate().

Write to CDFE but still in parallel

Read-only CDFEs don't get us very far, though. We need to write to them if we want to get somewhere. You can write to CDFEs just fine as long as the job runs in a single thread (just call Schedule()). A single-threaded job that's Burst compiled is really OK in most cases. There's a way to run these jobs in parallel, but it comes with a caveat.
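
For reference, the earlier CantRunInParallel example would have worked as is if we had scheduled it on a single thread, something like this:

protected override JobHandle OnUpdate(JobHandle inputDeps) {
    Job job = new Job() {
        alterType = GetComponentTypeHandle<Alter>(),
        allTranslation = GetComponentDataFromEntity<Translation>()
    };

    // Schedule() instead of ScheduleParallel(): only one thread touches
    // allTranslation, so writing to the CDFE needs no special attributes.
    return job.Schedule(this.query, inputDeps);
}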

There's really only one rule about CDFEs in terms of running in parallel: you can't write to a CDFE in a parallel job. This is understandable because there's no guarantee of a consistent ordering when writing data from concurrent threads. Say you have concurrent threads A and B that both write to a component owned by entity X. In some cases thread A may write first, in others B may write first. Or it could happen that thread A writes to X and the entity is passed on to some more processing in the same thread, but thread B writes to it before that processing happens because the two threads run concurrently. Now the processing in A is using data supplied by thread B. That surely leads to bugs that are hard to track down. So writing to CDFEs in parallel jobs is just outright banned to avoid this.

However, the framework allows you to bypass this rule by adding the NativeDisableParallelForRestriction attribute to a CDFE. Why allow it? Because you, the user, know your data better. You can allow parallel writing if you're absolutely sure that your multithreaded code only writes to unique entries. This means that your code will never write to a CDFE entry more than once, thus avoiding the issue I've described.

This comes up in a lot of cases. For example, AI actions. Say you model the actions as entities, and each holds an entity reference to its agent owner, which it reads data from and writes data to. Suppose that only one AI action per agent runs at a time. When you run the systems per action type, each action can then only ever write to its own agent owner. Since there's only one executing action per agent, this guarantees that the job code will only write to components of unique agents.

Let’s see a simple example. Say we implement a heal ability that adds hit points to its referenced agent. Here’s how it could look:

public class HealSystem : JobSystemBase {
    private EntityQuery query;

    protected override void OnCreate() {
        this.query = GetEntityQuery(typeof(Ability), typeof(Heal));
    }

    protected override JobHandle OnUpdate(JobHandle inputDeps) {
        Job job = new Job() {
            abilityType = GetComponentTypeHandle<Ability>(),
            healType = GetComponentTypeHandle<Heal>(),
            allHealth = GetComponentDataFromEntity<Health>()
        };
        
        return job.ScheduleParallel(this.query, 1, inputDeps);
    }
    
    [BurstCompile]
    private struct Job : IJobEntityBatch {
        [ReadOnly]
        public ComponentTypeHandle<Ability> abilityType;
        
        [ReadOnly]
        public ComponentTypeHandle<Heal> healType;

        [NativeDisableParallelForRestriction]
        public ComponentDataFromEntity<Health> allHealth;
        
        public void Execute(ArchetypeChunk batchInChunk, int batchIndex) {
            NativeArray<Ability> abilities = batchInChunk.GetNativeArray(this.abilityType);
            NativeArray<Heal> healList = batchInChunk.GetNativeArray(this.healType);

            for (int i = 0; i < batchInChunk.Count; i++) {
                Ability ability = abilities[i];
                if (!ability.executing) {
                    // Not executing
                    continue;
                }

                Heal heal = healList[i];
                
                // Update health
                Health health = this.allHealth[ability.ownerEntity];
                health.hp += heal.healPoints;
                
                // Modify
                this.allHealth[ability.ownerEntity] = health;
            }
        }
    }
}

You can see here that Job.allHealth, which the job modifies, has the NativeDisableParallelForRestriction attribute, and that the job is run with ScheduleParallel().

You've got to be careful when using NativeDisableParallelForRestriction because the framework won't verify that your code really writes only to unique entries. If it does not, you won't get an error telling you that you've done something wrong; it just becomes a source of hard-to-find bugs. NativeDisableParallelForRestriction really does disable a restriction that is there for safety. Use it at your own risk.

Writing to a list or set of values

Say you have a parent-children relationship between entities. The child entities have a reference to their parent. The parent wants a list of values from its children, say to calculate aggregate values like a total or an average. How might we write jobs for this that run in parallel?

Let's look at a sample scenario: a space ship and its different weapons. Say the game wants to compute the total power of a ship's weapons so we can display it as "ship power". However, the computation of a weapon's power differs per weapon type and per circumstance; for example, a weapon's power becomes lower when its durability is low. In OOP, we might model it like this:

public abstract class Weapon {
    public abstract int Power { get; }
}

public class Gun : Weapon {
    public override int Power {
        get {
            // Computation of power here
        }
    }
}

public class Missile : Weapon {
    public override int Power {
        get {
            // Computation of power here
        }
    }
}

public class Laser : Weapon {
    public override int Power {
        get {
            // Computation of power here
        }
    }
}

public class Ship {
    private readonly List<Weapon> weapons = new List<Weapon>();

    public int AttackPower {
        get {
            int total = 0;
            for (int i = 0; i < this.weapons.Count; i++) {
                total += this.weapons[i].Power;
            }

            return total;
        }
    }
}

You can see here that a ship owns a list of weapons and you can just loop through the weapons to get the total power.

In DOTS, we might model the ship the same way, such that it also owns a list of weapons; there are list types that can live inside components. But let's say we don't do this because it increases the size of the archetype too much. We want to pack as many ship entities in a chunk as possible because we're making a space war game with thousands of ships. We may have to iterate over ship entities frequently, so we want ship chunks to hold more entities so that iteration is faster (fewer cache misses).
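
For contrast, the approach we are avoiding might look something like this (a hypothetical buffer of weapon references stored directly on the ship entity):

// The approach we avoid: the weapon list lives on the ship entity itself.
// InternalBufferCapacity reserves space for this many elements inside the chunk,
// which makes the ship archetype bigger so fewer ships fit per chunk.
[InternalBufferCapacity(10)]
public struct WeaponReference : IBufferElementData {
    public Entity weaponEntity;
}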

So we model weapons as their own entities. This way, they don’t take up chunk space for ships. The weapons themselves could also be elaborate so we have more modelling options if we implement them as entities. It might look like this:

public struct Weapon : IComponentData {
    public readonly Entity shipOwner;

    public Weapon(Entity shipOwner) {
        this.shipOwner = shipOwner;
    }
}

public struct Gun : IComponentData {
    // Gun data here
}

public struct Missile : IComponentData {
    // Missile data here
}

public struct Laser : IComponentData {
    // Laser data here
}

public struct Ship : IComponentData {
    public int attackPower;
}

// Creating a ship then looks like this
Entity ship = entityManager.CreateEntity(typeof(Ship));

// The ship's weapons
Entity gun = entityManager.CreateEntity(typeof(Weapon), typeof(Gun));
entityManager.SetComponentData(gun, new Weapon(ship));
entityManager.SetComponentData(gun, new Gun(/** gun parameters here **/));

Entity missile = entityManager.CreateEntity(typeof(Weapon), typeof(Missile));
entityManager.SetComponentData(missile, new Weapon(ship));
entityManager.SetComponentData(missile, new Missile(/** missile parameters here **/));

// ... Could have more weapons

The key here is to compute the attack power of each weapon type in its own system but how do we do that in parallel? We can’t just add to Ship.attackPower. We can’t run the following job:

private struct ComputeGunAttackPower : IJobEntityBatch {
    [ReadOnly]
    public ComponentTypeHandle<Weapon> weaponType;

    [ReadOnly]
    public ComponentTypeHandle<Gun> gunType;

    [NativeDisableParallelForRestriction]
    public ComponentDataFromEntity<Ship> allShips;

    public void Execute(ArchetypeChunk batchInChunk, int batchIndex) {
        NativeArray<Weapon> weapons = batchInChunk.GetNativeArray(this.weaponType);
        NativeArray<Gun> guns = batchInChunk.GetNativeArray(this.gunType);

        for (int i = 0; i < batchInChunk.Count; i++) {
            Weapon weapon = weapons[i];
            Gun gun = guns[i];
            
            // Add to ship's attackPower (bad!)
            Ship ship = this.allShips[weapon.shipOwner];
            ship.attackPower += ComputeAttackPower(gun);
            
            // Modify
            this.allShips[weapon.shipOwner] = ship;
        }
    }

    private static int ComputeAttackPower(in Gun gun) {
        // Special computation for the gun attack power here
        return X;
    }
}

This is clearly not writing to unique CDFE entries. It only works if a ship is allowed a single gun. Let's say we allow multiple guns per ship because it's stupid to bring a single gun to a massive space war. Multiple gun entities can now modify the attackPower variable of a common ship. This job can cause bugs. Bad!

Let me show you a pattern that I’ve frequently used in our code. I would add a buffer element to the ship entity that would hold the attack power values of a ship’s weapons. It would look like this:

// Let's just say that a ship has a maximum of 10 weapons
[InternalBufferCapacity(10)]
public readonly struct WeaponAttackPower : IBufferElementData {
    public readonly int value;

    public WeaponAttackPower(int value) {
        this.value = value;
    }
}

// Creating a ship will then look like this
Entity ship = entityManager.CreateEntity(typeof(Ship), typeof(WeaponAttackPower));

I would then assign an index to each Weapon so it knows which slot in its parent's WeaponAttackPower buffer to write its attack power to. Now we don't write to Ship.attackPower directly; we write to the WeaponAttackPower buffer at an index. So we add this to the Weapon:

public struct Weapon : IComponentData {
    public readonly Entity shipOwner;
    public readonly int ownerAttackPowerIndex;

    public Weapon(Entity shipOwner, int ownerAttackPowerIndex) {
        this.shipOwner = shipOwner;
        this.ownerAttackPowerIndex = ownerAttackPowerIndex;
    }
}

// Creating a weapon will then look like this. Let's say 2 guns.
{
    Entity gun = entityManager.CreateEntity(typeof(Weapon), typeof(Gun));
    
    // Note here that we specify the index
    entityManager.SetComponentData(gun, new Weapon(ship, 0));
    entityManager.SetComponentData(gun, new Gun(/** Whatever **/));
}

{
    Entity gun = entityManager.CreateEntity(typeof(Weapon), typeof(Gun));
    
    // Note here that we specify the index
    entityManager.SetComponentData(gun, new Weapon(ship, 1));
    entityManager.SetComponentData(gun, new Gun(/** Whatever **/));
}

// Create space for 2 WeaponAttackPower for the weapons to write to
DynamicBuffer<WeaponAttackPower> attackPowerBuffer = this.EntityManager.GetBuffer<WeaponAttackPower>(ship);
attackPowerBuffer.Add(default);
attackPowerBuffer.Add(default); // You can of course use a for loop here

Once we have this set up, we can create the systems for each weapon type that write the attack power into the owner ship's WeaponAttackPower buffer. The following is one such system.

public class WriteGunAttackPowerSystem : JobSystemBase {
    private EntityQuery query;

    protected override void OnCreate() {
        this.query = GetEntityQuery(typeof(Weapon), typeof(Gun));
    }

    protected override JobHandle OnUpdate(JobHandle inputDeps) {
        Job job = new Job() {
            weaponType = GetComponentTypeHandle<Weapon>(),
            gunType = GetComponentTypeHandle<Gun>(),
            allAttackPowerBuffers = GetBufferFromEntity<WeaponAttackPower>()
        };
        
        return job.ScheduleParallel(this.query, 1, inputDeps);
    }
    
    [BurstCompile]
    private struct Job : IJobEntityBatch {
        [ReadOnly]
        public ComponentTypeHandle<Weapon> weaponType;
        
        [ReadOnly]
        public ComponentTypeHandle<Gun> gunType;

        [NativeDisableParallelForRestriction]
        public BufferFromEntity<WeaponAttackPower> allAttackPowerBuffers;
        
        public void Execute(ArchetypeChunk batchInChunk, int batchIndex) {
            NativeArray<Weapon> weapons = batchInChunk.GetNativeArray(this.weaponType);
            NativeArray<Gun> guns = batchInChunk.GetNativeArray(this.gunType);

            for (int i = 0; i < batchInChunk.Count; i++) {
                Weapon weapon = weapons[i];
                Gun gun = guns[i];

                int gunAttackPower = ComputeAttackPower(gun);
                
                // Note here that we write on the index assigned to the weapon
                DynamicBuffer<WeaponAttackPower> attackPowerBuffer = this.allAttackPowerBuffers[weapon.shipOwner];
                attackPowerBuffer[weapon.ownerAttackPowerIndex] = new WeaponAttackPower(gunAttackPower);
            }
        }
        
        private static int ComputeAttackPower(in Gun gun) {
            // Special computation for gun attack power here
            return X;
        }
    }
}

So how can this job run in parallel? The key is that we avoid a race condition by writing to a unique slot of the parent's buffer. BufferFromEntity<T> is like a CDFE but for DynamicBuffers; think of it as Dictionary<Entity, DynamicBuffer<T>>. You can use a BFE to access the DynamicBuffer associated with an entity. In the job here, we get the DynamicBuffer<WeaponAttackPower> associated with the ship owner, then write to the slot at the index stored in the weapon, which was assigned a unique value when the weapon was created.
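
Outside of jobs, using a BFE looks much like using a CDFE. Here is a minimal sketch inside a system (the method and its shipEntity parameter are just for illustration):

private void InspectWeaponPowers(Entity shipEntity) {
    // Passing true requests read-only access
    BufferFromEntity<WeaponAttackPower> allBuffers = GetBufferFromEntity<WeaponAttackPower>(true);
    if (allBuffers.HasComponent(shipEntity)) {
        DynamicBuffer<WeaponAttackPower> attackPowers = allBuffers[shipEntity];
        // attackPowers[i].value is the power written by the weapon assigned to slot i
    }
}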

Finally, we create the job that iterates through ships and computes the total of the attack power values stored in the WeaponAttackPower buffer.

public class ComputeShipAttackPowerSystem : JobSystemBase {
    private EntityQuery query;

    protected override void OnCreate() {
        this.query = GetEntityQuery(typeof(Ship), typeof(WeaponAttackPower));
    }

    protected override JobHandle OnUpdate(JobHandle inputDeps) {
        Job job = new Job() {
            shipType = GetComponentTypeHandle<Ship>(),
            attackPowerType = GetBufferTypeHandle<WeaponAttackPower>()
        };
        
        return job.ScheduleParallel(this.query, 1, inputDeps);
    }
    
    [BurstCompile]
    private struct Job : IJobEntityBatch {
        public ComponentTypeHandle<Ship> shipType;
        
        [ReadOnly]
        public BufferTypeHandle<WeaponAttackPower> attackPowerType;
        
        public void Execute(ArchetypeChunk batchInChunk, int batchIndex) {
            NativeArray<Ship> ships = batchInChunk.GetNativeArray(this.shipType);
            BufferAccessor<WeaponAttackPower> attackPowerBuffers = batchInChunk.GetBufferAccessor(this.attackPowerType);

            for (int i = 0; i < batchInChunk.Count; ++i) {
                DynamicBuffer<WeaponAttackPower> attackPowerBuffer = attackPowerBuffers[i];
                
                // We compute and assign the total here
                Ship ship = ships[i];
                ship.attackPower = GetTotal(attackPowerBuffer);
                
                // Modify
                ships[i] = ship;
            }
        }

        private static int GetTotal(in DynamicBuffer<WeaponAttackPower> attackPowerBuffer) {
            int total = 0;
            for (int i = 0; i < attackPowerBuffer.Length; ++i) {
                total += attackPowerBuffer[i].value;
            }
            
            return total;
        }
    }
}

With these, we compute each weapon's attack power in parallel, then compute the total, which also runs in parallel. We are closer to simulating that space war with thousands of ships!

While this is just a simple example, we have definitely used this pattern to convert some of our frameworks to DOTS. One of these is the Utility AI system described in the book Behavioral Mathematics for Game AI. Basically, you have a Selector which has a list of Options, and each Option has a list of Considerations. In our framework, we implemented the Selector, Option, and Consideration as their own entities. Each consideration is computed differently, so we implement one system for each unique consideration computation. The results of these computations are written into a DynamicBuffer owned by the parent Option.

Next, we run a system that consolidates the consideration values that were written into the option's buffer. This consolidated value is the option's score, which is written to another DynamicBuffer of values owned by its Selector. From there, each selector has a list of scored options. It's up to the client coder what to do with this: they can pick the best option or pick a random option weighted by its score. All the systems that we've made for this framework are multithreaded.
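
As a rough sketch of that data layout (the names here are hypothetical, not the actual framework code):

// Each Consideration entity knows its parent Option and its slot in the Option's buffer
public struct Consideration : IComponentData {
    public Entity optionOwner;
    public int considerationIndex;
}

[InternalBufferCapacity(8)]
public struct ConsiderationValue : IBufferElementData {
    public float value;
}

// Each Option entity knows its parent Selector and its slot in the Selector's buffer
public struct Option : IComponentData {
    public Entity selectorOwner;
    public int optionIndex;
}

[InternalBufferCapacity(8)]
public struct OptionScore : IBufferElementData {
    public float score;
}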

Conclusion

Using ComponentDataFromEntity and BufferFromEntity is not so bad. In fact, they are essential tools to help you model your game data in ECS. I have developed implementation patterns around them that allowed me to convert our existing frameworks into ECS and have them run in Burst-compiled multithreaded jobs. The result is really fast computations that can easily handle the max number of NPCs for our next game. I'm so excited! That's all I have for now. Until next time.

If you like my posts, please subscribe to my mailing list.
