Overview
This year I began work on another game that we eventually
named Modulo Attack. Modulo Attack is a top-down 3D dungeon crawler where the
players cooperatively fight rooms full of enemies and grab different ability
modules to combat the enemy. This is an analysis of the AI systems that
went into the game and the discoveries made while working with the team.
All of the Systems!
Modulo Attack at its inception had a wide variety of
features planned: an AI audio director, enemy and boss AI, bot AI, and logged
bot AI. The systems written to support these features were: behavior trees, a
utility system, an audio planner (modeled after The Last of Us's and Valve's
dynamic dialogue systems), basic sensor systems, pathfinding, N-grams, logged
data generation, and a type of blackboard. It included a lot of systems that I
had once again never written before, but this time the behavior code was also
going to be abstracted out to scripting, so a new challenge I ran into was
making my architecture work well with scripts. This was most definitely the
most I have had to write for one game, but I once again went to work and
learned a lot along the way.
The Cuts
Unfortunately the game ran into a series of feature cuts
due to numerous issues. The first thing to get cut was the audio director.
Although I had written a system that successfully read in audio events, we
simply couldn't produce the amount of content needed to effectively utilize the
system. The system as a whole worked by reading in a custom file that
specified a context and the variables that would trigger that context; then
during the game it would look up those variables to see whether the context had
been triggered. The system also had a managed sound queue that determined which
sounds were priorities, as labeled by the designers, and could move sounds
ahead in the queue so they wouldn't play on top of each other, or interrupt a
less important sound if needed. The system was functional, but due to the
content constraints mentioned above we could not support it.
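The managed sound queue described above can be sketched roughly as follows. This is a minimal illustration, not the shipped code; the names (`SoundRequest`, `SoundQueue`) and the exact priority scheme are assumptions.

```cpp
#include <queue>
#include <string>
#include <vector>

// Hypothetical sketch of the managed sound queue: designers label each
// audio event with a priority, and the queue hands back the most
// important pending sound so sounds don't play on top of each other.
struct SoundRequest {
    std::string eventName;  // audio event to play
    int priority;           // designer-assigned priority (higher = more important)
};

// Comparator so the priority_queue behaves as a max-heap on priority.
struct ByPriority {
    bool operator()(const SoundRequest& a, const SoundRequest& b) const {
        return a.priority < b.priority;
    }
};

class SoundQueue {
public:
    void Push(SoundRequest req) { queue_.push(std::move(req)); }

    // Pops the most important pending sound. If it outranks whatever is
    // currently playing, the caller can interrupt; otherwise it waits.
    bool PopNext(SoundRequest& out) {
        if (queue_.empty()) return false;
        out = queue_.top();
        queue_.pop();
        return true;
    }

private:
    std::priority_queue<SoundRequest, std::vector<SoundRequest>, ByPriority> queue_;
};
```

A real version would also track the currently playing sound and only interrupt it when the new request's priority is strictly higher.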
Second on the chopping block was the logged data bots. This started as a research project to log player movement and behavior in order to generate new movement and likely behaviors of a human player, which could then be reloaded into the game as personalized bots. After getting the movement working, however, it was taking a lot of time. Also, at the time of its inception the level was one big area rather than individual rooms, so the system didn't fit the new room design; the encounters were too chaotic to justify trying to model the information. The modeling of behaviors could still have been applied, but normal bots needed to be completed before moving on to this aspect.

The movement logging was a big task, so I will go into detail about it here. The system recorded a session of movement data that tracked the orientation, speed, position, delta, and velocity of the player. Wherever the player moved was recorded and output to a file. Later the data was stitched together across sessions and combined to make new movement data for the AI to play back in the game. The AI could then travel between sessions to create contextual movement paths that the player had used. It could branch between paths, and since a session path might branch into a different session, it could create entirely new movement paths. It also used an N-gram model to break out of loops in the movement in case the AI got stuck in an endless cycle. Since this didn't make it into the game, and the context it was built for no longer applied to our game, I will explain where it could be very useful: this movement model is best suited to a bot in a multiplayer game, particularly when moving around the map before an encounter.
Lastly, the bots themselves were cut due to technical concerns and the deadline. Roughly a few days before the deadline the AI bots started coming together. They had blackboard support to perform specialized tasks like reviving players and drawing aggro toward themselves so the player could flank the targets; they could find and pick up modules in the level when it was appropriate; they could dynamically switch abilities in combat to use the best weapon at the time using a utility system; and they would follow the player around until the bot could attack other enemies. The problem was that we had a lot of environments housing lots of dynamic and static traps. Having only received the level information needed for pathfinding two days before we had to ship, there were still bugs appearing: the bots would occasionally get stuck and couldn't follow the player without getting hit by the traps and going down. So, to avoid annoying the player, and with simply no time left to polish the AI and make it look great, I removed the bot AI. Shipping the bots in that state would have displayed poor-looking AI and been a hindrance to the player, so it was best to remove them from the game given the time constraints.
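The utility-based ability switching the bots used can be sketched as scoring each equipped module against the current situation and picking the best one each decision tick. The names (`Situation`, `Ability`, the scoring inputs) are hypothetical stand-ins, not the game's actual data.

```cpp
#include <functional>
#include <string>
#include <vector>

// Illustrative sketch of utility-style ability selection: each ability
// carries a designer-tuned scoring function, and the bot equips whichever
// scores highest for the current situation.
struct Situation {
    float targetDistance;  // distance to the current target
    int   enemyCount;      // enemies in the room
};

struct Ability {
    std::string name;
    std::function<float(const Situation&)> Score;  // higher = better fit now
};

// Returns the best ability for this tick, or nullptr if none are equipped.
const Ability* ChooseAbility(const std::vector<Ability>& abilities,
                             const Situation& s) {
    const Ability* best = nullptr;
    float bestScore = -1.0f;
    for (const Ability& a : abilities) {
        float score = a.Score(s);
        if (score > bestScore) { bestScore = score; best = &a; }
    }
    return best;
}
```

In the real system the scoring functions lived in designer-authored scripts, so tuning the switching behavior never required a recompile.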
Enemy AI or Bust
Since the enemy AI was pretty much the only thing that
successfully made it into the game, I made sure we had a very polished feature
set for the designers to use. The feature set I gave to the designers
was a behavior tree with utility features, shared AI scripts for
behaviors, and features to support the scripts they needed. Some of these
were an aggro meter for the enemies, targeting scripts that used utility
scoring, avoidance feelers, controlled firing, and
overall doing my best to help the designers understand the system so they could
utilize it to its full potential. Overall, I think the enemies turned out to be
what the designers wanted and look good. A lot of time went into getting the
designers well acquainted with the system so they could do what they wanted with it.
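The aggro meter mentioned above can be sketched as a per-enemy threat table: attackers accumulate threat (from damage, or a bot's taunt), threat decays over time, and the enemy targets whoever holds the most. This is a minimal illustration under assumed names; the game's actual inputs and decay curve may have differed.

```cpp
#include <cmath>
#include <map>

// Illustrative aggro meter: tracks threat per player, decays it over
// time, and reports who currently holds the enemy's attention.
class AgroMeter {
public:
    void AddThreat(int playerId, float amount) { threat_[playerId] += amount; }

    // Exponential decay each tick keeps old threat from dominating forever,
    // so a taunting bot can eventually pull attention off a player.
    void Tick(float dt, float decayRate = 0.5f) {
        float k = std::exp(-decayRate * dt);
        for (auto& [id, t] : threat_) t *= k;
    }

    // Returns the player with the highest threat, or -1 if none.
    int TopTarget() const {
        int best = -1;
        float bestThreat = 0.0f;
        for (const auto& [id, t] : threat_)
            if (t > bestThreat) { bestThreat = t; best = id; }
        return best;
    }

private:
    std::map<int, float> threat_;
};
```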
Building Behaviors
The behavior tree was where the AI really worked out well.
The architecture was set up to run the tree once until it failed or a reaction
time expired, and to save whatever the active behaviors were. Since the AI was
scripted with Lua, we wrote the AI to take in Lua files, and those files would
run the behavior scripts. This allowed a script to be written once and then
reused to add its behavior to other trees. A specialized targeting node
was created that ran through all of the targets the AI had access to and culled
out whatever invalid targets were around. All that needed to be passed in was a
targeting script that scored the targets, using utility, on designer-specified
targeting conditions. You could also swap out the targeting script whenever
you wanted, so the AI had the option to change targeting methods with just a
script swap. Since Lua had no access to our C++ variables, we also had a behavior
factory that allowed the designers to create nodes and set them in the tree
without having direct access to the classes, which worked out nicely.
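The targeting node described above can be sketched like this, with a `std::function` standing in for the swappable Lua scoring script. The type names and fields are illustrative assumptions, not the actual engine code.

```cpp
#include <functional>
#include <vector>

// Sketch of the specialized targeting node: cull invalid targets, then
// score the survivors with a swappable scoring function. In the game this
// scoring lived in a Lua script; a std::function stands in for it here.
struct Target {
    int   id;
    bool  alive;
    float distance;
};

using ScoreFn = std::function<float(const Target&)>;

class TargetingNode {
public:
    explicit TargetingNode(ScoreFn score) : score_(std::move(score)) {}

    // Swapping the script changes targeting behavior without touching the tree.
    void SetScript(ScoreFn score) { score_ = std::move(score); }

    // Returns the id of the best valid target, or -1 if none remain.
    int Evaluate(const std::vector<Target>& targets) const {
        int best = -1;
        float bestScore = -1.0f;
        for (const Target& t : targets) {
            if (!t.alive) continue;  // cull invalid targets
            float s = score_(t);
            if (s > bestScore) { bestScore = s; best = t.id; }
        }
        return best;
    }

private:
    ScoreFn score_;
};
```

Because the scoring logic is injected rather than baked into the node, switching from, say, nearest-target to highest-threat targeting is just a script swap, exactly the property described above.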
Communication
Communication was a big thing that went wrong with our game.
A lot of things were miscommunicated, or not communicated at all. This was on
both myself and the other party, but it was a learning experience. For example,
I had already written a big AI system before I introduced it to the designers,
and it most likely came off as overwhelming and too much to handle. On the
other side of things, I didn't find out that no designers were openly
interested in working with the AI systems until way too late in the project.
These were problems that, if either of us had just asked the other party about,
we could have solved then and there. One thing I learned on my side is to
introduce things in small, digestible chunks, and to make sure that people are
using, or want to use, the systems in place. But I feel communication overall,
from both myself and the team, could have been much better, and that would have
been invaluable.
Architecture Changes
One change I would make to my architecture is to split the AI data
component up more, because it became a Goliath of a file with a lot of stuff
put into it. The AI data component housed all the steering variables, sensor
data, blackboard info, and generic game data the AI used. If I had properly
split it up into separate files along those lines, it would have been
organized much better.
Incorporating utility systems into the nodes of the
behavior tree was unnecessary, because no one used them, and it was easier and
better to just use priority ordering for the trees; that would have made the
Lua interface less confusing.
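For contrast, the priority ordering suggested above is just the classic behavior tree selector: try the children in the fixed order the designers listed them and take the first that succeeds. A minimal sketch, with illustrative names:

```cpp
#include <functional>
#include <vector>

// Classic priority-ordered selector: children are tried in designer-listed
// order and the first one that succeeds wins. No per-node utility scores
// to author or debug, which is what made this simpler in practice.
enum class Status { Success, Failure };

using Behavior = std::function<Status()>;

Status PrioritySelect(const std::vector<Behavior>& children) {
    for (const Behavior& child : children)
        if (child() == Status::Success) return Status::Success;
    return Status::Failure;
}
```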
Conclusion
Modulo Attack was a game with many tough spots for the AI systems, because many things were cut. However, a lot was learned from it, and the enemy AI and overall architecture were a success. The downfalls of the AI systems came from evolving design, communication trouble, and a lack of time. Debugging info also took a hit and could have been better. Overall, Modulo Attack turned out to be a great product, and the AI was able to accomplish what was needed. If you have any questions about any of the topics I talked about, or are just curious about anything I didn't explain well enough, feel free to e-mail me at justinmaio01@gmail.com. Thanks, and I hope you find this useful!