Thursday, April 24, 2014

AI Post Mortem (Game: Modulo Attack)


Overview

This year I began work on another game that we eventually named Modulo Attack. Modulo Attack is a top down 3D dungeon crawler where the players co-operatively fight rooms full of enemies and grab different ability modules to combat the enemy. This will be an analysis of the AI systems that went into the game and discoveries made while working with the team.

All of the Systems!


Modulo Attack at its inception had a wide variety of planned features: an AI audio director, enemy and boss AI, bot AI, and logged bot AI. The systems written to support these features were: behavior trees, a utility system, an audio planner (written like The Last of Us and Valve's dynamic dialogue systems), basic sensor systems, pathfinding, N-grams, logged data generation, and a type of blackboard. Once again this included a lot of systems I had never written before, but this time the behavior code was also going to be abstracted out to scripting, so a new discovery I ran into was making my architecture run well with scripts. This was definitely the most I have had to write for one game, but I once again went to work and learned a lot on the way.

The Cuts

Unfortunately the game ran into a series of feature cuts due to numerous things. The first to go was the audio director. Although I had written a system that successfully read in audio events, we simply couldn't produce enough content to effectively utilize it. The system worked by reading in a custom file that specified a context and the variables that would trigger that context; during the game it would look up those variables to see if the context had fired. The system also had a managed sound queue that used designer-labeled priorities to move important sounds ahead in the queue, so sounds wouldn't play on top of each other and a more important sound could interrupt a less important one if needed. The system was functional, but due to the content constraints mentioned above we could not support it.

Second on the chopping block was the logged-data bots. This started as a research project to log player movement and behavior, then generate new movement and likely behaviors of a human player to load back into the game as personalized bots. After getting the movement working, however, it was taking a lot of time. Also, at the time of inception the level was one big area rather than individual rooms, and the system didn't fit the new room design because the encounters were too chaotic to justify trying to model the information. The modeling of behaviors could still have been applied, but normal bots needed to be completed before moving on to that aspect. The movement logging was a big task, so I will go into detail about it here. The system recorded a session of movement data, tracking the orientation, speed, position, delta, and velocity of the player. Wherever the player moved was recorded and written out to a file. Later the data was stitched together across sessions and combined to make new movement data for the AI to play back in the game. The AI could then travel between sessions to create contextual movement paths the player had used. It could branch between paths, and since a session path may branch from a different session, it could create entirely new movement paths. It also used an N-gram model to break out of loops in the movement in case the AI got stuck in an endless cycle. Since this didn't make it into the game, I will explain where it could be very useful, because the context it was built for no longer applied to our game: this movement model is best used for a bot in a multiplayer game, particularly when moving about the map before an encounter.
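The N-gram loop check mentioned above can be sketched very simply: if the trailing run of recorded moves has just repeated itself, the playback bot should break out and pick a different branch. This is a minimal Python sketch of that idea (the original system was C++; the function name and parameters are illustrative, not the real API).

```python
def stuck_in_loop(moves, n=3, repeats=2):
    """Return True if the last n moves have repeated `repeats` times
    in a row at the end of the sequence (a sign the bot is cycling)."""
    if len(moves) < n * repeats:
        return False
    tail = moves[-n:]
    # Walk backwards in n-sized windows and compare against the tail.
    for r in range(1, repeats):
        if moves[-n * (r + 1):-n * r] != tail:
            return False
    return True
```

With `n=3`, a history like `left, up, right, left, up, right` would trip the check and tell the bot to branch to a different session path.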

Lastly, the bots themselves were cut due to technical concerns and the deadline. A few days before the deadline the AI bots started coming together: they had blackboard support to perform specialized tasks like reviving players and drawing aggro toward themselves so the player could flank targets; they could find and pick up modules in the level when appropriate; they could dynamically switch abilities in combat to use the best weapon at the time via a utility system; and they would follow the player around until they could attack other enemies. The problem was that our environments housed lots of dynamic and static traps. The level information needed for pathfinding only arrived two days before we had to ship, and bugs were still appearing: a bot would occasionally get stuck, or couldn't follow the player without getting hit by traps and going down. The result over the remaining two days would have been poor-looking AI that hindered the player, so it was best to remove the bots from the game given the time constraints.

Enemy AI or Bust

Since the enemy AI was pretty much the only thing that successfully made it into the game, I made sure we had a very polished feature set for the designers to use. That feature set included a behavior tree with utility features, shared AI scripts for behaviors, and features to support the scripts they needed: an aggro meter for the enemies, targeting scripts that utilize utility, avoidance feelers, and controlled firing, plus doing my best overall to help the designers understand the system so they could utilize it to its full potential. Overall I think the enemies turned out to be what the designers wanted and look good. Lots of time went into getting the designers well acquainted with the system so they could do what they wanted with it.

Building Behaviors

The behavior tree was where the AI really worked out well. The architecture was set up to run the tree once until it failed or a reaction time expired, saving whatever the active behaviors were. Since the AI was scripted with Lua, we wrote the tree to take in Lua files, and those files would run the behavior scripts. This allowed a script to be written once and then reused to add the behavior to other trees. A specialized targeting node was created to run through all of the targets the AI had access to and cull out any invalid targets. All that needed to be passed in was a targeting script that scored the targets with utility, based on designer-specified targeting conditions. You could also swap out the targeting script whenever you wanted, so the AI had the option to change targeting methods with just a script swap. Since Lua had no access to our C++ variables, we also had a behavior factory that allowed the designers to create nodes and set them in the tree without direct access to the classes, which worked out nicely.
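The cull-then-score targeting node can be sketched like this (the real system was a C++ node driven by Lua scripts; here plain Python functions stand in for the scripts, and every name is made up for illustration):

```python
def pick_target(targets, is_valid, score):
    """Cull invalid targets, then return the highest-scoring one,
    or None if nothing valid remains."""
    candidates = [t for t in targets if is_valid(t)]
    if not candidates:
        return None
    return max(candidates, key=score)

# Two interchangeable "targeting scripts" the node could be handed:
nearest = lambda t: -t["distance"]   # prefer close targets
weakest = lambda t: -t["health"]     # prefer wounded targets
```

Swapping targeting methods is then just a matter of passing a different scoring function, which mirrors the script-swap behavior described above.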

Communication

Communication was a big thing that went wrong with our game. A lot of things were miscommunicated, or not communicated at all. This was on both myself and the other party, but it was a learning experience. For example, I had already written a big AI system before I introduced it to the designers, and it most likely came off as overwhelming and too much to handle. On the other side of things, I didn't find out until far too late that no designers were openly interested in working with the AI systems at all. These were problems that could have been solved then and there if either of us had just asked the other party about them. One thing I learned from this is to introduce things in small, digestible chunks, and to make sure that people are using, or want to use, the systems in place. Overall, better communication from myself and the team would have been invaluable.

Architecture Changes

One change I would make to my architecture is splitting the AI data component up more, because it became a goliath of a file with a lot of stuff in it. The AI data component housed all steering variables, sensor data, blackboard info, and generic game data the AI used. If I had properly split it into separate files along those lines, it would have been better organized.
Incorporating utility into the nodes of the behavior tree was also unnecessary: no one used it, and it was easier and better to just use priority ordering for the trees, which would have made the Lua interface less confusing.


Conclusion

Modulo Attack put the AI systems in many tough spots because so much was cut. However, a lot was learned from it, and the enemy AI and overall architecture were a success. The downfalls of the AI systems came from evolving design, communication trouble, and limited time. Debugging info also took a hit and could have been better. Overall Modulo Attack turned out to be a great product, and the AI was able to accomplish what was needed.

If you have any questions about any of the topics I talked about, or are just curious about anything I didn't explain well enough, feel free to e-mail me at justinmaio01@gmail.com. Thanks, and I hope you find this useful!

Sunday, June 30, 2013

Behavior Trees and HTN Planners

Overview

It has been a while since my last post, but since then I have been busy learning lots of details about behavior trees and HTN (Hierarchical Task Network) planning, and coming up with my own architecture modeled after these systems. I have implemented behavior trees a few times and done extensive research on HTN planning, based mostly on Dana Nau's SHOP techniques, so if you are interested and don't find all the information you are looking for here, go check out SHOP or SHOP 2; there are articles and PDFs around if you Google them. Everything I explain here is about behavior trees and HTNs; I will write about my new architecture when I finish it as my research project, because I will have a better understanding of its successes or failures by then.

The Basics

To start off, a basic understanding of behavior trees is needed to understand both behavior trees (obviously...) and HTNs as well, because they are very similar. So, assuming you know what a tree is, there are 3 types of nodes required to get a behavior tree up and running: composite (or, as I call them, action nodes, for reasons I'll explain in a minute), sequence, and selector nodes. These are the 3 most commonly used nodes and they are really all you need, but there are things like decorator, parallel, or other variants of the above that are just added specializations of the basic tree. I like to call composite nodes action nodes because they typically hold the logic for performing the behaviors in the tree, thus performing an action, which makes the name more explicit.

Action Nodes (Composites)

These nodes hold the logic for performing some action for your agent. The basic functionality is an update that checks your logic, performs anything related to that action, and tells you whether it has succeeded or failed. That is really it, nothing too fancy, just an update function. An example might be a find-target action: all it would do is check if a target is in range and if the agent can see the target; if it can, you have a success, and if not, it failed.

Selector Nodes

Next is the selector node. This node helps determine how to traverse down the tree to find the behavior you are looking for. If you are making your tree correctly, most of your actions will be toward the bottom of the tree, and the selectors will determine which set of leaf nodes you will be updating for your current agent. An example might be doing one thing over another at certain health intervals: if the agent has less than 50% health, the selector node will consult one set of actions, else a different set. Figure 1 shows a basic notation with possible results of 1, 2, and 3 for some condition (there will be an example tree later). Another thing to be aware of when creating your selector nodes is that the order of your options matters. Actions with a higher priority should be placed first, because if an action is the most preferred, you don't want the agent doing something else instead.
For example, say your agent has low health, its options are suicide-rush the target, cower in fear, or go heal, and you order them exactly like that. It will check suicide rush first, and if that succeeds the agent will do it; but if there was a health pack the agent could have used, it would never reach the heal option as long as it can always suicide-rush the target. You can add functionality to prioritize the selector with a value for each option, but you are really doing the same thing as if you had just ordered the children from the get-go, and ordering saves a bit of computation time (just a bit).


 Sequence Nodes

Lastly is the sequence node. This node controls the execution of the actions. The core principle is that each action executes after the one before it succeeds; if one fails, the whole sequence fails. There are 2 notations commonly used for this, one displayed in figure 2, the other in figure 3. Figure 2 is the notation more commonly found in HTN planning, and I find it more appealing because it covers all of the nodes in the sequence and is more explicit; in a large tree you could otherwise lose track of which nodes are in the sequence.




Anyway, an example of a sequence might be having the agent heal itself. The actions might look like: go get health pack, equip health pack, and then use health pack (don't worry, I will have a detailed example of all of this soon, I promise). If the agent has a health pack the first action succeeds, and it can move on to equipping it and then healing itself, in that order. If not, maybe the agent does something else or has to go find a health pack.
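The three node types and the heal example can be boiled down to a tiny Python sketch. Nodes here are plain callables returning True (success) or False (failure); a real tree would usually also support a "running" state and richer node classes, and all the names below are illustrative.

```python
def action(fn):
    # A leaf/action node is just its update function.
    return fn

def sequence(*children):
    # Succeeds only if every child succeeds, in order; all() stops
    # at the first failure, just like a sequence node should.
    def run(agent):
        return all(child(agent) for child in children)
    return run

def selector(*children):
    # Tries children in priority order; any() stops at the first success.
    def run(agent):
        return any(child(agent) for child in children)
    return run

has_pack   = action(lambda a: a["packs"] > 0)
equip_pack = action(lambda a: True)
use_pack   = action(lambda a: (a.update(hp=a["hp"] + 50), True)[1])

heal   = sequence(has_pack, equip_pack, use_pack)
cower  = action(lambda a: True)
low_hp = selector(heal, cower)   # heal first; cower is the fallback
```

Running `low_hp` on an agent with a pack heals it; on an agent with no pack, the sequence fails at `has_pack` and the selector falls through to cowering, which is exactly the ordering behavior described above.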


An example of what a tree might look like with all of these things is displayed here.



Behavior Trees (In detail)

Specialized Nodes (the others)

I mentioned other types of nodes above that you may want or need depending on the demands of your game, so here are some examples. They aren't all required, and you should only add them as you need them. You may also find that you need something I don't mention, and if that is the case, by all means add it in yourself :). Decorators are a very loosely defined type of node because a decorator typically modifies the execution of another node. This can be anything from adding a cooldown timer to an action before it can be used again, to limiting the number of updates that can run on a particular action, to simply inverting the logic. They are there to modify/decorate existing nodes, however you see fit.


Parallel nodes are very similar to sequence nodes, but instead of waiting for an action to succeed or fail, a parallel node simply executes all of the actions in its set at the same time. I personally haven't found a good use for this node, just because I feel it takes away some of the structure of how actions execute, but if you have a reason and need it, now you know about it.

A node that has been requested of me is a random selector node. This essentially ignores any priority set up by a standard selector node and picks one of the selector's children at random.

Detailed Tree Example


I made a quick example of how you might organize a standard guard in a game. It's very simple but gets the point across; you would probably want something a little more detailed and explicit if you were to put this into a game, but it's a good example for explaining, and it actually relates to a game scenario rather than something silly like opening a door.





EDIT => So I made a mistake. Preconditions such as "has cover" or "seen enemy" are precondition checks for a sequence, so I want to clarify what that means, because I think it may have come off as misleading. The precondition is the first node in the sequence; the selector should contain no logic for determining which branch to go down other than a type of priority selection, and if a precondition of a sequence fails, the tree moves to the other branch. A redone example of how this works for the attack branch is shown below.


As you can see, the first check is to see if you have a target or not, and if there is a target, it is a much higher priority than doing nothing, so combat takes precedence over simple patrolling.

The next check is between types of weapons and self-preservation. Self-preservation is typically top priority, because if the agent dies then it is literally game over for the agent, so that check is done first. A secondary check accounts for cover: if the agent has available cover, he will go to cover and then heal there. If there is no cover available, the agent will just heal at his current location.

Next is ranged combat; this agent prefers ranged attacks, so it comes first (if you had a melee character you might want to switch these two, or just remove one altogether). It's nearly the same as healing, except that if the agent can't find cover he will just crouch and shoot.


I'm sure at this point you get the picture of how it works. One of the toughest things I noticed is getting people to understand this process of how trees should be created. Once you figure it out, you are in good shape and can start building great AI behaviors :)




HTN Planning

First things first: this section might be a lot shorter than you think. Why, you might ask? Well, I just gave you about 90% of the info about HTN planners, because the differences between HTNs and behavior trees are very minimal. World state is the key difference between the two: typically an HTN will consult the world state when going through its tree. The other difference is having preconditions and effects for the actions, which is what actually fulfills the planner portion of the HTN.

World State

World state is basically a representation of the game world. It can contain things like the number of enemies left, interesting objects in the world, maybe certain structures or buildings, or, more commonly, whether certain events have taken place, like an alarm going off or the lights going out. In an HTN planner, the agents look at this information and change their behavior based on what happened. For example, perhaps the lights go out in a map: the world state has changed, and this may require the agent to go over to a light switch and turn the lights back on. Once the agent does, the world state changes again, because the room has light once more. What this eventually boils down to is a behavior tree, but instead of consulting only the agent's own internal information, it now also looks at the state of the world.


Preconditions and effects

This is common syntax that you might find when doing any work involving planners, but the concept is simple: every action has a set of preconditions that need to be true before the action can take effect or update. In turn, completing the action will have some kind of effect on the world state. This is actually similar to selector nodes, except that an action can consult multiple variables, it affects the world state, and if you're clever like Jeff Orkin you can A* search through actions. Keep in mind, however, that actions don't have to modify the world state; it's more common for them to, but Killzone 2 & 3 wrote an HTN that never modified the world state!
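Preconditions and effects over a world state can be sketched with a plain dictionary, in the spirit of the SHOP-style planning described above and the lights-out example from the world state section. This is a minimal Python sketch; the action names, keys, and helper functions are all made up for illustration.

```python
# World state is a flat dict of facts; each action lists the facts it
# requires (pre) and the facts it changes (effect).
actions = {
    "goto_switch": {
        "pre":    {"at_switch": False},
        "effect": {"at_switch": True},
    },
    "turn_on_lights": {
        "pre":    {"lights_on": False, "at_switch": True},
        "effect": {"lights_on": True},
    },
}

def applicable(name, state):
    """An action is applicable only if every precondition holds."""
    return all(state.get(k) == v for k, v in actions[name]["pre"].items())

def apply_action(name, state):
    """Return the new world state after the action's effects."""
    new_state = dict(state)
    new_state.update(actions[name]["effect"])
    return new_state
```

A planner then searches for a chain of applicable actions: with the lights off and the agent away from the switch, `turn_on_lights` is blocked until `goto_switch` has been applied, after which its precondition holds.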

Planning

Since this is similar to a behavior tree, you might be asking: are they created the same way, and how do you build an HTN planner? It's very similar: you still define your sequences of actions and build your behaviors as you did for a behavior tree, except now you have a planner running the system. Let's go back to our healing example. We will just be looking at the actions for healing, not the whole tree, but you can consult the detailed tree above to see where this fits in.

To clarify first: the paths are all potential options the planner can take. It could try these in any order, really, but if you A* through them properly, these are some outcomes you might see.


The point of this example is to show that the planner knows about a bunch of actions, and that those actions can result in many different things depending on the information available to the agent. For example, in one of the paths the agent is unable to find a health pack because the precondition wasn't met in the find-health option. This then led to the agent cowering in fear instead of finding health.


Like I mentioned before, this is just a portion of the behavior tree I created above. In an HTN you may have something like this for shoot, melee, combat, or even going to cover, so you keep the structure of the tree but gain the freedom of a planner to determine small sets of actions for smaller behaviors.

Conclusion

These architectures are really great at managing behaviors and are very common in the industry right now, so if you are looking for an AI job it would be very beneficial to know this information. They keep the AI agent well structured and allow designers to have control and influence over the AI, while also giving the AI more flexibility and freedom to carefully plan out actions. Hopefully this has deepened your knowledge of behavior trees and HTN planners, or taught you something entirely new if you didn't know any of this before.

As always if you have any feedback, questions, or comments feel free to post a comment or e-mail me at justinmaio01@gmail.com. Thanks for reading :D



Saturday, April 20, 2013

Utility Systems and Game AI

Overview

Over the past month or so I have been talking with several people at DigiPen about AI, and I started to get a lot of blank stares when I brought up the term utility. I would mention that I used it in my game, and the response I would get was "Utility? What's that? How does that work?" So I figured if not a lot of people know about it, maybe I should write about it to help educate and interest people in a potential AI system they could use in their games.

I will talk about a general overview of utility, the basics behind utility, and how it's used and how it works.

Utility

Utility has been used in games before and isn't anything super new, which was surprising when I found out not a lot of people had heard of it. (Unless we are talking about Dave Mark's infinite axis utility system, which I learned about at this year's GDC (2013); that one might be a little newer, but we will talk about it later.) Some games that use utility are The Sims 3, Section 8, and the new XCOM: Enemy Unknown, which used it almost exclusively for its game AI, so it's a very practical system to use in your game. As I mentioned in my last post, there is also a whole book about it, Behavioral Mathematics for Game AI by Dave Mark, which is where I learned the most about this system, along with some lectures at GDC.


The Basics of Utility

Core Concept

The core concept behind utility is to take an arbitrary action and rate it using arbitrary values. What do you mean by this, you might be asking?

Well, let me give you an example. Suppose you are very hungry and deciding whether to order food or cook it yourself. On one hand you get higher quality food (provided you are a good cook :) ), but it takes time and work; on the other hand, you can sacrifice that quality by ordering food and save yourself some work. So how do you choose, and based on what principles? Well, maybe it depends on how hungry you really are, how much time you have, or how much work you are willing to put into making food. You would then combine all of these factors together, each weighted differently, and make a decision based on how low or high you felt in each category. If you weren't very hungry, didn't mind putting in some effort, and had plenty of time, then you would probably cook the food yourself; but if everything else were the same and this time you had no time at all, you might weight your time as more valuable than the other factors, and now maybe you don't eat, or you order food. This is the idea behind utility: scoring actions based on other factors. That's a real-life example, so now let's look at how it could be applied in games.




Suppose you are making a shooter or adventure game where you have enemies choosing targets or taking cover. You will most likely have a list of potential targets or cover points to look through. For our example we will use target selection. Some things you might take into account are the target's current health, the threat level of the target, how many allies you have, how many allies the enemy has, and how far away the target is. You then search through all of the potential targets and score them based on these factors and how much they mean to your AI. For example, maybe you have an AI that targets stronger enemies for you: then you might score each attribute higher, wanting the closest enemy with the highest threat level, the most allies, and the most health, and in this case your own allies wouldn't need to be factored in. Each target gets rated, pushed into some container, and then chosen based on which had the highest score. Now you are probably saying: well, this all makes sense in theory, but how do you actually score these attributes?

Because actions are scored on many different variables, this can lead to interesting results and often to emergent behaviors. Make sure you have a method to debug and check the scores of each attribute, so that if something goes wrong you can see which attributes are skewing your results when emergent behaviors negatively affect your game.



How Utility Works

Scoring Actions

Scoring these actions is easier than you might think; for my game Nexus it boiled down to four simple functions. The idea behind scoring each attribute is to use graphs. The 4 graphs I've seen used are quadratic, linear, logistic (or sigmoid), and logit.


You may be unfamiliar with the last two, so I'll explain them a bit. The logistic graph is an S-shaped curve represented by the function y = 1 / (1 + e^-x), also known as the sigmoid because of its S shape. The special thing about this graph is its natural asymptotes: as x decreases or increases, y approaches 0 or 1, which yields a normalized result. The logit is the same graph rotated, so the (0, 1) range lies along the x-axis instead, and it is represented by the function y = log(x / (1 - x)). (I linked the graphs, so if you want a better idea of what they look like, just click them.)
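All four response curves can be written as one-liners mapping a normalized input in [0, 1] to a score. This is a sketch, not the exact functions from Nexus: in particular, the logistic here is shifted and scaled so its S-shaped region covers [0, 1], which is a choice of mine, not a fixed convention.

```python
import math

def linear(x):
    return x

def quadratic(x):
    return x * x

def logistic(x):
    # Sigmoid remapped so x in [0, 1] spans the interesting S region.
    return 1.0 / (1.0 + math.exp(-(12.0 * x - 6.0)))

def logit(x):
    # Inverse of the sigmoid; input must be strictly inside (0, 1).
    return math.log(x / (1.0 - x))
```

Which curve to use per attribute is a design choice: linear for "more is proportionally better", quadratic to de-emphasize low values, logistic for a soft threshold, and logit to exaggerate values near the extremes.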


The graphs are a crucial part of how a utility system works, and the most important part is keeping the values normalized. Finding the un-normalized, natural bounds of each attribute is up to you and your designers; for example, distance may have no clear max value the way health might. In that case you must create a cutoff value where anything greater than the max is treated the same. This gives you a clean cutoff point and ensures all of your values are normalized, which should be done before you graph the function.


Another key component is making sure you understand how your graphs work and how you can manipulate them by scaling, rotating, and translating. This gives you better control over the values, and since you want your values to be between [0, 1] (which I will explain shortly), you should experiment to see how your graph behaves for normalized inputs. For example, if you want the lowest score to yield 0.5, make sure you can transform your graph to produce that result.


You are probably wondering: why do I care about keeping things normalized? Why not just graph them and combine the scores at the end without normalizing? Keeping the values normalized keeps each action in a standardized system, i.e. between [0, 1]. This comes into play when you go to combine variables: you simply multiply each variable together, and since everything is between [0, 1], you will only get results between [0, 1]. After you score and combine all of these factors, you can store that value in the action as its score, and if you want, you can even weight actions by multiplying the score at the end. This will un-normalize the score, but that's okay, because the final score has been determined and the weighting is used to prioritize the actions; for example, you might weight attacking a target higher than running away. The actions are then sorted by score, and the action with the highest score is the one your AI should pick, because it's the most appealing to your AI :)
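The multiply-then-weight step above can be sketched in a few lines of Python. The attribute values and weights here are invented purely for illustration.

```python
def score_action(axes, weight=1.0):
    """axes: attribute scores already normalized to [0, 1].
    Multiplying keeps the combined score in [0, 1]; the final
    weight then un-normalizes it to set action priority."""
    score = 1.0
    for value in axes:
        score *= value
    return score * weight

# e.g. attack considers proximity and threat; flee considers
# danger and low health. Attack is weighted as higher priority.
attack = score_action([0.9, 0.8], weight=2.0)
flee   = score_action([0.6, 0.7], weight=1.0)
best   = "attack" if attack > flee else "flee"
```

Note one consequence of multiplying: any axis at 0 zeroes out the whole action, which acts as a built-in veto.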

Infinite Axis Utility System

This was an idea that Dave Mark talked about during GDC to make the utility system more of a standalone system. It follows many of the ideas mentioned above, and I kind of explained utility in an infinite-axis style to begin with, but I'll elaborate a little more. The idea is that any action can have an infinite number of axes (variables, or, as I sometimes think of them, the attributes the action takes into account when scoring). My impression of this system (which may not be the creator's intention) is that it gives designers the freedom to add whatever they need to tweak the AI. For example, as the AI programmer I would create the set of actions the designers can choose from, or that the game needs. Functionality is then provided so they can register new axes on an action along with the appropriate graph for each axis. Variables can be added or removed whenever needed and tweaked on the fly, and the actions follow the same idea as before: they are selected by score based on whatever axes were created.

Conclusion

Utility systems are a very interesting way to examine your AI and can produce some interesting behaviors. If you are thinking about writing a utility system for your game and found this interesting, I highly recommend Dave Mark's book Behavioral Mathematics for Game AI. This is a good system for AI, and I would like to see different types of games use it or elements from it; as I mentioned above, XCOM used this system almost exclusively for its game AI.


As always, if there are any questions, feel free to post a comment or send me an e-mail at justinmaio01@gmail.com. I hope you enjoyed reading this, or at least learned something new :)


-Justin Maio

Thursday, April 11, 2013

AI Post Mortem (Game: Nexus)

Overview

I started early in the fall on a game team called Paragon, where we built a top-down networked multiplayer shooter in a custom 3D game engine developed by our team. Using a component-based engine and some previous knowledge from a past game I had worked on, I was tasked with writing a bot system for our game. For this I had to learn and understand the following things I had never tackled before: pathfinding, converting 2D steering into a 3D world, writing AI with components, creating behaviors with actions, and my first chance to create an AI architecture. I'm going to explain what challenges I faced, how I solved them, and how I would / will change things in the future.

A website for our game can be found here: www.enterthenexus.net


No Guts No Glory

When I first accepted my position on the team as a dedicated AI programmer, I literally had no experience with most of the systems I mentioned above; in fact, the only things I had really worked on before were some simple steering behaviors and a few enemy behaviors. So you may be asking: how did you implement all of these features over the course of a couple of months with schoolwork and everything else going on? I will tell you right now, it wasn't easy.

The first step was, as always, research: find out what you can and look at games or people who have done this kind of work before. Some of the material I was looking at at the time included Killzone's path finding work, articles on path finding, and simply watching bot matches in games I owned like Gears of War 3 and Call of Duty: Black Ops II. This was crucial for seeing how bots in these games worked and gave me a starting point and a reference for where I should be heading.


Second, talk to people about what they have done or implemented before. I was in a very fortunate position at DigiPen to have professors like Ben Ellinger and Steve Rabin to reference when I needed help, as well as fellow students enrolled in our masters program. But really, if you can find anyone who has implemented anything at all related to what you're doing, talking to them can help you a lot.

Lastly, as the title says: guts and passion. Taking on the huge task of figuring all of this out personally drove me to work and learn as much as possible, so having a passion for writing game AI definitely helped me there.

Path finding

The path finding in Nexus wasn't anything to write home about (it was, after all, my first time writing it), but many things were learned and discovered along the way.

First was to determine whether to use a grid or a mesh structure for representing the world. This was debated on the team for a while until a helpful alumnus mentioned that it would be easier to set up and get a grid-based solution running faster, and since we didn't have a truly 3D world there was no need to overcomplicate things.



Having decided on a grid-based system, I then wrote an A* algorithm to run on the grid and find paths to destinations. If you're not aware of what A* is and are just getting into game programming, Google A* or Dijkstra's algorithm as a start; there are also some good AI books like Artificial Intelligence for Games by Ian Millington that can help you out. Once that was working, we needed a way to add the environment objects into the grid, and I wanted it to work with anything we threw at it. Taking a lesson from graphics, I transformed the vertices of the environment obstacles into my AI's grid space and then used a line algorithm (look up Bresenham's line algorithm) to fill in the grid cells covered by those objects. This was a great solution because when the map was loaded, the objects that affect the AI were loaded into the grid one object at a time, and I didn't have to know exactly where all the pieces were, so it was completely data driven.
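As a sketch of those two pieces (my own Python illustration, not the actual Nexus code), here is a minimal grid A* plus a Bresenham line rasterizer of the kind used to mark obstacle edges into the grid:

```python
import heapq

def astar(grid, start, goal):
    """A* over a 2D grid of 0 (open) / 1 (blocked) cells, 4-connected."""
    rows, cols = len(grid), len(grid[0])

    def h(c):  # Manhattan distance heuristic
        return abs(c[0] - goal[0]) + abs(c[1] - goal[1])

    open_set = [(h(start), 0, start)]
    came_from, best_g = {}, {start: 0}
    while open_set:
        _, g, cur = heapq.heappop(open_set)
        if cur == goal:
            path = [cur]
            while cur in came_from:
                cur = came_from[cur]
                path.append(cur)
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols and grid[nxt[0]][nxt[1]] == 0:
                ng = g + 1
                if ng < best_g.get(nxt, float("inf")):
                    best_g[nxt] = ng
                    came_from[nxt] = cur
                    heapq.heappush(open_set, (ng + h(nxt), ng, nxt))
    return None  # no path

def bresenham(x0, y0, x1, y1):
    """Cells touched by a line segment; rasterizes an obstacle edge into grid cells."""
    cells, dx, dy = [], abs(x1 - x0), -abs(y1 - y0)
    sx, sy = (1 if x0 < x1 else -1), (1 if y0 < y1 else -1)
    err = dx + dy
    while True:
        cells.append((x0, y0))
        if (x0, y0) == (x1, y1):
            return cells
        e2 = 2 * err
        if e2 >= dy:
            err += dy; x0 += sx
        if e2 <= dx:
            err += dx; y0 += sy
```

To load a map, you would transform each obstacle's vertices into grid space and run `bresenham` over each edge, marking the returned cells as blocked before any `astar` queries run.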

The last thing we ran into, now that we had working path finding and obstacles being successfully generated, was that we needed a way for the AI to find its way around the map independently. To solve this, a waypoint system was created for our level editor where we would manually place points in potentially high-traffic areas, or simply enough points to cover the entire map. This gave us some control over where the bots could go to find other players and stopped them from accidentally trying to go outside the map.

2D to 3D problems

One of the more interesting challenges was learning to write steering in 3D. Ironically, most of the math turned out to be the same or similar, so you might be wondering: Justin, why is there a section for 2D to 3D then? All I have to say to that is: Quaternions.

I personally had never worked with a quaternion in my life, and that was the case for my whole team; it was a new concept to all of us, so trying to understand a structure that rotates things and is composed of imaginary numbers was a little difficult to grasp. Most of the steering behaviors were easily ported to our 3D world until things needed to be rotated. For this we had multiple functions handling things using vectors, quaternions, and matrices. In the end, with the help of the masters students, the best thing we ended up with was linear interpolation (lerp), which provided smooth rotation for anything that needed it. The moral of this story: if you are going to use quaternions, make sure you do your research.
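For the curious, here is a rough Python sketch of the kind of interpolation we ended up with. The (w, x, y, z) layout and the normalized-lerp variant are my assumptions for illustration, not the exact Nexus code:

```python
import math

def nlerp(q0, q1, t):
    """Normalized linear interpolation between two quaternions (w, x, y, z).
    Cheaper than slerp and close enough for small per-frame rotation steps."""
    # Take the shortest arc: flip one quaternion if the dot product is negative,
    # since q and -q represent the same rotation.
    dot = sum(a * b for a, b in zip(q0, q1))
    if dot < 0.0:
        q1 = tuple(-c for c in q1)
    # Componentwise lerp, then renormalize so the result is a unit quaternion.
    q = tuple(a + (b - a) * t for a, b in zip(q0, q1))
    mag = math.sqrt(sum(c * c for c in q))
    return tuple(c / mag for c in q)
```

Each frame you would nlerp from the tank's current orientation toward its desired facing by a small `t`, which gives the smooth turning described above.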

Writing AI for a Component Based Engine


This was one of the more interesting things I learned writing AI for Nexus. If you are unfamiliar with component-based engines, here's a quick rundown of how they work in general and how ours worked in particular. Instead of writing an inheritance chain of objects, you have a single object type that adds components. What do I mean by that? Everything is an entity, and you add the components you need to make it work. For example, in Nexus we have tanks; every tank needs physics, graphics, a method of control, a gun or turret, and sound emitters, so those are the components that get registered to the tank. If something ever needs more functionality, we just write a component and attach it to the entity. Our engine runs by having a series of systems and components that work together: systems handle the components, and the engine handles the systems. But this is an AI post mortem, so enough about engines :p
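A toy sketch of the pattern in Python (the component names and `add`/`get` API are just examples, not our engine's actual interface):

```python
class Physics:
    def __init__(self, max_speed):
        self.max_speed = max_speed

class Turret:
    def __init__(self, damage):
        self.damage = damage

class Entity:
    """A bag of components looked up by type; no inheritance chain needed."""
    def __init__(self, name):
        self.name = name
        self._components = {}

    def add(self, component):
        self._components[type(component)] = component
        return self  # allow chaining

    def get(self, component_type):
        return self._components.get(component_type)

# Build a tank by composing components instead of subclassing a Tank class.
tank = Entity("tank").add(Physics(max_speed=5.0)).add(Turret(damage=20))
```

Systems then iterate over all entities that have the components they care about, which is what keeps the entity itself free of behavior.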


Writing our AI in this engine boiled down to three major components (there is a much better way to do this, which I will explain later in this post): Steering, Data, and Logic.

Steering handled all of the movement for the AI, Logic handled all of its behaviors, and Data held all of the information the AI would need to know, like its current target, how fast it can move or rotate, etc. All of these components were then managed by an AI system that updated each component with the right information and handled any shared data between them. This worked well for Nexus and got the job done for what we needed it to do.

Building Bot Behaviors

Trying to make challenging bots for our game was probably what I spent most of my time on for Nexus, and of course it was the part that ran into the most problems.


I'm going to start by saying our game wasn't always just team deathmatch; in fact, it was originally supposed to be exclusively a game where you capture crates. I spent a long time writing code for bots to path to a crate and successfully push it back to base to be captured for points. Unfortunately (though in reality it was a much better decision), by the time I got this working we scrapped the idea due to its lack of fun. It turns out nobody wants to push crates around the map. That was a duh moment for our entire team when we found it out, and we had to rewrite the AI for the game.


So what happened, and what went wrong with this crate-pushing bot, in case you want bots to push stuff? Here is what we tried. In the first attempt, we wrote a tracking system: when a bot saw a crate, it would save that object into its data, generate a path from the crate to the capture point, calculate the direction the crate had to travel to reach the zone, flip that direction so it pointed behind the crate, and find the point behind the box that the bot would travel to. Upon reaching that spot, the bot would move along the crate's path until the crate was captured. Seems good, right? But there were lots of edge cases. For example, what if the crate is in a corner and you can't get behind it? What if someone moved the crate before the bot reached it? What happens if the AI loses the crate or is too far away from it? Some of these were fixed by adding timers or distance checks to decide whether it was worth re-pathing to the crate, or whether the bot should ignore it and fight players instead. The most difficult case was crates in corners; this was a problem not only for AI but for players as well. We ended up adding a "magnet" that would pull crates onto the front of your tank, sticking them there. This also made the pathing easier, because once an AI had control of the crate it could path back itself, and I no longer needed to generate a path from the crate. It did end up working, but like I said, it all got scrapped anyway, and sometimes you just have to watch your babies die :/
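The "flip the direction and stand behind the crate" math looks roughly like this (a simplified 2D sketch for illustration; names and the flat-ground assumption are mine):

```python
def push_point(crate_pos, goal_pos, offset):
    """Point behind the crate, on the line from the goal through the crate,
    at `offset` distance. Driving from here through the crate pushes it
    toward the goal."""
    dx, dy = goal_pos[0] - crate_pos[0], goal_pos[1] - crate_pos[1]
    length = (dx * dx + dy * dy) ** 0.5
    if length == 0:
        return crate_pos  # already at the goal; nothing to push toward
    # Unit direction toward the goal, flipped to land behind the crate.
    return (crate_pos[0] - dx / length * offset,
            crate_pos[1] - dy / length * offset)
```

The corner problem falls straight out of this: when the crate sits against a wall, `push_point` can land inside blocked space, which is why the magnet was the cleaner fix.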


Once that idea was scrapped, all I had to do was write deathmatch bots, which were much simpler. We used a series of actions alongside signals for line of sight and taking damage. These actions boiled down to: path find, find target, attack target, and use ability. An action list with multiple lanes allowed the AI to do several of these things at the same time. To keep things fair, we added Gaussian aiming, which is a fancy way of saying the aiming accuracy was pulled from a normal distribution. This allows for more controlled randomness and more realistic-looking shots from the AI. For the abilities, we wrote a utility system to score each ability according to variables like health, distance from the target, and the damage or threat level of the target. If you don't know what utility is, there is a great book called Behavioral Mathematics for Game AI by Dave Mark that I read while working on the game; it explains all you need to know about it.
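Gaussian aiming can be sketched in a few lines of Python (illustrative only; the parameter names are mine). Most shots land near the target and large misses are rare, which reads as human rather than uniformly random:

```python
import math
import random

def aim_with_error(shooter, target, stddev_deg):
    """Perturb the perfect aim angle by a normally distributed angular error.
    `stddev_deg` is the accuracy knob: bigger spread means a worse shot."""
    dx, dy = target[0] - shooter[0], target[1] - shooter[1]
    perfect = math.atan2(dy, dx)  # exact angle to the target
    error = random.gauss(0.0, math.radians(stddev_deg))
    return perfect + error
```

Tuning one number per difficulty level (the standard deviation) is what makes this a "controlled" random: the mean stays on target while the spread changes.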

Now, one thing that personally felt wrong to do for Nexus, though it happened to make the game more fun for some people, was that I was required to change the bots from behaving more like humans to behaving more like "AI". Once again you are probably saying: wait, don't you want it to be AI? What's wrong with it being more AI? Well, bots are supposed to feel unpredictable; you want them to simulate the things players would do, not what AI typically does. A design constraint forced me to write the AI to be more predictable, so each bot now has a distinct play style, if you will, rather than acting more randomly. However, if it makes your players happy, then it must have been the right change.

The Future and Fixes

So here are the real lessons learned from this experience: the things I would change, why I would change them, and how things could have been better, now knowing how naive I was when I started this project.

Fix 1: Waypoints and path finding


For one, my path finding was never time sliced and would sometimes hog lots of processing power from the game, a mistake I will never make again. This didn't kill our game, but if we had had maybe 16 to 20 players (or rather bots), it could potentially have killed the game's frame rate. So if you can time slice your path finding, do it.
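Time slicing can be as simple as capping how many nodes A* may expand per frame and resuming next update. Here is a Python sketch of the idea (hypothetical code; Nexus never had this):

```python
import heapq

class TimeSlicedSearch:
    """Grid A* that expands at most `budget` nodes per update() call,
    spreading the cost of one search across several frames."""
    def __init__(self, grid, start, goal, budget=50):
        self.grid, self.goal, self.budget = grid, goal, budget
        self.open = [(self._h(start), 0, start)]
        self.came, self.best_g = {}, {start: 0}
        self.result, self.done = None, False

    def _h(self, c):  # Manhattan distance heuristic
        return abs(c[0] - self.goal[0]) + abs(c[1] - self.goal[1])

    def update(self):
        """Run one frame's worth of search; returns True when finished."""
        for _ in range(self.budget):
            if not self.open:
                self.done = True  # exhausted: no path exists
                return True
            _, g, cur = heapq.heappop(self.open)
            if cur == self.goal:
                path = [cur]
                while cur in self.came:
                    cur = self.came[cur]
                    path.append(cur)
                self.result, self.done = path[::-1], True
                return True
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nxt = (cur[0] + dr, cur[1] + dc)
                if (0 <= nxt[0] < len(self.grid) and 0 <= nxt[1] < len(self.grid[0])
                        and self.grid[nxt[0]][nxt[1]] == 0):
                    ng = g + 1
                    if ng < self.best_g.get(nxt, float("inf")):
                        self.best_g[nxt] = ng
                        self.came[nxt] = cur
                        heapq.heappush(self.open, (ng + self._h(nxt), ng, nxt))
        return False  # budget spent; resume next frame
```

With 16 to 20 bots, each search costs at most `budget` expansions per frame, so the worst-case frame cost is bounded instead of spiking when everyone repaths at once.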


Waypoints were good, but the problem was that they relied on someone manually placing them in the map; if there were no waypoints, the AI couldn't go anywhere. There are two quick solutions I thought of. The first still involves waypoints but generates them automatically, which is a little more complex, but at least no one has to place them manually. The second, more effective solution I came up with is to use influence maps, and these could track anything. For example, move toward locations with a high influence of deaths, opponent team density, or even just places that have had traffic recently; that would make the AI more believable than traveling to fixed places in the map.
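A bare-bones influence map can be sketched like this in Python (a simple max-propagation variant of my own; real systems often blend multiple layers and decay over time):

```python
def spread_influence(grid_size, sources, decay=0.5, passes=2):
    """Stamp each source's strength into a grid, then let influence bleed
    into neighboring cells, fading by `decay` per step of spread."""
    rows, cols = grid_size
    inf = [[0.0] * cols for _ in range(rows)]
    for (r, c), strength in sources:
        inf[r][c] = max(inf[r][c], strength)
    for _ in range(passes):
        nxt = [row[:] for row in inf]
        for r in range(rows):
            for c in range(cols):
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    nr, nc = r + dr, c + dc
                    if 0 <= nr < rows and 0 <= nc < cols:
                        nxt[r][c] = max(nxt[r][c], inf[nr][nc] * decay)
        inf = nxt
    return inf

def hottest_cell(inf):
    """The cell a bot might steer toward, e.g. the densest recent-death spot."""
    return max(((r, c) for r in range(len(inf)) for c in range(len(inf[0]))),
               key=lambda rc: inf[rc[0]][rc[1]])
```

Sources could be recent deaths, enemy sightings, or traffic; the bot then paths toward `hottest_cell` instead of a hand-placed waypoint.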




Fix 2: Components and structure


In retrospect, almost everything should have a timed update or delay, because the AI doesn't have to be recalculated every single frame. Why not every second or half second? The player won't even notice, and you're saving some processing power. Good job, you :D
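Here is a quick sketch of the idea, with staggered start offsets so all agents don't think on the same frame (hypothetical code, not from Nexus):

```python
class ThrottledAI:
    """Wraps an expensive think() so it only runs every `interval` seconds;
    a per-agent phase offset stops every agent thinking on the same frame."""
    def __init__(self, agent_id, interval=0.5, agent_count=8):
        self.interval = interval
        # Stagger: spread the agents' first think times evenly across one interval.
        self.next_think = (agent_id % agent_count) * (interval / agent_count)

    def update(self, now, think):
        """Call every frame with the current time; think() runs only when due."""
        if now >= self.next_think:
            self.next_think = now + self.interval
            think()
            return True
        return False
```

The per-frame cost per agent drops to a time check, and heavy work amortizes to (agents / interval) thinks per second instead of agents per frame.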


Also, some components got bloated really fast; the logic component was over 1,000 lines of code, and it could have been split into a better architecture, such as: a steering component, pathing component, sensor component, behavior/logic component, data component, and utility component. This would keep the code more modular and make it clearer what is going on. If you take the time to really plan out and examine what your AI will be doing, it will be easier to design the system structure for how it will all work, and easier to understand.




Fix 3: Utility and Actions



Although utility was a very late addition to the game, it was done in a messy way and could be improved, because it was really only used for abilities. The system itself was a rushed extension, so even though it worked, it could have been done better. The first thing I would change is to apply it to all of the actions and not just the abilities; this would have made for more interesting AI. For example, just adding it to the targeting action would have produced more interesting target choices, and a little restructuring of the actions to handle scoring for all of them would have made things more interesting too. Also, I didn't have the time to make one, but if you can build a tool to visualize your utility functions, it will be much easier to create and manage your functions and tweak your values for each action you are scoring.
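Even without a real tool, a quick text plot goes a long way when tuning response curves. A throwaway Python helper (my own sketch, not a tool we had) might look like:

```python
def ascii_curve(fn, samples=10, width=20):
    """Return a quick ASCII plot of a scoring function over [0, 1];
    a poor man's curve visualizer for eyeballing utility responses."""
    lines = []
    for i in range(samples + 1):
        x = i / samples
        y = max(0.0, min(1.0, fn(x)))  # clamp scores into the plot range
        lines.append(f"{x:4.2f} |" + "#" * round(y * width))
    return "\n".join(lines)

# Example: print(ascii_curve(lambda x: x * x)) shows how a quadratic curve
# stays low for most inputs and ramps up sharply near 1.
```

Seeing the shape of a curve next to its parameters makes "why did the bot pick that ability" questions much faster to answer.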


That pretty much sums up my experience working on the AI systems for Nexus. If anyone who reads this has any questions, you can e-mail me at justinmaio01@gmail.com.
Also, if anyone is interested in a more in-depth look at any of the concepts or systems described above, let me know and I'll write up whatever knowledge I can about the topic. :)

Also, over the next couple of months I will be doing some independent AI projects as well as starting work on my next project, a co-op networked game in which I plan to implement nav meshes, squad tactics, and player modeling, so I will have some information about that soon.


Justin Maio