Looking for Critiques

Started by Henrik Luus
8 comments, last by IADaveMark 6 years ago

Here's a simple model for Game AI that I've come to use. I would appreciate hearing your thoughts on it, positive or negative.

I like that it scales nicely between different types of projects. It also abstracts the idea of the AI problem being solved from the specific technique used to solve it.

How does this map to the way you approach AI in your projects? Thank you for any input. :)

 

[Attached diagram: GameAI2.png]


It's very common to use layers of multiple AI algorithms or tools to control characters. In particular, the idea of using utility values to choose high-level activities and then a state-machine-like system (e.g. a hierarchical finite state machine, or behaviour trees) to execute them is a common and reasonable approach. If you need a planner in between (e.g. HTNs, GOAP) for your problems then that is fine too, although it's often unnecessary: planning can be done implicitly via the utility functions or embedded within the task execution itself.
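To illustrate that layering, here's a minimal sketch, not tied to any engine or real project; the activity names and scoring heuristics are made up. A utility layer scores and picks the high-level activity each tick, and a trivial per-activity execution step then carries it out:

```python
def score_activities(agent):
    """Utility layer: return a score per high-level activity (made-up heuristics)."""
    return {
        "fight": agent["enemy_near"] * agent["health"],          # bolder when healthy
        "flee":  agent["enemy_near"] * (1.0 - agent["health"]),  # cautious when hurt
        "idle":  0.1,                                            # low-priority fallback
    }

def step_activity(agent, activity):
    """Execution layer: a tiny state-machine-like step for the chosen activity."""
    if activity == "fight":
        agent["log"].append("approach and attack")
    elif activity == "flee":
        agent["log"].append("run to cover")
    else:
        agent["log"].append("wander")

agent = {"enemy_near": 1.0, "health": 0.25, "log": []}
for _ in range(3):
    scores = score_activities(agent)
    current = max(scores, key=scores.get)  # the utility layer chooses the activity...
    step_activity(agent, current)          # ...the execution layer performs it
print(agent["log"])                        # ['run to cover', 'run to cover', 'run to cover']
```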

What I haven't seen before is the idea that these algorithms exist in some sort of cycle; it's not clear to me how that would work in practice.

This is just a normal AI state machine you're showing here. It isn't what people call high-level, but I'd bet high-level AI contains a lot of state machines, so maybe this is a first step toward a high-level AI.

There is no real criticism to provide here, because this is a good idea. State machines are the backbone of many components in game engines. They make a lot of sense when you look at them and are very flexible.

 

The most common problem with any system like this is dependency: a bug in the code affects the whole system instead of just a part of it. So you need some way to see what is happening where in order to debug the code, which makes debugging slow.

Some people solve this with a visualizer that lets you see how properties change over time and how they affect other code.

 

**GameDev didn't tell me someone responded. Is that a bug, or part of the new system?**

52 minutes ago, Scouting Ninja said:

This is just a normal AI state machine you're showing here. It isn't what people call high-level, but I'd bet high-level AI contains a lot of state machines, so maybe this is a first step toward a high-level AI.

Hey Ninja. My apologies for making a misleading diagram. It doesn't depict a state machine. The arrows don't represent an agent transitioning between states.

Kylotan has it right: this shows a contextual model of how AI problems are divided into different domains, and how those domains relate to each other to govern different parts of an agent's AI.

Let me see if I can do a better job here. As an example, let's say you have an NPC agent:

  1. All of the agent's behaviors are carried out by a 'Task Execution' system.
  2. Maybe that agent is executing a behavior which listens for an event ("Enemy within range", let's say), then decides on a response.
  3. In order to respond, it pings a separate 'Goal Forming' system to choose a new goal for the agent, based on the current game state. This goal is then returned to the behavior. The goal is actually a target game state which the agent would like to achieve. For example: one in which the enemy's Health == 0.
  4. The behavior then needs a sequence of behaviors which will actually form the response. It passes the goal game state to a Planner system, which looks through a wide space of available behaviors, and returns a sequence of them which can transform the current game state into the target (goal) state. Example: ["Run to Enemy", "Kill Enemy"]
  5. The behavior then hands this plan of behaviors back to the Execution system, which sets about performing those behaviors.

The last point is that these systems (Execution, Goal Forming, Task Planning) are abstracted from the specific AI techniques used to accomplish them. In the diagram, a Utility AI is being mapped to the Goal Forming domain. But in another project, a different technique might be used to choose goals. In that case, the AI technique may have changed, but the 'Goal Forming' purpose of the domain would not.
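To make that division of domains concrete, here's a rough sketch of the flow in steps 1-5. It is not my actual code; the class and method names are invented for the example. The point is that each system could be backed by a different technique without changing how the domains talk to each other:

```python
class GoalForming:
    def choose_goal(self, game_state):
        # A utility AI could live behind this interface; the stub just targets the enemy.
        return {"enemy_health": 0}

class Planner:
    def plan(self, game_state, goal):
        # Search the space of available behaviors for a sequence that reaches the goal.
        if goal.get("enemy_health") == 0:
            return ["Run to Enemy", "Kill Enemy"]
        return []

class Execution:
    def __init__(self, goal_forming, planner):
        self.goal_forming, self.planner = goal_forming, planner

    def on_event(self, event, game_state):
        if event == "Enemy within range":                     # step 2: event caught
            goal = self.goal_forming.choose_goal(game_state)  # step 3: pick a goal state
            plan = self.planner.plan(game_state, goal)        # step 4: plan toward it
            for behavior in plan:                             # step 5: execute the plan
                print("executing:", behavior)

Execution(GoalForming(), Planner()).on_event("Enemy within range", {"enemy_health": 100})
```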

Any better?

5 hours ago, Henrik Luus said:

It doesn't depict a state machine.

This description is better. However, that only makes me think that debugging will be even more difficult.

5 hours ago, Henrik Luus said:

The behavior then needs a sequence of behaviors which will actually form the response. It passes the goal game state to a Planner system, which looks through a wide space of available behaviors

So it's the same as forming a sentence? Just as there are Subjects, Verbs, and Nouns, you use similar categories to form an "idea" for your AI?

Quote

So it's the same as forming a sentence? Just as there are Subjects, Verbs, and Nouns, you use similar categories to form an "idea" for your AI?

Kind of! What I was describing there is just the function of a planner AI: to build a sequence of tasks which can accomplish some goal.

One major problem I'm seeing just by scanning the thread is that you will often have a new "thing to do" before your execution finishes. Sure, you could continue performing your execution until it completes, but then you have the commonly seen problem of the agent flailing away at where an enemy used to be because it has no way of exiting that execution.

Dave Mark - President and Lead Designer of Intrinsic Algorithm LLC
Professional consultant on game AI, mathematical modeling, simulation modeling
Co-founder and 10 year advisor of the GDC AI Summit
Author of the book, Behavioral Mathematics for Game AI
Blogs I write:
IA News - What's happening at IA | IA on AI - AI news and notes | Post-Play'em - Observations on AI of games I play

"Reducing the world to mathematical equations!"

Thanks, @IADaveMark :)
 
You’re absolutely right. The problem you described is one of the reasons I've spun task execution into its own system, which operates independently of whatever’s doing the planning. In my code, this system is really the engine which drives everything else agent-AI-related. If anyone’s willing, I would appreciate your feedback on the approach - it’s been doing a good job, but I’m not sure how similar it is to existing solutions out there which might be better:

- - -
 
1. To handle task execution, I use a pattern I’ve been calling ‘behavior queues’. A behavior queue is a queue structure (first in, first out) which can hold a sequence of one or more agent behaviors. (There's a rough code sketch of the whole pattern after point 7, below.)

(Skip the rest of this explanation if you'd rather see a visual version, below.)
 
2. Only the oldest behavior in the queue is active at a time. When the active behavior notifies the queue that it’s complete, that behavior is popped from the queue, and the next behavior in line is activated. By this process, an agent performs behaviors.
 
3. An agent can own multiple behavior queues, and this is where the more complex problem solving comes in: each of an agent’s queues has an ID, and these IDs can be themed to different, specific purposes.
 
4. For example: One queue, “Listen for Enemies”, could have a length of 1, and run a single behavior which never completes. This behavior would listen for enemy game events, for example, “Enemy within range”. Upon catching such an event, it could ask a separate planner system for an appropriate response, in the form of a sequence of behaviors. For example, this response might look like:
 
[“Move to Enemy”, “Kill Enemy”]
 
5. The “Listen for Enemies” queue could then take that sequence of ‘response’ behaviors, and add them to a second, “Respond to Enemies” behavior queue. That sequence of behaviors would then immediately begin executing.
 
6. Crucially, the original “Listen for Enemies” behavior would still be active, back in its own queue. If another, bigger enemy appeared, it could ask the planner for the new set of response behaviors, clear out any remaining behaviors in the “Response” queue, and replace them with the new sequence.
 
7. This “Listen for Enemies” behavior could also listen for enemy death events, and clear the “Response” queue if the target enemy has died. That approach is how I’ve generally avoided the “agent whacking away at nothing” problem you described in your comment.
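Here's a condensed, hypothetical sketch of points 1-7. It isn't my production code; names like `listen_for_enemies` and `example_planner` are invented for the example. A behavior is just a callable that returns True when complete; the listener behavior never completes, and pushes the planner's output into the response queue:

```python
from collections import deque

class BehaviorQueue:
    """FIFO queue of behaviors; only the front (oldest) behavior is active."""
    def __init__(self, queue_id):
        self.id = queue_id
        self.behaviors = deque()

    def push(self, *behaviors):
        self.behaviors.extend(behaviors)

    def clear(self):
        self.behaviors.clear()

    def update(self, agent):
        if self.behaviors and self.behaviors[0](agent):  # behavior returns True when complete
            self.behaviors.popleft()                     # pop it; the next one activates

def make_behavior(name):
    """Wrap a plan step (e.g. 'Move to Enemy') as a one-shot behavior."""
    def behavior(agent):
        print("doing:", name)
        return True                                      # completes after one update
    return behavior

def example_planner(agent):
    # Stand-in for a real planner: returns a response plan as a list of behaviors.
    return [make_behavior("Move to Enemy"), make_behavior("Kill Enemy")]

def listen_for_enemies(agent):
    """Persistent behavior in the 'Listen for Enemies' queue; never completes."""
    if agent.pop_event("Enemy within range"):
        response = agent.queues["Respond to Enemies"]
        response.clear()                                 # drop any stale response (point 6)
        response.push(*agent.planner(agent))             # queue the new plan (point 5)
    return False

class Agent:
    def __init__(self, planner):
        self.planner = planner
        self.events = []
        self.queues = {
            "Listen for Enemies": BehaviorQueue("Listen for Enemies"),
            "Respond to Enemies": BehaviorQueue("Respond to Enemies"),
        }
        self.queues["Listen for Enemies"].push(listen_for_enemies)

    def pop_event(self, name):
        if name in self.events:
            self.events.remove(name)
            return True
        return False

    def update(self):
        for queue in self.queues.values():
            queue.update(self)

npc = Agent(example_planner)
npc.events.append("Enemy within range")
npc.update()   # doing: Move to Enemy
npc.update()   # doing: Kill Enemy
```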
 
- - -
 
A few more notes about behavior queues, for posterity:
  • In my implementation, each behavior queue has its own blackboard structure, which its active behavior can use to store some state data. The agent itself also has a blackboard, which multiple behaviors can use to communicate with each other.
  • In my code, each behavior queue can have a set of tags. This makes it simple to do things like “pause all behavior queues dealing with movement”, for example.
  • Beyond clearing a behavior queue, a new behavior can be inserted at any index in a queue. So, for example, an agent with a plan to interact with ‘thing A’ and then ‘thing B’ could have new behaviors inserted in the middle of the queue to give it a third stop along the way. (There's a small sketch of these extras below.)
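And a small sketch of those extras, again with hypothetical names rather than my real code, showing per-queue blackboards, tags, pausing by tag, and insertion at an arbitrary index:

```python
from collections import deque

class TaggedBehaviorQueue:
    """Hypothetical variant of the queue sketch above, adding the extras from these notes."""
    def __init__(self, queue_id, tags=()):
        self.id = queue_id
        self.tags = set(tags)      # e.g. {"movement"}
        self.blackboard = {}       # scratch state for this queue's active behavior
        self.behaviors = deque()
        self.paused = False

    def insert(self, index, behavior):
        # Squeeze a new behavior mid-plan, e.g. a third stop between 'thing A' and 'thing B'.
        self.behaviors.insert(index, behavior)

def pause_queues_with_tag(queues, tag):
    """'Pause all behavior queues dealing with movement', for example."""
    for queue in queues.values():
        if tag in queue.tags:
            queue.paused = True

# The agent itself would also carry a shared blackboard, e.g. agent.blackboard = {},
# which behaviors in different queues can use to communicate with each other.
```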

Do you have any thoughts on this pattern? How does this compare to what's already out there for handling task execution work?

Regardless, thank you for reading such a long post. :)

(Apologies for the image spam below. I don't know how to make these appear smaller.)

[Attached images: BQ-EX-01.png through BQ-EX-08.png]

That's my point, however. You have a queue of things you want to do in order to address a problem that you assessed... but as you are part way through that queue of actions, the situation may have changed. Are you going to finish them regardless? Flush the queue and readdress the situation? The former can lead to ridiculous behaviors that are no longer relevant. The latter means you are still having to calculate what you should be doing to see if you need to flush the queue and dump more in. If you are running that algo on a regular basis, you aren't that much different from a single-ply system like any other AI architecture (e.g. behavior tree, utility system, etc.).
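A toy sketch of what I mean, with made-up names: if you re-run the decision logic every think cycle and flush the queue whenever the answer changes, the queued plan rarely survives long enough for its later steps to run, so you're effectively back to a single-ply architecture:

```python
def think_cycle(agent, queued_plan, decide_plan):
    """Return the plan the agent should be executing this cycle."""
    new_plan = decide_plan(agent)       # this assessment runs every cycle anyway...
    if new_plan != queued_plan:
        return new_plan                 # ...so the queued steps are flushed and replaced
    return queued_plan                  # unchanged: keep working through the queue

# Toy usage: the queued 'plan' is rebuilt whenever the situation changes.
plan = []
for enemy_near in (True, True, False):
    plan = think_cycle({"enemy_near": enemy_near}, plan,
                       lambda a: ["attack"] if a["enemy_near"] else ["patrol"])
    print(plan)   # ['attack'], ['attack'], ['patrol']
```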

As an example, Monolith used GOAP on Shadow of Mordor... but when I asked what their average plan length was, it was 1.2. Therefore, they weren't actually coming up with "plans" (i.e. sequences of actions)... they were doing all that work for a single behavior to act on next. (They also dropped using GOAP right after that, IIRC.) On the other hand, the reason most people dropped GOAP is they were coming up with long plans that would get thrown away and rebuilt the next cycle because the world state had changed enough that it was either no longer a valid plan or wasn't the best thing to do. So they were doing a bunch of work to determine what to do over time and it was never getting done... just thrown away on the next think cycle.

So what I'm getting at is, why are you doing this? What is it that you think your advantage is? Also... and certainly relevant but hopefully not obnoxious to you... what other AI algos have you ever implemented? What is it that you are avoiding from those?


