An AI question.

Started by
22 comments, last by KingMolson 22 years, 5 months ago
I guess the easiest way to do it would be to have, for each ability, its results, the actual action to take, and its prerequisites. The system then checks through the abilities until it finds one that satisfies the goal, then checks its prerequisites; if these are not met, it runs through the abilities again to find ones that meet those prerequisites. Once all prerequisites are met, the plan is considered finished and it is executed. Should the system determine that an ability has become impossible (e.g. a quit timer on the ability if it takes too long, or conditions in the ability that would make it impossible), it would reformulate the plan from that point.
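Something like this toy sketch, perhaps (the names Ability and plan_for and the whole structure are just my own illustration, not from any particular engine):

```python
# Minimal sketch of the backward-chaining idea described above.
# Each ability lists the facts it produces (results) and the facts it needs (prerequisites).

class Ability:
    def __init__(self, name, results, prerequisites):
        self.name = name
        self.results = set(results)
        self.prerequisites = set(prerequisites)

def plan_for(goal, abilities, known_facts, depth=10):
    """Return a list of abilities that achieves `goal`, or None if no plan is found."""
    if goal in known_facts:
        return []                      # goal already satisfied, nothing to do
    if depth == 0:
        return None                    # give up: plan is getting too deep
    for ability in abilities:
        if goal in ability.results:
            plan = []
            for prereq in ability.prerequisites:
                sub_plan = plan_for(prereq, abilities, known_facts, depth - 1)
                if sub_plan is None:
                    break              # this ability's prerequisites can't be met
                plan.extend(sub_plan)
            else:
                return plan + [ability]  # all prerequisites planned for
    return None

# Example: to "eat" the agent first needs "have_food", which "find_food" provides.
abilities = [
    Ability("find_food", results=["have_food"], prerequisites=[]),
    Ability("eat", results=["hunger_satisfied"], prerequisites=["have_food"]),
]
plan = plan_for("hunger_satisfied", abilities, known_facts=set())
print([a.name for a in plan])  # ['find_food', 'eat']
```

Replanning when an ability becomes impossible mid-execution would just mean calling plan_for again from the agent's current facts, as described above.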
The Excalibur link is a good one for planning as a form of constraint-based search. There are other methods around, so don't just limit yourself to this one... even though it is a good one for finite-state worlds.

quote: Original post by Nazrix

I guess my major question is how an agent might, for instance, know that "eating" would "satisfy hunger" or "increase energy" or whatever unless we make that a rule of the world? Hope I am making some sense.

If we make that a rule of the world:
When you're hungry, eat

then it starts looking like scripting...especially when you get into more complex things like "if you want power, take over the throne"

I know things could be more fuzzy and less exact when it comes to rules but there must be some fairly definite rules introduced by the designer...at least it seems that way to me


The sorts of rules you are talking about define your world model. At some level these rules need to be encoded in the game. It is not necessary that the agent know these rules, but if they don't then they need a method of being able to either a) learn the rules; or b) learn stimulus-response pairs that depend on the rule.

So what does this mean?

Learning the rules means that the agent has a method of learning causal relationships about the world. It would need, for example, the concepts of hunger and food and the action eating and would need to learn a causal relationship that reflects that eating(food) reduces hunger. We could write this in a script style as: reduces(hunger,eating(food)) or more generally as reduces(f,a(o)); which could be read as the action a acting on object o causes a reduction in f. This is just an example I whipped up, rather than a paradigm of planning. You could write dozens or even hundreds of these rules for your world and allow the agent to learn some of them by experimenting in your world. Then once it knows a rule it can apply it to perform reasoning.
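As a toy illustration of storing and applying such rules (purely my own sketch, not from any planning library), a rule like reduces(hunger, eating(food)) could just be data the agent adds to once it has learned it:

```python
# Hypothetical rule store: (factor, action, object) means the agent believes
# that action(object) reduces factor -- i.e. reduces(factor, action(object)).
learned_rules = set()

def observe(factor, before, after, action, obj):
    """Learn a causal rule by experimenting: if the factor dropped, remember why."""
    if after < before:
        learned_rules.add((factor, action, obj))

def actions_that_reduce(factor):
    """Apply learned rules: which actions does the agent believe reduce this factor?"""
    return [(action, obj) for (f, action, obj) in learned_rules if f == factor]

# The agent tries eating food and its hunger drops from 0.8 to 0.3,
# so it learns reduces(hunger, eating(food)).
observe("hunger", 0.8, 0.3, "eating", "food")
print(actions_that_reduce("hunger"))   # [('eating', 'food')]
```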

Alternatively, you could use factors like hunger as a utility function. Rather than having the agent learn rules, it learns stimulus-response relationships that maximise its utility (or a value function). The value function simply measures the quality of performing that action in that state. An excellent example of this is the Creatures agent function, which was implemented as an artificial neural net with nine niches.
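To make the value-function idea concrete, here is a simple tabular sketch of my own (not how Creatures actually implemented it, which was a neural net):

```python
# Tabular value function: value[(state, action)] estimates how good an action is in a state.
value = {}
learning_rate = 0.1

def update(state, action, reward):
    """Nudge the stored value toward the reward the agent just received."""
    old = value.get((state, action), 0.0)
    value[(state, action)] = old + learning_rate * (reward - old)

def best_action(state, actions):
    """Stimulus-response: pick the action with the highest learned value in this state."""
    return max(actions, key=lambda a: value.get((state, a), 0.0))

# The agent tries things while hungry; eating is rewarded, sleeping is not.
update("hungry", "eat", reward=1.0)
update("hungry", "sleep", reward=0.0)
print(best_action("hungry", ["eat", "sleep"]))   # 'eat'
```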

Let's assume now that the agent knows a world model, either explicitly or as implicit stimulus-response pairs. How do you perform planning? There are two approaches: one is a search through the state space of the domain; the other is a search through the plan space. State space searches are easier to understand and implement, so let's look at one.

Assume that your agent is in the state tired and it seeks a plan to be in the state energetic so it can perform some task. A state space search would start with a node tired and expand against all possible actions that the agent could perform to generate successor nodes. You continue to expand the search tree using a search algorithm until you find a goal node, which is the state energetic. The sequence of actions corresponding to the path from the root node to the goal node is your plan.
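A breadth-first version of that search might look something like this (the actions, state names, and world model are invented purely for the example):

```python
from collections import deque

# Hypothetical world model: action -> (state it applies in, state it produces).
actions = {
    "sleep":        ("tired", "rested"),
    "eat":          ("rested", "energetic"),
    "drink_coffee": ("tired", "energetic"),
}

def find_plan(start, goal):
    """Breadth-first search through the state space; returns the action sequence."""
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        state, plan = frontier.popleft()
        if state == goal:
            return plan
        for action, (pre, post) in actions.items():
            if pre == state and post not in visited:
                visited.add(post)
                frontier.append((post, plan + [action]))
    return None

print(find_plan("tired", "energetic"))   # ['drink_coffee'] -- fewest actions
```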

Alternatively, you might assign a cost to paths through the state-space graph. For example, if the actions require movement of the agent, then the costs might be based on distance, time, energy expended, etc. Choosing the path with the lowest cost that satisfies the goal is always a good idea.
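Extending the sketch above, a lowest-cost (uniform-cost, Dijkstra-style) search just attaches a cost to each action and expands the cheapest partial plan first; the costs below are arbitrary numbers I picked for illustration:

```python
import heapq

# Same hypothetical world model, but each action now carries a cost (e.g. time or energy).
actions = {
    "sleep":        ("tired",  "rested",    8.0),
    "eat":          ("rested", "energetic", 0.5),
    "drink_coffee": ("tired",  "energetic", 5.0),
}

def find_cheapest_plan(start, goal):
    """Uniform-cost search: always expand the cheapest partial plan first."""
    frontier = [(0.0, start, [])]
    best_cost = {start: 0.0}
    while frontier:
        cost, state, plan = heapq.heappop(frontier)
        if state == goal:
            return cost, plan
        for action, (pre, post, step_cost) in actions.items():
            if pre == state:
                new_cost = cost + step_cost
                if new_cost < best_cost.get(post, float("inf")):
                    best_cost[post] = new_cost
                    heapq.heappush(frontier, (new_cost, post, plan + [action]))
    return None

print(find_cheapest_plan("tired", "energetic"))   # (5.0, ['drink_coffee'])
```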

This is a very simple look at planning as search. There are many issues that you need to consider; like the size of the search tree, whether action outcomes are certain, whether exogenous factors will affect the state transitions, etc.

If there is uncertainty in the outcome of the plan, then you can consider decision-theoretic planning. The Expected Utility of a plan is the utility of being in the goal state multiplied by the probability that the plan achieves that goal state. Choosing the plan that maximises the Expected Utility is the basis of rational action.
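In code, picking a plan by Expected Utility is just a weighted comparison; the probabilities and utilities below are made-up example numbers:

```python
# Expected Utility = utility of the goal state * probability the plan actually reaches it.
candidate_plans = [
    # (plan, probability of success, utility of the resulting state)
    (["sleep", "eat"], 0.95, 10.0),
    (["drink_coffee"], 0.60, 10.0),
]

def expected_utility(plan):
    _, p_success, utility = plan
    return p_success * utility

best = max(candidate_plans, key=expected_utility)
print(best[0])   # ['sleep', 'eat'] -- more reliable, so higher expected utility
```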

I suggest that you do some reading in the field to get a better understanding than my few lines can offer. As I said before, I will be happy to answer further questions.

Cheers,

Timkin
quote: Original post by Timkin
The sorts of rules you are talking about define your world model. At some level these rules need to be encoded in the game. It is not necessary that the agent know these rules, but if they don't then they need a method of being able to either
a) learn the rules; or b) learn stimulus-response pairs that depend on the rule.


yes I see what you mean



Thanks, Timkin. I know you cannot explain everything in one single post, but thanks for the introduction. I was thinking along those lines intuitively but it's nice to hear a more formal introduction to the concept.

A CRPG in development...

Need help? Well, go FAQ yourself.




Edited by - Nazrix on January 28, 2002 9:24:12 PM
Need help? Well, go FAQ yourself. "Just don't look at the hole." -- Unspoken_Magi
I had written a lengthy post, but I was apparently disconnected when I hit post and lost my data. So, here are my main points:

A#1 fundamental rules:
1) An AI can only be as complex as its environment.
2) Input/feedback, output, and data are the fundamental agents of thought.
3) An AI can only be as complex as its ability to gain or use input/feedback, output, and data.
4) Learning occurs through the observation of input and feedback.
5) Learning occurs through association.
6) Context provides foundations for learning.
7) Instincts and fundamental rules are required to create contexts.
8) Instincts are required to create drives.
9) Drives are required for motivation.

Also, I recommend a book for general theory: "Hidden Order: How Adaptation Builds Complexity" by John H. Holland.

