
Markov

Started by
3 comments, last by Cho 22 years, 10 months ago
Hi, I want to know what the utility of the Markov algorithm is for game AI. Thanks.
I don't know too much about the Markov Localization Algorithm (which I assume you're talking about, and not Markov chains), but it looks like you can use it to create a very complex AI that can find its way around an environment.

Using it would take a lot of data, but it would be interesting to watch it go =)

Yea, well anyway, I guess it is just another way to have your AI navigate.

Invader X
Invader's Realm
Mmh? I never heard of Markov localization, what the heck is that? Do you have any references where I could read about it?



Sancte Isidore ora pro nobis !
Basically, the Markov Localization Algorithm is a way for a robot to determine where it is based on input data and the robot's memory of what it has done.

Here's a link to an explanation

Invader X
Invader's Realm
'Markov Localisation' is just a spiffy name given to the problem of self-determination of state by an autonomous agent (usually a robot). The dynamics of the robot's and environment's state transition functions are assumed Markovian and hence can be modelled using a Hidden Markov Model. Since the aim of the localisation problem is to infer a belief state (a probability distribution over possible states) using a Markov model, you can see where the name comes from.

As for why Markovian dynamics are useful in games... well, it's all about predictability. Markovian systems are such that the future is independent of the past, given the present. Meaning that if you know only the current state of the world and a transition function for how the world changes from one moment to the next, then you can predict any future state. The power of this 'Markov assumption' is that you can use probability distributions to represent belief states and predict future belief states given only a prior distribution and a conditional distribution for the transition function (ie, p(x(t+dt)|x(t))). You can then incorporate evidence (observations) using Bayes' Rule, which gives you a method of inferring the state of the system given observations on only a subset of the system variables.
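The predict/update cycle described above can be sketched in a few lines. The three-state world, the transition matrix, and the sensor model here are invented purely for illustration; they are not from any particular game or library:

```python
# States: 0 = "room A", 1 = "room B", 2 = "room C" (made-up world)
# transition[i][j] = p(next state is j | current state is i)
transition = [
    [0.7, 0.3, 0.0],
    [0.0, 0.6, 0.4],
    [0.2, 0.0, 0.8],
]

# sensor[i] = p(sensor reports "near door" | agent is in state i)
sensor = [0.9, 0.1, 0.5]

def predict(belief):
    """Push the belief one step forward through the transition model."""
    return [sum(belief[i] * transition[i][j] for i in range(3))
            for j in range(3)]

def update(belief, likelihood):
    """Incorporate an observation with Bayes' Rule, then normalise."""
    unnorm = [b * l for b, l in zip(belief, likelihood)]
    total = sum(unnorm)
    return [u / total for u in unnorm]

belief = [1 / 3, 1 / 3, 1 / 3]    # uniform prior: no idea where we are
belief = predict(belief)          # p(x(t+dt)) from p(x(t)) and the transition model
belief = update(belief, sensor)   # condition on the sensor saying "near door"
```

After one predict step and one "near door" observation, the belief concentrates on the rooms that both the dynamics and the sensor model favour; repeating the cycle each tick is the whole localisation loop.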

Such modelling methods find particular use where there is uncertainty involved in: a) the observations (eg a noisy sensor); and/or, b) the transition model (eg because it was simplified to become computationally tractable).

There is not much call for such methods in current games, as the programmer of any game AI has access to the game state object at any time (meaning accurate observations of the state of the world and all agents). Hence, the only source of uncertainty is in the transition models of the game agents, the game world state, and/or the player. For the most part, it is only the latter that incorporates uncertainty (as the player's future decisions (actions) are not known by the game).

On a slight side note though: I can see a use for uncertainty representations in network games, where 'lag' (network delay) can cause uncertainty about the current position and state of a player. That is, they may have changed state since the last packet update, but the client packet has not yet arrived at the server. This implies that there is some uncertainty about the current state of the player. Packets arriving at the server would then be treated as observations (at discrete times) of the player's state. Using such a model could help to ease the problems associated with lag in FPS network games.
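That lag idea can be sketched with the same predict/update machinery: the server keeps a belief over a player's position on a (hypothetical) 1D track of cells, spreads it while packets are missing, and sharpens it when a late, possibly noisy position report arrives. The ten-cell track and the movement/noise numbers are invented for the example:

```python
N = 10  # cells on the 1D track (made-up discretisation)

def spread(belief):
    """One server tick with no packet: the player may have stayed put
    or moved one cell either way (mass reflects at the track ends)."""
    out = [0.0] * N
    for i, p in enumerate(belief):
        out[i] += 0.5 * p
        out[max(i - 1, 0)] += 0.25 * p
        out[min(i + 1, N - 1)] += 0.25 * p
    return out

def observe(belief, reported):
    """A packet reporting cell `reported` arrives; trust it heavily,
    but leave a little likelihood elsewhere for a stale/noisy report."""
    likelihood = [0.05] * N
    likelihood[reported] = 1.0
    unnorm = [b * l for b, l in zip(belief, likelihood)]
    total = sum(unnorm)
    return [u / total for u in unnorm]

belief = [0.0] * N
belief[4] = 1.0              # last confirmed position: cell 4
for _ in range(3):           # three ticks of lag: the belief spreads out
    belief = spread(belief)
belief = observe(belief, 5)  # a late packet finally reports cell 5
```

During the three silent ticks the server's belief widens around cell 4; when the packet arrives, the Bayes update pulls the most likely position to the reported cell while still keeping the belief a proper distribution.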


Regards,

Timkin

Edited by - Timkin on August 16, 2001 1:47:15 AM

