
We are not alone!

Started by IADaveMark May 16, 2003 12:28 PM
11 comments, last by IADaveMark 21 years, 3 months ago
I had the opportunity yesterday to ask Will Wright (The Sims), Peter Molyneux (Black & White), and David Jones (Grand Theft Auto) whether, with the size and scope of their virtual worlds increasing dramatically, they thought that managing emergent behavior was going to be the primary challenge or obstruction. All of them are, to a degree, rather frightened/intrigued by that very issue. I'm glad that they are just as anxious about it as I am.

Dave Mark - President and Lead Designer
Intrinsic Algorithm - "Reducing the world to mathematical equations!"

Dave Mark - President and Lead Designer of Intrinsic Algorithm LLC
Professional consultant on game AI, mathematical modeling, simulation modeling
Co-founder and 10 year advisor of the GDC AI Summit
Author of the book, Behavioral Mathematics for Game AI
Blogs I write:
IA News - What's happening at IA | IA on AI - AI news and notes | Post-Play'em - Observations on AI of games I play

"Reducing the world to mathematical equations!"

Well, you can debug code easily enough. Statement does wrong thing, modify code, statement does correct thing.

But it's hard to even know what the emergent behavior will be until it all goes into effect, and then the ordeal of tweaking it so that the wrong things don't interfere with each other must be a bear if it isn't well planned out.

I mean, you could try it all out on paper and prove it workable beforehand, but YEESH!!!

I wonder how they plan it beforehand, to the extent that they can...
It's not what you're taught, it's what you learn.
quote: Original post by Waverider
I mean, you could try it all out on paper and prove it workable beforehand, but YEESH!!!

I wonder how they plan it beforehand, to the extent that they can...


Actually, the term emergent behavior was coined to describe behavior that was not planned, or tried out beforehand, or proven. It is behavior that emerges from other programming.

Look up steering behaviors and flocking on Google and read about that classic example of emergent behavior if you are interested in how it is done.
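
For the curious, here is a minimal boids-style sketch in C++ (the names and weights are my own, purely for illustration, not Reynolds' actual code). Each agent follows only three local steering rules, yet the group as a whole flocks, even though nothing in the code ever mentions "a flock":

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Toy 2D vector and boid; illustrative names only.
struct Vec2 { float x, y; };
Vec2 operator+(Vec2 a, Vec2 b) { return { a.x + b.x, a.y + b.y }; }
Vec2 operator-(Vec2 a, Vec2 b) { return { a.x - b.x, a.y - b.y }; }
Vec2 operator*(Vec2 a, float s) { return { a.x * s, a.y * s }; }
float dist(Vec2 a, Vec2 b) { return std::sqrt((a.x - b.x) * (a.x - b.x) + (a.y - b.y) * (a.y - b.y)); }

struct Boid { Vec2 pos, vel; };

// One update step: each boid reacts only to neighbours within 'radius'.
// Flocking is never programmed explicitly; it emerges from the three rules.
void step(std::vector<Boid>& boids, float radius, float dt)
{
    std::vector<Vec2> accel(boids.size(), Vec2{ 0, 0 });
    for (std::size_t i = 0; i < boids.size(); ++i) {
        Vec2 cohesion = { 0, 0 }, alignment = { 0, 0 }, separation = { 0, 0 };
        int n = 0;
        for (std::size_t j = 0; j < boids.size(); ++j) {
            if (i == j || dist(boids[i].pos, boids[j].pos) > radius) continue;
            cohesion   = cohesion + boids[j].pos;                      // steer toward local centre
            alignment  = alignment + boids[j].vel;                     // match neighbours' heading
            separation = separation + (boids[i].pos - boids[j].pos);   // avoid crowding
            ++n;
        }
        if (n > 0) {
            cohesion  = (cohesion * (1.0f / n)) - boids[i].pos;
            alignment = alignment * (1.0f / n);
            accel[i]  = cohesion * 0.01f + alignment * 0.05f + separation * 0.1f;  // arbitrary tuning weights
        }
    }
    for (std::size_t i = 0; i < boids.size(); ++i) {
        boids[i].vel = boids[i].vel + accel[i] * dt;
        boids[i].pos = boids[i].pos + boids[i].vel * dt;
    }
}

int main()
{
    std::vector<Boid> flock;
    for (int i = 0; i < 20; ++i)
        flock.push_back({ { float(i % 5), float(i / 5) }, { 1.0f, 0.0f } });
    for (int t = 0; t < 100; ++t)
        step(flock, 3.0f, 0.1f);   // a real game would render the boids each frame
}
```

The weights are arbitrary tuning constants; the interesting part is that the flocking behaviour itself appears nowhere in the code.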

Eric
Peter Molyneux's great comment (in his delightful accent) was that there is something horrifying about feeling that this robust system of yours could always be [this close] to collapsing on itself.

Dave Mark - President and Lead Designer
Intrinsic Algorithm - "Reducing the world to mathematical equations!"


quote: Original post by InnocuousFox
Peter Molyneux's great comment (in his delightful accent) was that there is something horrifying about feeling that this robust system of yours could always be [this close] to collapsing on itself.


Ooh, I like that line...

That would definitely be an interesting research idea... the stability analysis of interacting agents. Intuitively I don't believe that these systems are as close to the edge as their developers might think... but then, the whole point is that we really don't know where that edge is!
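
One crude, purely illustrative way to poke at that: run the same toy simulation twice from almost-identical starting states and watch whether the difference between the runs shrinks or explodes. A sketch (the 'world' below is a made-up stand-in for a real game simulation, not anyone's actual system):

```cpp
#include <cmath>
#include <cstddef>
#include <cstdio>
#include <vector>

// Toy "interacting agents" world: each agent accelerates toward its nearest
// neighbour on a line. Purely illustrative; stands in for a real game sim.
struct Agent { double x, v; };

void step(std::vector<Agent>& a)
{
    for (std::size_t i = 0; i < a.size(); ++i) {
        double best = 1e18, target = a[i].x;
        for (std::size_t j = 0; j < a.size(); ++j) {
            if (j == i) continue;
            double d = std::fabs(a[j].x - a[i].x);
            if (d < best) { best = d; target = a[j].x; }   // nearest neighbour
        }
        a[i].v += 0.1 * (target - a[i].x);   // pull toward that neighbour
        a[i].v *= 0.99;                      // a little damping
    }
    for (std::size_t i = 0; i < a.size(); ++i) a[i].x += a[i].v;
}

// Crude stability probe: copy the world, nudge one agent by a tiny epsilon,
// run both copies in lockstep, and track how far apart the two runs drift.
int main()
{
    std::vector<Agent> a;
    for (int i = 0; i < 5; ++i) a.push_back({ double(i), 0.0 });
    std::vector<Agent> b = a;
    b[0].x += 1e-6;   // the perturbation

    for (int t = 0; t < 50; ++t) {
        step(a); step(b);
        double divergence = 0;
        for (std::size_t i = 0; i < a.size(); ++i)
            divergence += std::fabs(a[i].x - b[i].x);
        if (t % 10 == 0) std::printf("t=%d  divergence=%g\n", t, divergence);
    }
}
```

If the printed divergence grows exponentially, the system really is living near that edge; if it decays, the behaviour is at least locally stable around that starting state. Of course a real game's state space is vastly bigger than one scalar per agent, which is exactly the problem.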

Cheers,

Timkin
It would be important to make the distinction between logical bugs in programming systems and interactions resulting from complex systems. For example, the often-cited situation in Black & White where a character tried to eat itself because it was the most nutritious thing nearby would be a logical system design problem ('exclude self when thinking about what to eat').

Emergence exists inside a certain possibility space; as long as the people working with it know the shape of the space, they should be fine. The trick seems like it will be keeping bugs from artificially expanding or narrowing the selected space.
quote: Original post by cannelbrae
Emergence exists inside a certain possibility space...


Yes, but the size of the state spaces we are talking about is bloody huge! Understanding the behaviour of your agents in all regions of this space is beyond the scope of any QA team... one hopes that, when the game ships, the behaviours aren't too wild in the extreme 'badlands' of the state space, so that any weird behaviour will only be seen very rarely and can be fixed in a patch or sequel/expansion if it really is that bad.

Timkin
(Off topic)
quote: Original post by cannelbrae
For example, the often-cited situation in Black & White where a character tried to eat itself because it was the most nutritious thing nearby would be a logical system design problem ('exclude self when thinking about what to eat')

Nah... surely, rather than introducing special cases, keep it generic: the pain of being eaten should outweigh the gain from eating, thus discouraging the behaviour. Simple behavioural psychology.
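
In utility terms, that just means making sure the pain term dominates the nutrition term. A rough sketch of what I mean (names and numbers are made up for illustration, not anything from Black & White itself):

```cpp
#include <cstdio>
#include <string>

// Toy utility scoring for "should I eat this?". Pain is just another input
// to the same generic score; there is no special-case exclusion of self.
struct Thing    { std::string name; float nutrition; };
struct Creature { float hunger; float painSensitivity; };

// Pain from being eaten only applies when the candidate is the creature itself,
// and it is weighted so that it normally swamps any nutritional gain.
float eatScore(const Creature& me, const Thing& candidate, bool candidateIsMe)
{
    float gain = candidate.nutrition * me.hunger;
    float pain = candidateIsMe ? candidate.nutrition * me.painSensitivity * 10.0f : 0.0f;
    return gain - pain;
}

int main()
{
    Creature wolf = { 0.8f, 1.0f };
    Thing pig  = { "pig", 50.0f };
    Thing self = { "me",  80.0f };   // even if "I" am the most nutritious thing around...
    std::printf("eat pig:  %f\n", eatScore(wolf, pig,  false));
    std::printf("eat self: %f\n", eatScore(wolf, self, true));   // ...this comes out strongly negative
}
```

Self is still scored by the same generic function as everything else; it just loses on its own merits.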



[ MSVC Fixes | STL Docs | SDL | Game AI | Sockets | C++ Faq Lite | Boost
Asking Questions | Organising code files | My stuff | Tiny XML | STLPort]
quote:
the pain of being eaten should outweigh the gain from eating, thus discouraging the behaviour. Simple behavioural psychology.


I have to say, I've never really thought about the concept of eating oneself because it's the most nutritious thing nearby, and I have to admit to having a chuckle. The logic is flawless, if lacking in common-sense knowledge.

As for pain outweighing the gain from eating oneself, consider a creature that didn't feel much pain at all but had a huge appetite. You would eventually have to make a special case the other way, for a creature that doesn't feel pain as much as the others.

No, the solution is that a character interacts with his environment in the context of self, not with self as the subject. In the little loop that goes around collecting items to examine for closeness and nutrition, the self obviously should not be there!
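
Literally something as plain as this (an illustrative sketch with made-up names; the only line that matters is the one that skips self):

```cpp
#include <cstddef>
#include <cstdio>
#include <vector>

struct Entity { float x, y, nutrition; };

// Pick the best thing to eat, skipping 'self' while building the candidate list.
const Entity* pickFood(const Entity& self, const std::vector<Entity>& world)
{
    const Entity* best = nullptr;
    float bestScore = -1.0f;
    for (std::size_t i = 0; i < world.size(); ++i) {
        if (&world[i] == &self) continue;   // self is simply not on the menu
        float dx = world[i].x - self.x;
        float dy = world[i].y - self.y;
        float score = world[i].nutrition / (1.0f + dx * dx + dy * dy);   // near and nutritious wins
        if (score > bestScore) { bestScore = score; best = &world[i]; }
    }
    return best;
}

int main()
{
    // The creature itself lives in the same entity list as everything else.
    std::vector<Entity> world = { { 0, 0, 100 }, { 1, 1, 20 }, { 5, 5, 50 } };
    const Entity& self = world[0];
    if (const Entity* food = pickFood(self, world))
        std::printf("eat the thing at (%g, %g)\n", food->x, food->y);
}
```

If self never makes it into the candidate list, the question of eating yourself simply never comes up.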


quote: Original post by Robbo

No, the solution is that a character interacts with his environment in the context of self, not with self as the subject.


It is only when the character includes self as a subject (such that self is an object in the character's internal representation of the world) that self-consciousness is possible.

And we all want our super-duper-extra-special-next-next-gen AI to be self-conscious, don't we? ;-)

Seriously though, including self in some determinations during planning is often necessary, depending on the behaviour. I'm sure starving people have considered whether eating their own flesh might help them survive that little bit longer.

quote: Original post by InnocuousFox

Peter Molyneux's great comment (in his delightful accent) was that there is something horrifying about feeling that this robust system of yours could always be [this close] to collapsing on itself.


Delightful accent?

I'll tell him you said that ;-)

In truth, a system's degree of robustness can only be tested under an alarmingly small subset of the possible situations it may face after final release (given x hours of testing versus several thousand times x hours in the hands of evil-minded, intent-on-breaking-the-simulation gamers). Any novel predicament is the stuff of nightmares, which is why such initially open-ended games often end up being constrained to the necessities of testability.

