
Research ideas?

Started by Timkin
4 comments, last by Timkin 21 years, 9 months ago
Okay, time again to pick your brains. I'm submitting a research proposal as part of a job application I'm putting together. The position is researching AI for computer games (in a University environment) and involves strong links with the Australian computer game industry. My particular interests - and what I'm leaning toward - are developing models and algorithms for Cooperative Agents for Tactical Advantage. There are plenty of other ideas that I could go for (emotional models, human behaviours, etc.) and others that I may not have thought of yet or that don't spring to mind immediately.

What I'd like to hear from you folks is ideas for *useful* games AI research... AI behaviours that would be meaningful in a computer game or other synthetic environment (for example, my work on cooperative agents has applications in military training systems and overlaps studies in biological systems... pack animals, for instance).

Furthermore, for those industry members reading this: are there particular AI aspects that your company is interested in but does not have the resources/expertise to develop? Organising collaborative research with industry is highly desirable in this position, as it will be a founding position in a new Centre for Games Technology here in Australia (a University research facility dedicated to developing next-generation games technologies).

All serious ideas will be considered!

Thanks,
Timkin
Timkin,

Good luck with your application. I understand the trials and tribulations that come with academic life - that's why I'm in industry now. 8^) Best wishes!!

As for research ideas, have you considered looking at the balance between redundancy and pleiotropy? Redundancy is when more than one agent can perform a particular task, and pleiotropy is when a single agent can perform multiple tasks. The typical trade-off is cost vs. efficiency.

Pleiotropic agents cost more to maintain (higher salary due to a higher degree of training/education), and they are not necessarily specialists in a single task (low task efficiency), but they contribute to overall productivity by eliminating the possibility of bottlenecks due to a lack of particular skills (the rate-limiting step). Non-pleiotropic agents are less costly and are highly efficient at a particular task; however, they cannot serve multiple roles.

Redundancy is a backup plan. Have two agents capable of the same task and you get very high efficiency on that task; if one goes down, you still have one to maintain a reasonable level of output.

Effectively, there must be some dynamic balance between these different types of agents, as well as the degree of pleiotropy within the agents themselves and the redundancy for the task at hand. Where does that balance exist?
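One way to make that balance concrete is a toy throughput model. This is only a sketch of the idea; the agent rates, skill sets, and task names below are illustrative assumptions, not anything from the discussion.

```python
# Toy model of the redundancy/pleiotropy trade-off: specialists work
# fast but cover one task; generalists are slower but cover everything.

def throughput(agents, tasks):
    """Total ticks to finish all tasks, assigning each task to the
    fastest agent whose skill set covers it (one task at a time).

    agents: list of (skills, rate) pairs; rate is work per tick.
    tasks:  list of task names, one unit of work each.
    """
    total = 0.0
    for task in tasks:
        rates = [rate for skills, rate in agents if task in skills]
        if not rates:
            return float("inf")      # bottleneck: nobody can do this task
        total += 1.0 / max(rates)    # fastest capable agent does it
    return total

specialists = [({"mine"}, 1.0), ({"build"}, 1.0)]
generalists = [({"mine", "build"}, 0.6), ({"mine", "build"}, 0.6)]
work = ["mine", "build"] * 3

print(throughput(specialists, work))      # fastest while both survive
print(throughput(specialists[1:], work))  # lose the miner: inf (stalled)
print(throughput(generalists, work))      # slower, but never blocked
```

The interesting experiment is sweeping the mix of specialists and generalists against a random agent-loss rate and seeing where total output peaks; that peak is one candidate answer to "where does the balance exist?".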

Just some ideas...

-Kirk

[edited by - KirkD on September 19, 2002 8:10:24 AM]
@ Timkin, not to parrot your previous post (although I find that quite common when dealing with experts in a field I have a dim competence in). Communication and interrelation between agents is definitely "where it is at". If you study the booming online multiplayer FPS games, you'll find two things:

1) If you can handle the network code, online FPS games are easier to code and result in better enemies (because they are other players).

2) AI agents' primary downfall is the lack of a coherent strategy and the inability to regroup and reform after an unforeseen event breaks up their scripted attack.

Aristotle said "The whole is greater than the sum of its parts." Not really so in computer AI, because it is "enabled" mathematically, where 1 + 1 + 1 = 3: the whole equals the sum of its parts. This hindrance is the greatest shortcoming of contemporary game AI (IMHO).
We achieve synergies via communication and planning; we extrapolate contingencies based on cause and effect, and expectations based on past experiences. Hard to code but easy to quantify. Tackling that issue is a worthy venture indeed.

Dreddnafious Maelstrom

"If I have seen further, it was by standing on the shoulders of Giants"

Sir Isaac Newton
"Let Us Now Try Liberty" -- Frédéric Bastiat
quote: Original post by Dreddnafious Maelstrom
2) AI agents' primary downfall is the lack of a coherent strategy and the inability to regroup and reform after an unforeseen event breaks up their scripted attack.


This is certainly something I'm particularly interested in. As I see it, it's the ability to replan at the strategic level while maintaining adaptive, tactical behaviours. I did a lot of work on replanning in real time for my PhD, so at some stage I will look at this problem in more detail...

Kirk: Interesting thoughts! One question that springs to mind is whether you would expect to find such traits (specialisation vs generalisation) within a single (non-human) species, or whether it's just something that we humans have created because of our societal structure (which permits specialisation through reliance on others to complete the tasks we cannot). BTW: I seem to recall that I've asked you this in the past, but what is it you're doing in industry these days?


Dredd, another question for you (given your background and knowledge): how would you define tactical advantage in a general sense? Feel free to use specific examples if you think they'll highlight the general principle. I have my own ideas, but I am far less of an expert in this sort of thing and would like to hear your thoughts.

Thanks,

Timkin

[edited by - Timkin on September 19, 2002 10:31:17 PM]
Timkin,

Thanks! Industry these days consists of me keeping up with the constant reorganizations and refocusing. Ugh. Actually, I spend a lot of time doing cheminformatics and bioinformatics modelling for a drug discovery company. My pet project is the development of classifier trees (binary decision trees) via Evolutionary Programming. So far I'm getting results a load better than good old Recursive Partitioning - now I have to start writing it up.

Back to the question at hand:

quote: Original post by Timkin
One question that springs to mind is whether you would expect to find such traits (specialisation vs generalisation) within a single (non-human) species or is it just something that we humans have created because of our societal structure (which permits specialisation by reliance on others to complete the tasks we cannot)?


Being from the bio/chemistry world, I was actually thinking of cellular and subcellular agents. For example, enzymes involved in amino acid synthesis are often pleiotropic in that they catalyze multiple steps along the path to the final destination, but often they're slower than a series of enzymes dedicated to each specific step. In contrast, hormone receptors and other signalling paths are often redundant in that there are multiple receptors for the same hormone or multiple paths converge on a single target, however each can only respond to a single signal.

Of course, you could just as easily project this up into the corporate world and consider that workers fall into a few groups: 1) single task workers can master a single task very well but have difficulty with new tasks; 2) multitaskers can do multiple tasks with reasonable efficiency but cannot master any one as well as group (1). Then we get into the whole balance of cost vs. efficiency. I would suspect that in this case it would be best to have many workers from group (1) with your redundancy supplied by a few members of group (2). That way the system would function at maximal efficiency most of the time, and if you lost a specialized worker, it wouldn't grind to a halt (but would slow down a bit) as the group (2) worker filled in. What is the governing dynamic? 8^)

Ant colonies are another example. You have workers, explorers, defenders, and the queen. Each one serves a specialized purpose and the redundancy (except for the queen) is supplied by their sheer numbers. Slime mold is another interesting example. All individuals appear to be the same (redundancy) until it comes time to reproduce. At that time, some individuals specialize into stalk and some into bud cells.

At the risk of making a painfully long post, let's consider the game world. Suppose we have something similar to the ant colony: workers gathering food and maintaining the village, warriors defending from invaders, bureaucrats doing whatever it is that they do. Now, suppose all your workers are killed off by a raging flu epidemic. Can the warriors pick up the workers' skills well enough to restore the village? Or suppose your warriors are all killed off by an invader; can the workers learn to fight well enough to defend the village? Something of a trivial example, but I'm sure you see what I'm getting at.
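The flu-epidemic question can be sketched directly. This is a hypothetical toy, and the retraining penalty is an assumed number, but it shows the mechanic: agents can be reassigned out of their trained role at a cost in skill.

```python
# Village agents have a trained role; reassignment works, but at a
# penalty. RETRAIN_PENALTY is an assumed constant for illustration.

RETRAIN_PENALTY = 0.4   # a retrained agent keeps 40% of their skill

class Villager:
    def __init__(self, role, skill=1.0):
        self.role = role            # e.g. "worker" or "warrior"
        self.skill = skill

    def reassign(self, new_role):
        if new_role != self.role:
            self.role = new_role
            self.skill *= RETRAIN_PENALTY

def output(villagers, role):
    """Total skill the village can apply to one role."""
    return sum(v.skill for v in villagers if v.role == role)

village = ([Villager("worker") for _ in range(4)]
           + [Villager("warrior") for _ in range(4)])

# The flu kills every worker; retrain two warriors to keep food coming.
village = [v for v in village if v.role != "worker"]
for v in village[:2]:
    v.reassign("worker")

print(output(village, "worker"))    # degraded but non-zero food supply
print(output(village, "warrior"))   # defence is thinner now as well
```

Whether the village survives then depends on whether the degraded worker output still clears the village's upkeep, which is exactly the cost-vs-efficiency balance from earlier in the thread.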

I'll try to come up with another, more complex example....

-Kirk



[edited by - KirkD on September 20, 2002 8:54:53 AM]
Another idea which might be worth looking at is modelling the player: trying to get a good idea of what the player is most likely to do.

For instance, in some games, a player will have certain location/weapon preferences, or at a higher level, they may have strategies which they frequently employ (ground attack immediately follows an air strike 80% of the time type thing).

Once such a model of the player is built, there are a couple of different applications which spring to mind:
1) Have the AI target the weakness
2) Emphasise an aspect of the game which the player enjoys (if the player has an irrational desire for a certain weapon, instead of punishing them, change the game to make that weapon a focus of the game - make it cheaper, give more ammo, etc.)
3) Persistence in MMOGs - let the computer play the way the player played while they are away.
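The air-strike example is essentially a first-order statistics problem, so here's a minimal sketch of such a player model: count what the player does after each action and predict the most frequent follow-up. All the action names are made up for illustration.

```python
# First-order frequency model of the player: record which action
# follows which, then predict the most frequent follow-up.

from collections import defaultdict

class PlayerModel:
    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(int))
        self.prev = None

    def observe(self, action):
        """Record one player action as it happens."""
        if self.prev is not None:
            self.counts[self.prev][action] += 1
        self.prev = action

    def predict(self, action):
        """Most frequent action seen after `action`, or None if unseen."""
        followers = self.counts.get(action)
        if not followers:
            return None
        return max(followers, key=followers.get)

model = PlayerModel()
for a in ["air_strike", "ground_attack", "air_strike", "ground_attack",
          "air_strike", "hold", "air_strike", "ground_attack"]:
    model.observe(a)

print(model.predict("air_strike"))   # "ground_attack" (3 of 4 follow-ups)
```

A real model would want probabilities rather than a single prediction (so the AI can hedge), and longer histories than one action, but the counting core is the same.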

Just an idea. It has possibly already been done, too.

Trying is the first step towards failure.

