
NN questions

I'd like to design a NN AI for an FPS I'm making, and there are certain specific features I would like it to have. Although I know exactly how NNs work and how to implement them in software, I've never actually done it, whether for game AI or otherwise, so my questions mostly concern the feasibility of the features I want.

As for inputs, I would like the NN to hear sounds, such as explosions, footsteps, etc., coming from a certain direction and at a certain volume. I would also like it to consider things it can "see", such as items and other players; of course, making it "see" only certain things will mean tying the AI into the occlusion culling portion of the code. I'd also like it to build a map of the level, complete with item positions, so that if it is low on health, it will go to the health, and so on. This map will not be part of the NN itself, but will work with it closely.

Since I'm trying to make the game code as modular as possible, I would like the NN to have, as outputs, all the actions a human player has. That is, the NN could move forward, backward, left, right, jump, etc., and rotate left/right and up/down. So, in theory, I could plug the neural network outputs into the same code that handles human input.

To someone who is at least somewhat familiar with NNs, does this sound feasible? Would the network have to be so big or complex that it would not be practical to run it every frame, or even every 5 or 10 frames? And even if it would be fast enough, are some of the above-mentioned features just not possible with a NN (like I said, I have never actually made one)?
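Roughly the kind of hookup I'm imagining, as a sketch only (PlayerCommand, NeuralNet, and the output ordering are placeholder names, not code I've written):

```cpp
// Sketch only: route the NN outputs through the same command structure
// the engine already fills from keyboard/mouse input. All names here
// (PlayerCommand, NeuralNet, the output ordering) are placeholders.
#include <vector>

struct PlayerCommand {
    float forward = 0.0f;   // -1..1 : back/forward
    float strafe  = 0.0f;   // -1..1 : left/right
    float yaw     = 0.0f;   // rotate left/right
    float pitch   = 0.0f;   // rotate up/down
    bool  jump    = false;
    bool  fire    = false;
};

class NeuralNet {
public:
    // feed the sensory inputs, get one output per possible action
    // (whatever forward-pass implementation ends up being used)
    std::vector<float> Evaluate(const std::vector<float>& inputs);
};

PlayerCommand BuildBotCommand(NeuralNet& net, const std::vector<float>& senses)
{
    std::vector<float> out = net.Evaluate(senses);  // assumes at least 6 outputs
    PlayerCommand cmd;
    cmd.forward = out[0];
    cmd.strafe  = out[1];
    cmd.yaw     = out[2];
    cmd.pitch   = out[3];
    cmd.jump    = out[4] > 0.5f;   // threshold the binary actions
    cmd.fire    = out[5] > 0.5f;
    return cmd;                    // handed to the same code that processes
}                                  // a human player's command each tick
```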
Hi,

I guess it is certainly feasible. The only problem is that the neural network will be gigantic. Any idea how many inputs you would need for sound and vision? It will not be possible to run it in real time on a standard PC and run a nice 3D engine at the same time. Of course, you should experiment. Maybe small ANNs can be used for specific tasks, like deciding which strategy to use based on the current score and some history info about the opponents. The strategies could then be predefined scripts.

I think it would be more interesting to develop such an AI and see what it can do than to hope it will improve the gameplay of an FPS. Genetic algorithms, for example, are better suited to this problem, because you can let them build AI scripts offline and then use those in your game. You will get a reasonable result much faster than with ANNs, and without any performance issues. You can even combine GAs and ANNs: let the GA find out which ANN is best suited to your problem.
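Just to illustrate what I mean by combining them, here is a rough sketch of a GA evolving ANN weight vectors offline; the genome layout, the rates, and the fitness hook are all assumptions, not a real implementation:

```cpp
// Rough sketch: evolve flattened ANN weight vectors with a simple GA.
// The fitness function is caller-supplied (e.g. run the candidate net
// as a bot in an offline simulation and score how well it did).
#include <algorithm>
#include <cstddef>
#include <cstdlib>
#include <functional>
#include <vector>

struct Genome {
    std::vector<float> weights;   // flattened ANN weights
    float fitness = 0.0f;
};

void EvolveGeneration(std::vector<Genome>& population,
                      const std::function<float(const std::vector<float>&)>& evaluate)
{
    if (population.size() < 2) return;

    for (Genome& g : population)
        g.fitness = evaluate(g.weights);

    // fittest first
    std::sort(population.begin(), population.end(),
              [](const Genome& a, const Genome& b) { return a.fitness > b.fitness; });

    // keep the top half, refill the rest with mutated crossovers of survivors
    const std::size_t survivors = population.size() / 2;
    for (std::size_t i = survivors; i < population.size(); ++i) {
        const Genome& mum = population[std::rand() % survivors];
        const Genome& dad = population[std::rand() % survivors];
        Genome& child = population[i];
        child.weights = mum.weights;
        for (std::size_t w = 0; w < child.weights.size(); ++w) {
            if (std::rand() % 2) child.weights[w] = dad.weights[w];        // uniform crossover
            if (std::rand() % 100 < 5)                                     // ~5% mutation rate
                child.weights[w] += (std::rand() / float(RAND_MAX) - 0.5f) * 0.2f;
        }
    }
}
```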

You should also take into account that ANNs have to learn, and are plain stupid at the start. Will you ship the game with an ANN already trained to a basic level?

I hope you have some spare time,
you will need it

david.


For sight, the NN would need as many inputs as there are players in the game minus one (it doesn't need to see itself), plus the number of items. For sounds, it would need as many inputs as there are players minus one, multiplied by the number of sounds a player can make at one time (e.g. a player can shoot and taunt at the same time).

In a 12-player game with, say, 20 items in the level, the number of inputs would be at least 42 (just for sensory input). However, not all of these inputs would be used at once: unless the NN could see every other player, see every item, and every player were making a sound, it wouldn't have all of its inputs "activated". Even so, is this "gigantic"? Once again, pardon my naiveté.
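To make the counting concrete, this is roughly how I picture a fixed-size sensory input vector for 12 players and 20 items; occluded players/items and silent players just contribute zeros (the names and layout are placeholders):

```cpp
// Back-of-envelope sketch of the sensory input layout described above.
// 11 sight inputs for other players + 20 for items + 11 for heard sounds = 42.
#include <vector>

const int kMaxPlayers      = 12;
const int kMaxItems        = 20;
const int kSoundsPerPlayer = 1;   // e.g. only the loudest sound per player

std::vector<float> BuildSensoryInputs(int selfIndex,
                                      const std::vector<float>& playerVisibility, // 0 if occluded
                                      const std::vector<float>& itemVisibility,   // 0 if unseen
                                      const std::vector<float>& soundVolume)      // 0 if silent
{
    std::vector<float> in;
    in.reserve((kMaxPlayers - 1) + kMaxItems + (kMaxPlayers - 1) * kSoundsPerPlayer); // = 42

    for (int p = 0; p < kMaxPlayers; ++p)
        if (p != selfIndex) in.push_back(playerVisibility[p]);   // sight: other players
    for (int i = 0; i < kMaxItems; ++i)
        in.push_back(itemVisibility[i]);                          // sight: items
    for (int p = 0; p < kMaxPlayers; ++p)
        if (p != selfIndex) in.push_back(soundVolume[p]);         // hearing: per-player volume
    return in;
}
```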

If a whole lot of input really were present, the NN would take more time to make decisions, and the game program would in turn give it less time. It seems to me that this would create an effect in which the NN appears to get confused when a lot is happening around it (much like a human would).

It would need other inputs as well, though (perhaps a lot). For example, it would need to know its position and orientation, its currently selected weapon, its current action (i.e. jumping, crouching), and possibly other physics-related state.

Of course, I'm sort of talking out of my a$$ here, because I don't actually know whether these are practical applications for NNs. The main reason I wanted to implement the AI with NNs is the simple fact that they learn, so given any environment and any situation (realistically speaking), they can adapt.

I'm still in school, and this game is an undergraduate research project I'm working on with another student, so unless some company wants to contract us or something, it will never get shipped. But even though it's not a commercial game, I still want to take it as seriously as possible given my schedule and skill level, and I currently have every part of the game designed except the AI.
For starters, see my website, then go to www.ai-depot.com and read Alex's bot navigation articles. By then you will have a much better idea of what neural nets are about and what you can expect to achieve with them.

Have fun!



ai-junkie.com
I am already familiar with your site; it is the reason I know most of what I know about NNs. Thank you very much for putting the time and effort into it.

Also, thanks for referring me to the articles on ai-depot. They are exactly what I am looking for!
quote:
jorgander wrote:

Although I know exactly how NNs work and how to implement them in software, I've never actually done it


Your idea is rather interesting, but if you have never actually coded a NN, I would say the place to start is with simple things. Even a simple net for collision avoidance (or anything) can be a little tricky to get working. Your real problem is, as mentioned, the number of inputs you need (I really don't think that even 250 inputs will do.. :\ ), but training such a network will also be quite a challenge.

To sum up: start soft, with some simple NNs that learn simple things...

Have you considered the use of genetic algorithms? They can be made rather quick if you code them carefully... and they also learn.





// Javelin
-- Why do something today when you can do it tomorrow... --
// Assumption is the mother of all fuckups...
I also consider training to be a serious problem.
IMO it's a much simpler and well-documented approach to use a combination of a state machine, a decision tree, and an ANN. The state machine could control general states (e.g. "wounded", "searching for enemy", etc.), while the decision tree could handle state transitions.

The ANN could be used to model high-level 'thinking' like decision planning and simple tactics. You would need only a few inputs (e.g. a success measure for the last plan and the actions taken) and the system would train itself over time.
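As a rough illustration of that split (the states, inputs, and weights below are placeholders; a small trained ANN would take the place of the hand-written scorer):

```cpp
// Sketch: a plain state machine carries the general behaviour, and a tiny
// scorer (here a hand-weighted linear layer standing in for a small ANN)
// only rates which state to enter next from a few high-level inputs.
#include <array>

enum class BotState { Patrol, Attack, Retreat, GrabHealth };

// Inputs: { health 0..1, ammo 0..1, enemyVisible 0/1, lastPlanSuccess 0..1 }
// One weight row per candidate state; the numbers are placeholders only.
float ScoreState(BotState candidate, const std::array<float, 4>& in)
{
    static const float w[4][4] = {
        { 0.2f,  0.2f, -0.5f,  0.1f },  // Patrol
        { 0.3f,  0.6f,  0.8f,  0.3f },  // Attack
        {-0.8f, -0.4f,  0.5f, -0.2f },  // Retreat: penalty shrinks as health drops
        {-0.9f,  0.0f, -0.2f,  0.0f },  // GrabHealth
    };
    const float* row = w[static_cast<int>(candidate)];
    float sum = 0.0f;
    for (int i = 0; i < 4; ++i) sum += row[i] * in[i];
    return sum;
}

BotState ChooseNextState(const std::array<float, 4>& in)
{
    const BotState options[] = { BotState::Patrol, BotState::Attack,
                                 BotState::Retreat, BotState::GrabHealth };
    BotState best = options[0];
    float bestScore = ScoreState(best, in);
    for (BotState s : options) {
        float score = ScoreState(s, in);
        if (score > bestScore) { best = s; bestScore = score; }
    }
    return best;   // each state then runs its own scripted behaviour
}
```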

just my 0.02€,
pat
javelin:

I don't think my idea would need as many as 250 inputs. Sure, it may need a lot relative to what NNs usually use, but I don't know about 250. But something I would like to know is: how many inputs is "too many", given restrictions such as a ceiling on the number of hidden layers, simple step functions, and the processing time given to the NN (i.e. lots of other things will be using the processor, not just the NN)?

The reason I ask is that I've recently thought of other uses for NNs in relation to bot control, namely skeletal character animation (something else I'm into). I won't bore everyone with an extensive post; suffice it to say it would be trained heavily with a GA, and it would depend on how many things the NN can consider in a given iteration.

I intend to start simple with something like obstacle avoidance. With every new addition to my game, I time exactly how long it takes to see whether it is practical or not (e.g. I recently implemented soft skinning), and the bot control will be no exception. fup pointed me to ai-depot, which I had not previously known about and which has proved invaluable in learning how to go about creating and training different aspects of the AI.
Hi

quote:
something I would like to know is: how many inputs is "too many", given restrictions such as a ceiling on the number of hidden layers, simple step functions, and the processing time given to the NN (i.e. lots of other things will be using the processor, not just the NN)?


As for hidden layers, there is very seldom any use for more than one.

Even though the net itself can be fast, performance is very dependent on your implementation (and skill). I have never coded a NN as part of a bigger program, so I don't know if it will work in your case, but be prepared that it can take much more time than you think. The only "realtime" NN I've created was a simple 2D collision-avoidance system, and there I got maybe 100-200 net updates per second. That net had 9 inputs and one hidden layer.
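For reference, the recall (forward) pass of a net like that is nothing more than this kind of loop; the layer sizes and weight layout here are just an example, not my actual code:

```cpp
// Sketch of a forward pass for a small feedforward net with sigmoid units.
#include <cmath>
#include <vector>

struct Layer {
    int inputs, neurons;
    std::vector<float> weights;  // neurons * (inputs + 1), last entry per row is the bias
};

static float Sigmoid(float x) { return 1.0f / (1.0f + std::exp(-x)); }

std::vector<float> Forward(const Layer& layer, const std::vector<float>& in)
{
    std::vector<float> out(layer.neurons);
    for (int n = 0; n < layer.neurons; ++n) {
        const float* w = &layer.weights[n * (layer.inputs + 1)];
        float sum = w[layer.inputs];                 // bias term
        for (int i = 0; i < layer.inputs; ++i)
            sum += w[i] * in[i];
        out[n] = Sigmoid(sum);
    }
    return out;
}

// e.g. a 9-6-2 net: outputs = Forward(outputLayer, Forward(hiddenLayer, senses));
```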




// Javelin
-- Why do something today when you can do it tomorrow... --
// Assumption is the mother of all fuckups...
With my simulator I get 6-16 MCPS (million connections per second, in the recall phase) on my 466 Celeron, using simple feedforward nets with some hidden layers (the rate depends on the size of the net, hence the 6-16 MCPS range). So count your number of connections and you can get a rough idea of how long one evaluation of the net takes and how many evaluations are possible per second.
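For example, counting the connections of a fully connected feedforward net is just the following (biases left out of the count; the 42-20-10 layout is only an example):

```cpp
// Connections = sum over layers of inputs * neurons.
// A 42-20-10 net: 42*20 + 20*10 = 1040 connections, so at ~6 MCPS that is
// roughly 6,000,000 / 1040, i.e. about 5,700 full evaluations per second.
#include <cstddef>
#include <vector>

long CountConnections(const std::vector<int>& layerSizes)  // e.g. {42, 20, 10}
{
    long total = 0;
    for (std::size_t i = 1; i < layerSizes.size(); ++i)
        total += static_cast<long>(layerSizes[i - 1]) * layerSizes[i];
    return total;
}
```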

@$3.1415rin

