
NN for flightsim - will it work, unusual architecture?

Started by
24 comments, last by kindfluffysteve 21 years, 6 months ago
My architecture for the network can best be described as a mess. Within a 3D volume (100 units x 100 x 100), there are, at the moment, 1000 neurons (with up to 4 connections each) and a dendrite reach of about 40 units to impose structure. It's all genetically programmed. Plus, there is virtual biochemistry and the opportunity to change structure through dendrite death and regrowth. I'm now training this network as my AI in a flight sim. The inputs to the network consist of things like velocity and position, as well as target information. The outputs are flight controls... Do you think this will work? Have others tried this? I chose to guess at a population of 50.
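For reference, a spatially embedded network like the one described could be generated along these lines. This is only a sketch: the uniform placement, the random wiring rule, and all names are assumptions, not the poster's actual code.

```python
import random

# Parameters taken from the post: a 100x100x100 volume, 1000 neurons,
# up to 4 connections each, dendrite reach of about 40 units.
VOLUME = 100.0
NUM_NEURONS = 1000
MAX_CONNECTIONS = 4
DENDRITE_REACH = 40.0

def distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def build_network(rng):
    # scatter neurons uniformly in the volume (placement rule is assumed)
    positions = [tuple(rng.uniform(0, VOLUME) for _ in range(3))
                 for _ in range(NUM_NEURONS)]
    connections = []
    for i, pos in enumerate(positions):
        # candidate targets: other neurons within dendrite reach
        reachable = [j for j, q in enumerate(positions)
                     if j != i and distance(pos, q) <= DENDRITE_REACH]
        rng.shuffle(reachable)
        connections.append(reachable[:MAX_CONNECTIONS])
    return positions, connections

positions, connections = build_network(random.Random(42))
```

The reach limit is what imposes locality: no connection can span more than 40 units, so structure (and any layering) has to emerge from geometry.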
I don't suppose anybody here knows how long a genetic algorithm will take before you can tell whether it's going to work or not.

I'm leaving my agents to do their thing for 35 seconds at the moment, so each generation takes about an hour. How long should I leave them to ferment before I start asking questions?
This won't help you, but I'm curious about your project, as I'm thinking about doing something similar. What are your agents doing in those 35 seconds? How is their fitness measured?
You are not the one beautiful and unique snowflake who, unlike the rest of us, doesn't have to go through the tedious and difficult process of science in order to establish the truth. You're as foolable as anyone else. And since you have taken no precautions to avoid fooling yourself, the self-evident fact that countless millions of humans before you have also fooled themselves leads me to the parsimonious belief that you have too.--Daniel Rutter
The fitness is measured like this:

A small positive score is given for changeable outputs - the idea here is to train them to actually jitter about rather than glide.

There is a large penalty for hitting the ground (hopefully, after they learn to jitter, they will then start to avoid the ground).

3 planes are positioned somewhat randomly. 1 is friendly; 2 are foes, controlled by a basic if...then sort of AI.

There is no gunfire; however, there is a small reward for flying close to the friendly.

There is a large reward for flying close to and at the enemy (if it's in a sort of view cone).

There is a penalty for being near an enemy and within its view cone.
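A minimal sketch of those fitness terms, scored once per simulation step. All weights, the 20-unit "close" range, and the 30-degree view-cone half-angle are illustrative assumptions; the post doesn't give actual values.

```python
import math

# Illustrative weights and thresholds (assumptions, not the poster's values)
JITTER_REWARD = 0.1
GROUND_PENALTY = 100.0
FRIENDLY_REWARD = 0.5
ATTACK_REWARD = 2.0
EXPOSED_PENALTY = 5.0
CLOSE_RANGE = 20.0
CONE_COS = math.cos(math.radians(30))  # assumed view-cone half-angle

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def in_view_cone(viewer_pos, viewer_fwd, target_pos):
    dx = [t - v for t, v in zip(target_pos, viewer_pos)]
    d = math.sqrt(sum(c * c for c in dx)) or 1e-9
    return sum(f * c for f, c in zip(viewer_fwd, dx)) / d >= CONE_COS

def fitness_step(agent, friendly_pos, enemies):
    score = 0.0
    if agent["outputs"] != agent["prev_outputs"]:
        score += JITTER_REWARD           # reward jitter over passive gliding
    if agent["pos"][2] <= 0:
        score -= GROUND_PENALTY          # large penalty for hitting the ground
    if dist(agent["pos"], friendly_pos) < CLOSE_RANGE:
        score += FRIENDLY_REWARD         # small reward near the friendly plane
    for e in enemies:
        if (dist(agent["pos"], e["pos"]) < CLOSE_RANGE
                and in_view_cone(agent["pos"], agent["fwd"], e["pos"])):
            score += ATTACK_REWARD       # reward closing on an enemy in view
        if (dist(e["pos"], agent["pos"]) < CLOSE_RANGE
                and in_view_cone(e["pos"], e["fwd"], agent["pos"])):
            score -= EXPOSED_PENALTY     # penalty for sitting in an enemy cone
    return score
```

Summing this per step over the 35-second episode gives one possible total fitness for a genome.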

Damn, what I hate is bugs. After 7400 seconds - what happens? A software error. Obviously I can just run it again, but it's difficult to find an error that only occurs after several hours.
Interesting. I was going to do something similar, but in several stages. In other words, I'd make the fitness test initially look only at those first two factors (basic survival). Then, when there was some basic flying behaviour happening, I'd introduce the other constraints. I have no idea if that would be a better system or not, but it seems to me that it might be easier to make predictions when there are only a few factors.

By the way, you might want to consider fuel efficiency as a basic survival factor as well.
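The staged idea amounts to gating fitness terms on the generation number. A sketch; the generation thresholds and term names are made up for illustration:

```python
def staged_fitness(generation, terms):
    """Curriculum-style fitness: later stages add more objectives.

    terms: dict of per-term scores, e.g. {"jitter": ..., "ground": ...}.
    The thresholds (50, 100) are illustrative assumptions.
    """
    score = terms["jitter"] + terms["ground"]        # stage 1: basic survival
    if generation >= 50:                             # stage 2: combat terms
        score += terms["friendly"] + terms["attack"] + terms["exposed"]
    if generation >= 100:                            # stage 3: fuel efficiency
        score += terms["fuel"]
    return score
```

A smoother alternative would be to ramp the later terms in gradually (multiply by a weight that grows with generation) rather than switching them on abruptly.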
An even better measure of the pilot's skill is the total energy of the aircraft at any given time (potential + kinetic), or the rate of energy loss compared to the opponent's. In real combat situations, the pilot with the most energy at hand generally wins. Dogfighting is about conserving your own energy while forcing your opponent to dissipate theirs. There was a very famous dogfight during WWII between a German and a US ace (the names elude me at the moment). Neither could get an advantage over the other, and eventually the two were flying below ground level in a large quarry. Finally they both conceded that there was not going to be a winner and flew off in opposite directions. The rest of the story has it that they met decades later: the German was recounting the story at a function, and the other pilot happened to overhear the conversation! Freaky!

Anyway, you could compute the energy of each aircraft at any given time and compare the change over a particular interval between the two aircraft. I'm sure you could come up with a suitable ratio and scaling function to apply the result to an objective function measuring pilot fitness.
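The energy measure is just total mechanical energy, E = m·g·h + ½·m·v². A sketch of the comparison, with masses and any final scaling left as assumptions:

```python
G = 9.81  # gravitational acceleration, m/s^2

def total_energy(mass, altitude, speed):
    """Total mechanical energy: potential + kinetic."""
    return mass * G * altitude + 0.5 * mass * speed ** 2

def energy_advantage(own_before, own_after, foe_before, foe_after):
    """Compare energy change over an interval between the two aircraft.

    Positive when we lose less energy (or gain more) than the opponent.
    How this feeds the objective function (ratio, scaling) is left open,
    as the post suggests.
    """
    return (own_after - own_before) - (foe_after - foe_before)
```
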

Cheers,

Timkin
quote: Original post by kindfluffysteve
My architecture for the network can best be described as a mess.

Within a 3D volume (100 units x 100 x 100), there are, at the moment, 1000 neurons (with up to 4 connections each) and a dendrite reach of about 40 units to impose structure. It's all genetically programmed.

Plus, there is virtual biochemistry and the opportunity to change structure through dendrite death and regrowth.

I'm now training this network as my AI in a flight sim.

The inputs to the network consist of things like velocity and position, as well as target information.

The outputs are flight controls...

Do you think this will work? Have others tried this? I chose to guess at a population of 50.


You say your architecture is a mess and ask if it will work? You'll have to try it out yourself... I've never used a net that large myself, but I guess that unless you feed it a horrible amount of data it'll fail eventually (overlearning). Why did you build such a net? Debugging it will be painful if not impossible (at least for me).

And don't use position as an input, unless you want the pilot to do the same things whenever he arrives at that position. Hehehe... relative distances might be better inputs.
"You say your architecture is a mess and ask if it will work? You'll have to try it out yourself... I've never used a net that large myself, but I guess that unless you feed it a horrible amount of data it'll fail eventually (overlearning). Why did you build such a net? Debugging it will be painful if not impossible (at least for me)."

I felt that nature was free to do what it liked, and wondered whether giving it maximum resources for mess and confusion would give a GA more opportunity to exploit.

"And don't use position as an input, unless you want the pilot to do the same things whenever he arrives at that position. Hehehe... relative distances might be better inputs."

Yep, position was just for navigation purposes. Other inputs include a reference axis, the relative positions of targets (via the output nodes, it can cycle through its targets) and the converted 'vector space' axes of the target (its forward vector and its upward vector).
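Inputs of that kind amount to expressing the target's position in the agent's own frame. A sketch of the transform, assuming the agent's forward and up vectors are orthonormal (the function names are illustrative):

```python
def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def target_in_local_frame(agent_pos, fwd, up, target_pos):
    """Express a world-space target position relative to the agent's axes."""
    right = cross(fwd, up)                                # third basis vector
    rel = tuple(t - p for t, p in zip(target_pos, agent_pos))
    # project the relative offset onto the agent's axes
    return (dot(rel, right), dot(rel, up), dot(rel, fwd))
```

Because the result is relative, the network sees the same input pattern for "enemy 10 units ahead" anywhere in the volume, which is exactly what the absolute-position input fails to provide.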

Something I'm mulling over at the moment: right now I just give it goes like this - model the physics over dt, then do a neural cycle.

If layering forms in the network, it will take several cycles for any inputs to be decided upon.
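One way to address that propagation delay is to run a fixed number of neural cycles per physics tick, so activations have time to traverse any layering that forms. A sketch; the cycle count and the callback interface are illustrative, not the poster's design:

```python
def run_episode(step_physics, sense, neural_cycle, apply_controls,
                duration=35.0, dt=0.05, cycles=4):
    """Run one evaluation episode: physics step, then several neural cycles.

    cycles=4 is an assumption: enough for a signal to cross roughly a
    4-connection-deep path before the controls are read back.
    """
    steps = int(round(duration / dt))
    for _ in range(steps):
        step_physics(dt)               # model physics over dt
        inputs = sense()               # read velocity, target info, etc.
        for _ in range(cycles):        # let activations propagate through
            neural_cycle(inputs)       # any layering in the network
        apply_controls()               # write the flight-control outputs
    return steps
```

The trade-off is cost: four cycles per tick quadruples the network work per generation, which matters when a generation already takes about an hour.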

quote:
I felt that nature was free to do what it liked, and wondered whether giving it maximum resources for mess and confusion would give a GA more opportunity to exploit.


That's true, but nature has also had millions of years to find the better-working NNs, and it could touch lots of other parameters too. And these artificial NNs are much simpler than the real ones... but if you get it working, it'll be great!

I just have a personal problem with big NNs, because the messier and bigger the network, the harder it is to find out what it really does. It could work like a dream in my tests, but some situations might make it do, uh... different things. It's like a bomb waiting some time before it decides it wants to self-destruct. Not really something I want to rely on.
Your neural network is almost certainly far too large. You are also being far too ambitious. Make your network much smaller and simplify your fitness function. Try to attain a much more modest goal before increasing the complexity.

Additionally, your starting population is way too small for chromosomes of that size.


My website (link in sig); the MacFly homepage found here:

http://www.kyb.tuebingen.mpg.de/bu/people/titus/MCFLY/mcfly.html

and the tutorials found at

http://www.ai-depot.com

may help provide some answers for you.



ai-junkie.com

[edited by - fup on January 7, 2003 1:51:03 PM]

