
Accelerated AI boards.

Started by
42 comments, last by GBGames 22 years, 5 months ago
quote: Original post by _Orion_
The basic problem comes down to the fact that graphics are about as low level as you can get. The world commands the graphics. AIs command the world.

How does the AI command the world?

quote: While pathfinding and similarly often-used algorithms could be very useful, the level of complexity in game world design makes it difficult to generalize the elements in said game world. My point is this: game world design changes dramatically from game to game, and I'm afraid that one pathfinding algorithm which works for one game wouldn't work for another due to major design differences.

You're quite right that the pathfinding algorithm will depend on the structure of the world, including any or all optimizations possible on that algorithm. If you turn the question around, however, you see the mediating factor: if there were an AI acceleration system out there, might it not be worth using sans optimizations? Graphics programmers already bend their code to the peculiarities of OGL/DirectX data structures; is there a solid reason to believe that AI programmers are incapable of doing the same? Moreover, is it possible to offload even just the drudge work (pathfinding, minimaxing, etc.) to a board and see a meaningful result?
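That drudge work is well defined enough to sketch. Here is a minimal A* pathfinder on a 4-connected grid — the grid, unit costs, and Manhattan heuristic are illustrative assumptions, not any board's actual interface:

```python
import heapq

def astar(grid, start, goal):
    """Minimal A* on a 4-connected grid; 1 = wall, 0 = free."""
    def h(a, b):  # Manhattan-distance heuristic
        return abs(a[0] - b[0]) + abs(a[1] - b[1])

    rows, cols = len(grid), len(grid[0])
    frontier = [(h(start, goal), start)]   # priority queue of (f-score, cell)
    came_from = {start: None}
    cost = {start: 0}
    while frontier:
        _, cur = heapq.heappop(frontier)
        if cur == goal:
            # Walk parent links backwards to reconstruct the path.
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols and grid[nxt[0]][nxt[1]] == 0:
                new_cost = cost[cur] + 1
                if nxt not in cost or new_cost < cost[nxt]:
                    cost[nxt] = new_cost
                    came_from[nxt] = cur
                    heapq.heappush(frontier, (new_cost + h(nxt, goal), nxt))
    return None  # no route exists

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
path = astar(grid, (0, 0), (2, 0))
```

The point is that the inner loop is pure bookkeeping over a graph — exactly the kind of repetitive work a board could conceivably run, with the game supplying only the graph and costs.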

Hopefully that's comprehensible.

ld
No Excuses
Or ask yourself this... if there were a game-AI board on the market that could implement algorithms x,y & z, do you think we would see more developers using the board for algorithms x,y & z in their games or would they persist in writing their own?

Timkin
If a new hardware AI acceleration board came out, Timkin, then I think it's safe to say that developers would build for it if and only if it caught on.

We tend to divide games into modules (e.g. graphics, physics, networking, AI, etc.). It's important to realize which of these modules can be offloaded and which cannot. While you're thinking about this, remember one thing: what good is the CPU after you've offloaded all its work to a bunch of so-called acceleration cards? Where's the break-even point?

Example 1:
The transmission of packets over a communication channel is far below the application layer in any standard networking model. The way you put bytes on a wire doesn't change whether you're downloading porn or playing Doom.

Example 2:
Conceptually, computer graphics is simple: draw stuff using a rectangular grid of pixels as your canvas. There are many readily available algorithms to accomplish this, and many of the basic features are standardized. If you give me an arbitrary 3D mesh, I can draw it on the screen with off-the-shelf code. Can I animate it? Of course not. Why not?

Therein, I say, lies the problem with dedicated AI hardware. I will, however, finish off by saying that there is a difference between hardware accelerated AI and hardware dedicated to AI.
------When thirsty for life, drink whisky. When thirsty for water, add ice.
Hardware dedicated to AI sounds like just making use of a second CPU or some dedicated CPU to do the work while the main CPU does other things.
That would probably be the best way, since you have the freedom to implement whatever AI algorithm you want while still offloading the AI from the main CPU, allowing for other things to be done.
------------------------- GBGames' Blog: An Indie Game Developer's Somewhat Interesting Thoughts | Staff Reviewer for Game Tunnel
I think the building block for an AI board would be something with a good chunk of RAM and a very fast, highly math-optimized processor. Something with an incredible ability to process massive numbers of floats and doubles, plus hardware optimization for matrix operations. Even if the hardware optimization were limited to a small number of NxN matrices, it would be a step in the right direction.

If something like this were to come about I think it would go the direction that current graphics boards are going, the direction of programmable pipelines.
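The primitive such a board would accelerate can be sketched in software. Below is a fixed-size 4x4 matrix multiply applied over a batch; the batch size and the matrices themselves are arbitrary illustrations of the kind of bulk float work described above:

```python
def mat_mul4(a, b):
    """Multiply two 4x4 matrices stored as nested lists -- the fixed-size
    primitive a hypothetical AI board might implement in silicon."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

# Build a 4x4 identity and a uniform-scale matrix as sample operands.
identity = [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]
scale2 = [[2.0 * identity[i][j] for j in range(4)] for i in range(4)]

# A batch of 1,000 pairs stands in for the per-frame workload a board
# would chew through in one dispatch instead of a Python loop.
batch = [(scale2, identity)] * 1000
results = [mat_mul4(a, b) for a, b in batch]
```

A real board (or a modern SIMD unit) would execute the whole batch in parallel; the CPU loop here only shows the shape of the workload.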

The real problem out there is that hardware is evolving just like everything else in the world... least important things first. This is because the least important things just don't take as long to perfect.

Think of it like this: input->ai->world->graphics&sound is the order of importance in a game, and the order of evolution is graphics&sound->world->ai->input.

Now let's equate that to the real world: input(senses) -> ai(brain) -> world(physical adaptations) -> graphics&sound(visual appearance of physical adaptations).

The beauty of this model is that AI is next in line to evolve. Graphics, Sound & Geometry have all reached the point where people seem to think in terms of "Well, the next chip will push more polys and handle more textures" and the rate of innovation is slipping.
Good call.
I remember when I had one of those N64 vs Playstation debates with some friends in high school, and I was all about Nintendo then.
One of my friend's arguments was that now that the N64 brought 3D to the consoles, they can't do anything else besides that.
One, that is a weak argument FOR the Playstation.
Two, he does have a point about innovation.
After 3D, there really can't be much more besides it. I mean, they can make virtual reality stuff, but they will have to make it look like real 3D instead of just putting a screen in front of your eyes.
As for sound, I don't think there is much difference between the Sound Blaster Live and the Audigy cards. I mean, it's nice that the board supports sound warping and such, but in general, I won't notice a thing.
As for the AI board using highly optimized processors and such... I think that is about all you can do without locking in certain algorithms and preventing innovative AI from being developed.

How about this for graphics card "innovation"? Along with the regular features, how about voxel support in hardware? It wouldn't necessarily change how the rest of the polygon-based stuff is rendered, but it would allow for games that at least look different.
Off topic but still.
------------------------- GBGames' Blog: An Indie Game Developer's Somewhat Interesting Thoughts | Staff Reviewer for Game Tunnel
quote: Original post by Graylien
If a new hardware AI acceleration board came out, Timkin, then I think it's safe to say that developers would build for it if and only if it caught on.

That's so very sadly true. In fact, you'd probably need a trickle-down system from commercial AI applications to consumer-level (as happened initially with most graphics board tools).


quote: It's important to realize which of these modules can be offloaded and which cannot. While you're thinking about this, remember one thing: what good is the CPU after you've offloaded all its work to a bunch of so-called acceleration cards? Where's the break-even point?

I think that's an implied point in the thread. I, for example, would say that setup, input, device handling and OS coordination keep the CPU plenty busy. AI and Physics both have well-defined structures which can be mathematically described, and hardware implementations thereof are thus desirable. I think in a few years one or the other will probably start to appear on next-gen consoles, and shortly thereafter the other will likely follow. My bets are on AI as frontrunner.


I will, however, finish off by saying that there is a difference between hardware accelerated AI and hardware dedicated to AI.
I thought I agreed, but then I reread. What in fact is the difference? Regardless of the hardware, if it's doing acceleration, it's very likely useful primarily for the task of acceleration.

interesting points, though.
ld
No Excuses
quote: Original post by liquiddark
AI and Physics both have well-defined structures which can be mathematically described, and hardware implementations thereof are thus desirable.


I could easily imagine a physics API and the associated hardware. It's important, however, to note that the physics used in games is usually based, more or less, on rules that haven't changed in a very long time (e.g. Newtonian Mechanics). The majority of the work would go into defining a proper API; I think that a standardized API would do wonders for game physics (the modern CPU is quite a powerful beast when used properly, but more on that below).
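Those unchanging rules are simple enough to sketch. Here is one semi-implicit Euler step of Newton's second law — the function name, signature, and the falling-ball setup are purely illustrative, not a proposal for the API itself:

```python
def step(pos, vel, force, mass, dt):
    """One semi-implicit Euler step of Newton's second law, F = m*a.
    A standardized physics API would expose primitives of this shape."""
    ax, ay = force[0] / mass, force[1] / mass
    vx, vy = vel[0] + ax * dt, vel[1] + ay * dt   # update velocity first...
    px, py = pos[0] + vx * dt, pos[1] + vy * dt   # ...then position from new velocity
    return (px, py), (vx, vy)

# Drop a 1 kg ball from 100 m under gravity for one second in 0.01 s steps.
pos, vel = (0.0, 100.0), (0.0, 0.0)
gravity = (0.0, -9.81)
for _ in range(100):
    pos, vel = step(pos, vel, gravity, 1.0, 0.01)
```

Because the underlying mechanics never change, the API surface is the hard part; the integrator itself is a dozen lines whether it runs on the CPU or on dedicated silicon.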

quote: Original post by liquiddark
I think in a few years one or the other will probably start to appear on next-gen consoles, and shortly thereafter the other will likely follow. My bets are on AI as frontrunner.


Why not just make dual CPU setups the norm and let things continue along their current path? This is much more practical, and by far more flexible, even for consoles.

quote: Original post by liquiddark
Regardless of the hardware, if it's doing acceleration it's very likely useful primarily for the task of acceleration.


The difference is a subtle one, but a very important one. Hardware dedicated to AI will occupy itself only with AI-related computations, but it can be general-purpose silicon. Hardware accelerated AI requires an etched silicon chip which can do nothing but AI.

Let me offer up an example. Imagine a dual CPU machine. CPU1 takes care of all the "usual" stuff (i/o, task switching, etc.). CPU2, however, can be used in "exclusive" mode in the same way that DirectInput allows you to use input devices. Imagine now a generalized physics API that "knows" that you've dedicated CPU2 for its use (in the same way that Direct3D "knows" how to send stuff to your graphics card). This is dedicated general-purpose hardware. A chess game might dedicate CPU2 to AI while a fluid dynamics simulation program might use CPU2 to numerically solve the Navier-Stokes equations in real time. Programmers retain the flexibility they need to implement proper game AI, and game players get better AI.
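The contract of that "exclusive" mode can be modeled without any real second CPU. The class and method names below are entirely hypothetical — no such API existed; this is a toy single-threaded model of the ownership rules only, not of the parallel execution:

```python
class DedicatedProcessor:
    """Toy model of the 'exclusive mode' idea: an API through which one
    processor is handed over wholesale to a single subsystem."""

    def __init__(self, name):
        self.name = name
        self.owner = None            # which subsystem owns this processor

    def acquire_exclusive(self, subsystem):
        """Claim the processor, DirectInput-style; only one owner at a time."""
        if self.owner is not None:
            raise RuntimeError(f"{self.name} already dedicated to {self.owner}")
        self.owner = subsystem

    def run(self, subsystem, task, *args):
        """Only the owning subsystem may submit work to this processor."""
        if self.owner != subsystem:
            raise RuntimeError(f"{self.name} is not dedicated to {subsystem}")
        return task(*args)           # in reality this would dispatch to CPU2

# A chess game dedicates CPU2 to its AI and submits an evaluation task;
# the placeholder "evaluator" just sums piece values.
cpu2 = DedicatedProcessor("CPU2")
cpu2.acquire_exclusive("chess-ai")
score = cpu2.run("chess-ai", lambda board: sum(board), [1, -2, 5])
```

A fluid dynamics app would call `acquire_exclusive("physics")` instead and submit solver steps; the ownership rule is what makes the hardware "dedicated" rather than merely shared.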

This, of course, is not the same as multithreading or parallel programming (3D programming, for instance, also uses two processors: one on your motherboard and the other the GPU on your video card).

Special add-on cards are expensive to make and the yield is not that high. I think that we will see multiple CPUs become much more common before anyone decides to try a hardware implementation of some subset of AI algorithms that, in the end, can never be general enough.
------When thirsty for life, drink whisky. When thirsty for water, add ice.
Here's my concern:

Graphics and sound are end products. Once the processing is done, it's done and there is output; nothing gets sent back. With AI and possibly physics, there is going to have to be data sent back to the system. So I don't have to worry about how I produce the graphics or the sound, because the system isn't the recipient of the output; the speakers and the video device are.

With physics and AI, you have to send the results of the function calls back to the caller. This has some nasty things associated with it. First, you've just eaten a whole lot of extra bandwidth on the bus. Second, it requires that you standardize the system, which many people will object to.

The thing here is that we can easily accelerate the periphery. To aid, I add a visual:

            Input
              |
    Game Logic/Mechanics
           /      \
     Graphics    Sound


Inside the game mechanics there would be another block that included AI, physics, rules: things that are part of the actual game. If you look at it, the things that are accelerated are the things that interact with hardware, not software. AI is completely surrounded by the game logic; hell, it is the game logic.

I'm not one for predicting, but I see graphics cards that do meshes and movement coming long before AI cards.

And on a note about secondary processors for doing AI work... If it isn't cost effective for normal processing, it isn't going to be effective for something that's only going to affect a small percentage of computers.

Orion
quote: Original post by _Orion_
And on a note about secondary processors for doing AI work... If it isn't cost effective for normal processing, it isn't going to be effective for something that's only going to affect a small percentage of computers.


Let me clarify, because I don't think I explained that part well. Dual CPU machines are wonderful when you're multitasking (and who isn't?). Dual CPU machines are not anything special any more, and it won't be long before they are commonplace.

Only in your game would one CPU be dedicated, if you so desired, to AI or physics or whatever. I am not saying that we should slap on extra CPUs and let game developers program towards them (that would be stupid and pointless).
------When thirsty for life, drink whisky. When thirsty for water, add ice.

This topic is closed to new replies.
