
Accelerated AI boards.

Actually, I wouldn't be surprised if we eventually saw graphics cards take on some basic physics as well. You'd have a card that could manipulate a world model and display it. Then, instead of the CPU sending information on which object positions to update and how (rotate, translate, etc.), you'd have the CPU passing info on just those exogenous events that are non-physics based. Think of it like Newton's first law: every object will continue to move with its current velocity unless acted upon by an external force. Unless otherwise influenced by an external factor, the GPU would propagate the motion of the object according to its physics engine, resolving all collisions, etc., and then display the results.
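
To make that concrete, here's a rough C++ sketch of the per-object step such a card might run on its own each frame. The struct and function names are purely illustrative, not any real card's API:

#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };

struct Body {
    Vec3 position;
    Vec3 velocity;   // carried forward untouched unless a force arrives
};

// The step the card could run by itself every frame: Newton's first law by
// default, with collision resolution layered on top of this basic propagation.
void integrate(std::vector<Body>& bodies, float dt)
{
    for (std::size_t i = 0; i < bodies.size(); ++i) {
        bodies[i].position.x += bodies[i].velocity.x * dt;
        bodies[i].position.y += bodies[i].velocity.y * dt;
        bodies[i].position.z += bodies[i].velocity.z * dt;
    }
}

// The CPU's only per-event job: report an exogenous impulse to the card.
void applyImpulse(Body& b, const Vec3& impulse, float invMass)
{
    b.velocity.x += impulse.x * invMass;
    b.velocity.y += impulse.y * invMass;
    b.velocity.z += impulse.z * invMass;
}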

As for AI, we won't see dedicated hardware until we see a universally accepted (or perhaps two) API... and is that likely to happen?

Timkin
The reason why I think AI will beat physics to the punch is that consoles are likely to push the next generation of HW improvements, and graphics aren't giving the same kind of edge that they once were. AI is the next frontier for consoles, and I think the console manufacturers are seeing that in the sales of 3rd-party games with novel and intelligent AI on their systems.

If the consoles do it, then Joe developer is going to use it.

[edit]
In addition, people seem to think that every bit of AI has to be covered in order to make dedicated hardware useful. Not true. Even now a lot of engines still handle some features in software rather than in hardware, although those cases are fading out. Development starts with a skeleton and builds up a body of useful elements. Graphics chips and sound chips both followed the same path, and AI and physics chips aren't likely to be any different.

Implementing a particular feature directly in silicon generally buys a couple of binary orders of magnitude over a general-purpose CPU. In the case of the fastest machines in the world, the differential is a couple of decimal orders of magnitude.
[/edit]

Or at least, that's how *I'm* seeing it.
ld

Edited by - liquiddark on January 23, 2002 11:16:51 AM
No Excuses
I think graphics boards will indeed take on part of the physics... You will just load your model into the graphics card, and then either put forces on it, or just plainly "teleport" it somewhere...

-Maarten Leeuwrik
"Some people when faced with the end of a journey simply decide to begin anew I guess."
Imagine an amazing piece of hardware. It can command armies in RTS games. It can aim railguns in Quake. It can dogfight in flight simulators. It can defeat grandmasters at chess. The piece of hardware I'm imagining is so versatile that it can do anything AI related. In order to do this, it needs to be able to perform hundreds of mathematical operations very quickly, and at the same time interface with the system closely enough so that it can quickly retrieve game world information and interact with the physics system.

The piece of hardware I just described is a CPU.

You see, I don't see any way that anything short of a CPU can be used as a multi-purpose "AI accelerator." ANNs are one thing. FSMs are another. Fuzzy Logic is a third. Game tree searches like MinMax are another. I fail to see how all of these things can be standardized and consolidated. Chess is not Quake; Quake is not Command and Conquer.
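
For what it's worth, even the call signatures of those techniques refuse to line up. Here's a quick C++ sketch (every name is hypothetical, declarations only) of the interfaces a "do everything" accelerator would somehow have to cover:

#include <vector>

// Hypothetical interfaces only - the point is how little they share.

// ANN: fixed-size numeric inputs in, numeric outputs out.
std::vector<float> feedForward(const std::vector<float>& inputs);

// FSM: a symbolic state plus an event yields a new state.
enum class State { Idle, Patrol, Attack, Flee };
enum class Event { SawEnemy, LostEnemy, LowHealth };
State transition(State current, Event e);

// Fuzzy logic: degrees of membership in, one defuzzified crisp value out.
float fuzzyInfer(float distanceMembership, float healthMembership);

// Minimax: recursive search over a game-specific position type.
struct Position;   // chess, Quake and Command and Conquer all disagree here
int minimax(const Position& p, int depth, bool maximizing);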

It makes only a little more sense to have the graphics card do physics.

Modern systems have the biggest problems in getting data from Point A to Point B - not in processing it. Having the graphics card do physics requires passing more information to the graphics card over the already crammed AGP bus.

Graphics and physics should remain separate. How do you differentiate between the game world and the chat text you put on top? How do you create your own physics routines? By locking hardware "velocity buffers"? If it's anything like locking vertex buffers, it's too slow for me. Finally, if you integrate graphics and physics, aren't you tying physics to the frame rate?

In short, it just doesn't make any sense to have a graphics card do physics. I wouldn't be surprised if nVidia tried something like that though.

One problem I'm having is that the graphics card is trying to take on too many roles. It is becoming a second CPU that sits in the AGP slot and has a hookup for a monitor in the back - an integrated system not unlike the very first computers.

I expect to see more integration. nVidia's nForce is a prime example. I would prefer, however, to see a new high-speed bus standard; an entirely new modular architecture that replaces the antiquated northbridge/southbridge system.



Edited by - TerranFury on January 25, 2002 5:48:40 PM
Hmm... Using a separate CPU for AI might actually be very useful indeed... A good AI already usually runs in its own thread, and makes its calculations independent of the main game timer (the only relation it has to the rest of the game is to get the current data and store the new data)...

So, basically, we would just write normal C++ code, and designate it to the "ai-CPU"... Shouldn't be too hard, right?
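
Here's a minimal sketch of what that could look like with an ordinary worker thread (standard C++ threads; the snapshot and decision types are placeholders). The "designate it to the ai-CPU" part would just be the OS, or some future API, pinning that thread to the second processor:

#include <atomic>
#include <chrono>
#include <mutex>
#include <thread>

struct WorldSnapshot { /* positions, orders, etc. */ };
struct AiDecisions   { /* move targets, fire commands, etc. */ };

std::mutex        g_lock;
WorldSnapshot     g_snapshot;    // written by the game loop
AiDecisions       g_decisions;   // written by the AI thread
std::atomic<bool> g_running(true);

// The AI runs on its own timer, independent of the render/game loop.
void aiThread()
{
    while (g_running) {
        WorldSnapshot local;
        {
            std::lock_guard<std::mutex> guard(g_lock);
            local = g_snapshot;              // get the current data
        }
        AiDecisions result;                  // ...think about 'local' here...
        {
            std::lock_guard<std::mutex> guard(g_lock);
            g_decisions = result;            // store the new data
        }
        std::this_thread::sleep_for(std::chrono::milliseconds(50));
    }
}

int main()
{
    std::thread ai(aiThread);   // the thread we'd hand to the "ai-CPU"
    // ... main game loop reads g_decisions and refreshes g_snapshot ...
    g_running = false;
    ai.join();
}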

-Maarten Leeuwrik
"Some people when faced with the end of a journey simply decide to begin anew I guess."
quote: Original post by Ronin_54
So, basically, we would just write normal C++ code, and designate it to the "ai-CPU"... Shouldn't be too hard, right?


I agree entirely. We should just use dual CPU systems. In order to make that realistic, though, we first need a better architecture.
First, let me be open and admit that I know very little about graphics cards and how they implement their techniques. I do know a hell of a lot about the maths behind it, but that's not the issue.

On the issue of putting physics on a graphics card...

TerranFury, I agree that the bus is already overworked. I disagree though that it would be particularly more overworked if you added basic physics to the graphics card. I'm not talking about a whole physics engine for the card, just some basic stuff.

For example... let's say you wanted to define a fluid surface, like a pond. A mesh is a good way of doing that. Now, if an object is dropped into the pond, the current state of the mesh needs to be known by the CPU so that it can be updated with the effect of a deformation wave. At each step of the simulation the CPU needs to know the state of the mesh and the graphics card needs to know the state of the mesh. Obviously some information is being thrown back and forth between the CPU and GC for this purpose.

Why not add the physics of surface tension and waves to a class of meshes called a 'fluid mesh'? This class inherits from the basic mesh class but adds several methods for dealing with deformations of the mesh. Design a GC that has hardware acceleration of these particular methods.
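
Something like this, perhaps - a rough C++ sketch of that layout (hypothetical names, declarations only; the point is just the interface a card could accelerate):

#include <vector>

struct Vertex { float x, y, z; };

class Mesh {
public:
    virtual ~Mesh() {}
    const std::vector<Vertex>& vertices() const { return verts; }
protected:
    std::vector<Vertex> verts;
};

// The 'fluid mesh': the same mesh data, plus deformation methods that a
// card could in principle run instead of the CPU touching every vertex.
class FluidMesh : public Mesh {
public:
    // Kick off a circular deformation wave at (x, z) with a given amplitude.
    void disturb(float x, float z, float amplitude);

    // Advance surface tension / wave propagation by dt; the candidate for
    // hardware acceleration.
    void stepWaves(float dt);
};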

As I said, I'm not well versed in how GCs implement their hardware acceleration, so if this is not possible, then tell me and I'll throw my idea in the 'stupid' basket. Otherwise, what are the negatives of doing things this way?

Cheers,

Timkin
Hmmm... Actually, moving physics to the graphics card takes load *away* from the AGP bus. You no longer have to send the exact data about the location of every vertex; you just pump through the data about the forces that work on objects.
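
A back-of-the-envelope comparison, with completely made-up sizes, shows the idea: per frame, force packets are a trickle next to a full vertex stream.

#include <cstdio>

// Illustrative numbers only: per-frame bus traffic if the CPU ships every
// vertex, versus shipping one force packet per object and letting the card
// integrate motion itself.
struct ForcePacket { int objectId; float fx, fy, fz; };   // 16 bytes

int main()
{
    const long objects        = 200;
    const long vertsPerObject = 1000;
    const long bytesPerVertex = 32;    // position + normal + texture coords, say

    const long vertexBytes = objects * vertsPerObject * bytesPerVertex;
    const long forceBytes  = objects * (long)sizeof(ForcePacket);

    std::printf("all vertices, every frame: %ld bytes\n", vertexBytes); // 6,400,000
    std::printf("one force per object:      %ld bytes\n", forceBytes);  //     3,200
}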

-Maarten Leeuwrik
"Some people when faced with the end of a journey simply decide to begin anew I guess."
quote: Original post by Anonymous Poster
Here's the deal:

Graphics cards work only because you can take a texture and throw it into VRAM, so every time you want to draw said texture to the backbuffer, the CPU only has to pass the video card enough information to tell it which texture and where. It's a proven technique, and it works like a charm. The less going through the card bus, the better.

What Carmack was probably trying to envision with his "polygons bad, solid models good" was the idea of putting not only the textures into VRAM, but also the geometry of the object being rendered, so all that's going through the bus is a pointer to the model in VRAM and a dozen or so bytes about where and how it should be drawn. Basically, he was talking about geometry acceleration, and he was right; it came to be, but not in the form he was expecting.

Now, with AI, the RAM that holds the "intelligent" entities has to be somewhere it can quickly tell the video card where it is and how to draw itself. The memory doing the "thinking" for each entity also has to be somewhere it can quickly be copied/compared to said entity, so it knows what its next move is going to be and what to do about it.

An AI accelerator card wouldn't work without a unified memory architecture, simply because there would be too much data flying back and forth across the bus to make it worthwhile.

Now, what we CAN take off the CPU, thanks to geometry acceleration and nVidia's nFinite FX engine, is physics, which takes a ton off the CPU for anything with a physics engine comparable to a Quake game (read: even the shittiest of physics engines would benefit).

See, nFinite FX lets you "program custom accelerated effects." That basically means you can write a function and put it on the video card to be run by the graphics processor. This is REALLY cool, because this function can manipulate anything in VRAM without touching the CPU. With geometry acceleration, the vertices that make up your world are on the card, so if you write a function that takes in a pointer to a model in VRAM and information about a force acting on that model (vector, magnitude, spread, etc.) and put it on the video card, then all the CPU has to worry about is the couple dozen bytes that make up the parameters and return value of that function. Those parameters give the function all the information it needs (if the model in VRAM is detailed enough) to alter the geometry of that model to reflect whatever impact it just received. To be nice, the function could return a vector representing the changed trajectory of that model, and VOILA! Your car has a realistically smashed and crumpled hood that looks like a painstakingly hand-crafted model even though it isn't, and it is spinning away from whatever hit it on its center of gravity, offset by whatever friction with the road there may be.

Meanwhile, on the CPU side of things, the opponent driver AI can go through an "I just hit something" routine, and a little model inside can run through a scripted grin/flip-off animation, so all the kids at home can say "WOW! The AI in this game ROCKS! The drivers know when they hit something!"

(Sorry, I've long been of the opinion that game AI isn't.)
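
For what it's worth, here is roughly the shape of the "force function" described above, written as ordinary C++ with hypothetical types and names. In the scheme the anonymous poster sketches, this body would live on the card and work on geometry already resident in VRAM:

struct Vec3 { float x, y, z; };

struct Model {           // stand-in for geometry resident in VRAM
    Vec3* vertices;
    int   vertexCount;
};

// All the CPU ships across the bus: which model, and a description of the hit.
struct Impact {
    Vec3  direction;     // unit vector of the force
    float magnitude;
    float spread;        // radius of the dent
    Vec3  point;         // where it landed, in model space
};

// Deform vertices near the impact point and return the change in trajectory.
Vec3 applyImpact(Model& m, const Impact& hit)
{
    for (int i = 0; i < m.vertexCount; ++i) {
        Vec3& v = m.vertices[i];
        float dx = v.x - hit.point.x;
        float dy = v.y - hit.point.y;
        float dz = v.z - hit.point.z;
        float distSq = dx * dx + dy * dy + dz * dz;
        if (distSq < hit.spread * hit.spread) {
            // Crumple toward the force, fading out with distance from the hit.
            float dent = hit.magnitude * (1.0f - distSq / (hit.spread * hit.spread));
            v.x += hit.direction.x * dent;
            v.y += hit.direction.y * dent;
            v.z += hit.direction.z * dent;
        }
    }
    // The changed trajectory handed back to the CPU (a dozen bytes or so).
    Vec3 delta = { hit.direction.x * hit.magnitude,
                   hit.direction.y * hit.magnitude,
                   hit.direction.z * hit.magnitude };
    return delta;
}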


We've talked about this a lot at the GDC roundtables over the years (you can see writeups if you want at my site; click under "Soapbox" and you'll find them there), and the general conclusion is always the same. Developers think they'd be neat, but (as others have mentioned here) they aren't sure what could be parameterized sufficiently to bother with a card. You're much better off if you could have your own CPU, frankly...

Now an SDK on the other hand...that has possibilities.....




Ferretman

ferretman@gameai.com
www.gameai.com

From the High Mountains of Colorado


quote: Original post by TerranFury
The piece of hardware I'm imagining is so versatile that it can do anything AI related. In order to do this, it needs to be able to perform hundreds of mathematical operations very quickly, and at the same time interface with the system closely enough so that it can quickly retrieve game world information and interact with the physics system.


The hardware doesn't need to be nearly that versatile to deliver an order-of-magnitude improvement. Graphics cards accrued functionality as time went by. Just about every game is going to need node-net functionality in some form, for example. LOS checks. Tree searching (itself a node-net problem, of course). I'm sure you can think of more.
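
One concrete example of such a primitive: the "find a route through this node net" query that pathfinding, LOS graphs and game trees all lean on. A plain C++ sketch (nothing card-specific, names made up) of the kind of building block a card or an SDK could expose once and let everyone reuse:

#include <cstddef>
#include <queue>
#include <vector>

// One reusable node-net primitive: breadth-first search over an adjacency list.
// Returns the path from start to goal as node indices (goal-to-start order),
// or an empty vector if the goal is unreachable.
std::vector<int> findPath(const std::vector<std::vector<int> >& adjacency,
                          int start, int goal)
{
    std::vector<int> cameFrom(adjacency.size(), -1);
    std::queue<int> frontier;
    frontier.push(start);
    cameFrom[start] = start;

    while (!frontier.empty()) {
        int node = frontier.front();
        frontier.pop();
        if (node == goal) break;
        for (std::size_t i = 0; i < adjacency[node].size(); ++i) {
            int next = adjacency[node][i];
            if (cameFrom[next] == -1) {
                cameFrom[next] = node;
                frontier.push(next);
            }
        }
    }

    std::vector<int> path;
    if (cameFrom[goal] == -1) return path;   // unreachable
    for (int node = goal; node != start; node = cameFrom[node])
        path.push_back(node);
    path.push_back(start);
    return path;
}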

quote: You see, I don't see any way that anything short of a CPU can be used as a multi-purpose "AI accelerator." ANNs are one thing. FSMs are another. Fuzzy Logic is a third. Game tree searches like MinMax are another.

Yet all of the above are mathematically equivalent. DirectX and OpenGL don't - or at least didn't for a long time - approach every problem from the same angle. As Ferretman said, an SDK has possibilities; maybe two or three would be better.

The difficulty that we're running into here, perhaps, is remembering that HW acceleration puts the onus on the developers themselves to use the deployed functionality correctly. It isn't the card's responsibility to give you every tool you ever wanted. The SDK writer has to learn to utilize the hardware efficiently for a particular type of AI, not the other way around. All the card vendor has to do is provide a set of tools for the guy to work with, and then keep a careful eye on the market for new features.

Slightly OT - speaking of AI sdks, has anyone heard from the fellows at OpenAI lately?

ld


Edited by - liquiddark on January 25, 2002 10:52:04 AM
No Excuses

