As for AI, we won't see dedicated hardware until we see a universally accepted (or perhaps two
![](wink.gif)
Timkin
quote: Original post by Ronin_54
So, basically, we would just write normal C++ code and designate it to the "ai-CPU"... Shouldn't be too hard, right?
quote: Original post by Anonymous Poster
Here's the deal:
Graphics cards work because you can upload a texture into VRAM once; after that, every time you want to draw that texture to the back buffer, the CPU only has to pass the video card enough information to say which texture and where. It's a proven technique, and it works like a charm. The less going through the card bus, the better.
What Carmack was probably trying to envision with his "polygons bad, solid models good" was the idea of putting not only the textures into VRAM but also the geometry of the object being rendered, so that all that's going through the bus is a pointer to the model in VRAM and a dozen or so bytes about where and how it should be drawn. Basically, he was talking about geometry acceleration, and he was right: it came to be, just not in the form he was expecting.
Now, with AI, the RAM that holds the "intelligent" entities has to be somewhere it can quickly tell the video card where each entity is and how to draw it. The memory doing the "thinking" for an entity also has to be somewhere it can quickly be copied/compared against that entity, so it knows what its next move is going to be and what to do about it.
An AI accelerator card wouldn't work without a unified memory architecture, simply because there would be too much data flying back and forth across the bus to make it worthwhile.
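A back-of-envelope calculation shows why the round trips hurt. Every number below is an illustrative assumption, not a measurement: an assumed 1,000 entities with 1 KB of state each, shipped to the hypothetical AI card and back every frame at 60 fps.

```cpp
// Back-of-envelope sketch of AI accelerator bus traffic.
// All constants are illustrative assumptions, not measurements.
constexpr double kAgents     = 1000;   // assumed entity count
constexpr double kStateBytes = 1024;   // assumed per-entity state
constexpr double kFps        = 60;

// Unlike draw commands, entity state would cross the bus TWICE per
// frame: out to the AI card, then back with the decisions.
double aiBusBytesPerSecond() {
    return kAgents * kStateBytes * 2 * kFps;
}
```

That works out to roughly 120 MB/s of pure overhead, a hefty slice of an early-2000s AGP bus's theoretical bandwidth, before the video card's own traffic is even counted.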
Now, what we CAN take off the CPU, thanks to geometry acceleration and NVIDIA's nFinite FX engine, is physics, which lifts a ton of work off the CPU for anything with a physics engine comparable to a Quake game's (read: even the shittiest of physics engines would benefit).
See, nFinite FX lets you "program custom accelerated effects." That basically means you can write a function and put it on the video card to be run by the graphics processor. This is REALLY cool, because that function can manipulate anything in VRAM without touching the CPU.

With geometry acceleration, the vertices that make up your world are already on the card. So if you write a function that takes a pointer to a model in VRAM plus information about a force acting on that model (vector, magnitude, spread, etc.), and put it on the video card, then all the CPU has to worry about is the couple dozen bytes that make up that function's parameters and return value. Those parameters give the function all the information it needs (if the model in VRAM is detailed enough) to alter the geometry of that model to reflect whatever impact it just received. To be nice, the function could return a vector representing the model's changed trajectory, and VOILA! Your car has a realistically smashed and crumpled hood that looks like a painstakingly hand-crafted model even though it isn't, and it spins away from whatever hit it about its center of gravity, offset by whatever friction with the road there may be.
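A CPU-side caricature of such an on-card impact function might look like the sketch below. It is purely illustrative: the real thing would run as a vertex program on the card, and the name `applyImpact`, the linear falloff, and the toy recoil rule are all assumptions invented for the example.

```cpp
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

// Hypothetical impact routine: push vertices near the hit point along
// the force direction, scaled by magnitude with a linear falloff out to
// `spread`. Returns a toy trajectory change so the CPU stays in the loop
// with only a few dozen bytes of parameters and return value.
Vec3 applyImpact(std::vector<Vec3>& model, Vec3 hit, Vec3 dir,
                 float magnitude, float spread) {
    for (auto& v : model) {
        float dx = v.x - hit.x, dy = v.y - hit.y, dz = v.z - hit.z;
        float d = std::sqrt(dx * dx + dy * dy + dz * dz);
        if (d < spread) {
            float w = magnitude * (1.0f - d / spread);  // linear falloff
            v.x += dir.x * w;
            v.y += dir.y * w;
            v.z += dir.z * w;
        }
    }
    // Toy rule: the whole model recoils along the applied force.
    return {dir.x * magnitude, dir.y * magnitude, dir.z * magnitude};
}
```

The point of the sketch is the interface, not the deformation math: megabytes of vertices stay on the card, while only `hit`, `dir`, `magnitude`, `spread`, and one returned vector cross the bus.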
Meanwhile, on the CPU side of things, the opponent driver AI can go through an "I just hit something" routine, and a little model inside can run through a scripted grin/flip-off animation, so all the kids at home can say "WOW! The AI in this game ROCKS! The drivers know when they hit something!"
(sorry, I've long been of the opinion that game AI isn't)
quote: Original post by TerranFury
The piece of hardware I'm imagining is so versatile that it can do anything AI related. In order to do this, it needs to be able to perform hundreds of mathematical operations very quickly, and at the same time interface with the system closely enough so that it can quickly retrieve game world information and interact with the physics system.
quote: You see, I don't see any way that anything short of a CPU can be used as a multi-purpose "AI accelerator." ANNs are one thing. FSMs are another. Fuzzy logic is a third. Game tree searches like MinMax are a fourth.