
Accelerated AI boards.

42 comments, last by GBGames 22 years, 5 months ago
I agree, it will probably become a reality. The question is, will it be something that leaves certain approaches out, the way the voxel vs. polygon choice did? Or will someone find a way to accelerate just what needs accelerating while still allowing it to be programmed easily? Too hard for me to think about right now, and I have very rudimentary knowledge of AI anyway.
-------------------------
GBGames' Blog: An Indie Game Developer's Somewhat Interesting Thoughts | Staff Reviewer for Game Tunnel
The way I see it, the closest thing to "AI hardware" would be something allowing for massively parallel computation. Something like a card that is simply an array of CPUs, all of which have all the functionality of any other CPU (even if they are RISC instead of CISC; come to think of it, maybe that would be better). Even this system shows a bias towards certain types of AI... specifically NNs.
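(To make the NN bias concrete: each neuron in a layer depends only on the previous layer's outputs, so every neuron could be handed to its own processor on such a card. A minimal C++ sketch, with threads standing in for the array elements and all names invented for illustration:)

// Hypothetical sketch: evaluating one NN layer on an "array of CPUs",
// with each array element simulated here by a std::thread.
#include <cmath>
#include <thread>
#include <vector>

// One neuron's work: weighted sum of inputs followed by a sigmoid.
static float neuron(const std::vector<float>& in, const std::vector<float>& w)
{
    float sum = 0.0f;
    for (size_t i = 0; i < in.size(); ++i)
        sum += in[i] * w[i];
    return 1.0f / (1.0f + std::exp(-sum));
}

// Each neuron is independent, so each one can run on its own processor.
std::vector<float> evalLayer(const std::vector<float>& in,
                             const std::vector<std::vector<float>>& weights)
{
    std::vector<float> out(weights.size());
    std::vector<std::thread> cpus;
    for (size_t n = 0; n < weights.size(); ++n)
        cpus.emplace_back([&, n] { out[n] = neuron(in, weights[n]); });
    for (auto& t : cpus)
        t.join();
    return out;
}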
Here's the deal:

Graphics cards work only because you can take a texture and throw it into VRAM, so every time you want to draw that texture to the backbuffer, the CPU only has to pass the video card enough information to tell it which texture and where. It's a proven technique, and it works like a charm. The less going through the card's bus, the better.

What Carmack was probably trying to envision with his "polygons bad, solid models good" was the idea of putting not only the textures into VRAM, but also the geometry of the object being rendered, so all that's going through the bus is a pointer to the model in VRAM and a dozen or so bytes about where and how it should be drawn. Basically, he was talking about geometry acceleration, and he was right: it came to be, but not in the form he was expecting.
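(A rough sketch of how little would cross the bus per object once both textures and geometry live in VRAM; the API names below are invented for illustration, not any real driver interface:)

// Hypothetical driver API: once the texture and the model geometry are
// resident in VRAM, a draw call is just a couple of handles plus a
// transform -- a few dozen bytes.
struct DrawCall {
    unsigned modelHandle;    // which model, already uploaded to VRAM
    unsigned textureHandle;  // which texture, already uploaded to VRAM
    float    transform[16];  // where and how to draw it (4x4 matrix)
};

// Stand-in so the sketch compiles; the real call would hand these
// bytes to the card's driver.
void gpuSubmit(const DrawCall& /*call*/) {}

void drawFrame(const DrawCall* calls, int count)
{
    for (int i = 0; i < count; ++i)
        gpuSubmit(calls[i]);   // roughly 72 bytes across the bus per object
}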

Now, with AI, the RAM that holds the "intelligent" entities has to be somewhere it can quickly tell the video card where each entity is and how to draw it. The memory doing the "thinking" for an entity also has to be somewhere it can quickly be copied to/compared with that entity, so it knows what its next move is going to be, and what to do about it.

An AI accelerator card wouldn't work without a unified memory architecture, simply because there would be too much data flying back and forth across the bus to make it worthwhile.

Now, what we CAN take off the CPU, thanks to geometry acceleration and nVidia's nFinite FX engine, is physics, which takes a ton off the CPU for anything with a physics engine comparable to a Quake game (read: even the shittiest of physics engines would benefit).

See, nFinite FX lets you "program custom accelerated effects." That basically means you can write a function and put it on the video card to be run by the graphics processor. This is REALLY cool, because that function can manipulate anything in VRAM without touching the CPU. With geometry acceleration, the vertices that make up your world are on the card. So if you write a function that takes a pointer to a model in VRAM plus information about a force acting on that model (vector, magnitude, spread, etc.) and put it on the video card, then all the CPU has to worry about is the couple dozen bytes that make up the parameters and return value of that function. Those parameters give the function all the information it needs (if the model in VRAM is detailed enough) to alter the geometry of that model to reflect whatever impact it just received. To be nice, the function could return a vector representing the changed trajectory of the model, and VOILA! Your car has a realistically smashed and crumpled hood that looks like a painstakingly hand-crafted model even though it isn't, and it spins away from whatever hit it around its center of gravity, offset by whatever friction with the road there may be.
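(Purely as a sketch of the interface being described above - every type and function name here is hypothetical, not nFinite FX's actual API:)

// Sketch of the CPU-side view of the idea above. Everything heavy lives in
// VRAM; the CPU only passes a handle and a small description of the force,
// and gets a new trajectory back. All names are invented.
struct Force {
    float direction[3];  // unit vector of the impact
    float magnitude;     // how hard
    float spread;        // how localized the deformation is
};

struct Vec3 { float x, y, z; };

// In the scheme above this would run on the graphics processor against the
// model resident in VRAM; the stand-in body here just reflects the impact
// so the sketch compiles.
Vec3 applyImpact(unsigned /*modelInVram*/, const Force& f)
{
    return { -f.direction[0] * f.magnitude,
             -f.direction[1] * f.magnitude,
             -f.direction[2] * f.magnitude };
}

void onCollision(unsigned carModel, const Force& hit, Vec3& carVelocity)
{
    carVelocity = applyImpact(carModel, hit);  // a few dozen bytes across the bus
}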

Meanwhile, on the CPU side of things, the opponent driver AI can go through an "I just hit something" routine and a little model inside can run through a scripted grin/flip-off animation, so all the kids at home can say "WOW! The AI in this game ROCKS! The drivers know when they hit something!"

(Sorry, I've long been of the opinion that game AI isn't.)
Maybe there could be some sort of hardware that accelerated logic operations, such as inferring facts from existing facts and rules. You'd send it little scripts in something resembling Prolog, if not Prolog itself, and every so often you could send it a query of some sort, which it could answer in super-fast time. Predicate calculus seems mathematically formal enough to be put onto hardware quite easily - the question is whether games could find much use for it (and thus drive the market).
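(As a toy illustration of the kind of work such a board would take off the CPU: a minimal forward-chaining sketch in C++. It is propositional only - far simpler than Prolog's unification - and the rule format and example facts are made up:)

// Toy forward chaining: derive new facts from rules of the form
// "if A and B then C" until nothing new can be derived.
#include <set>
#include <string>
#include <vector>

struct Rule {
    std::vector<std::string> conditions;  // all must hold
    std::string conclusion;               // then this holds
};

void infer(std::set<std::string>& facts, const std::vector<Rule>& rules)
{
    bool changed = true;
    while (changed) {
        changed = false;
        for (const Rule& r : rules) {
            bool allHold = true;
            for (const std::string& c : r.conditions)
                if (!facts.count(c)) { allHold = false; break; }
            if (allHold && facts.insert(r.conclusion).second)
                changed = true;  // learned something new, go around again
        }
    }
}

// Example: facts = {"sees_player", "low_health"} and the rule
// {"sees_player", "low_health"} -> "should_flee"; after infer(),
// "should_flee" is in the fact set and a query is just a set lookup.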

[ MSVC Fixes | STL | SDL | Game AI | Sockets | C++ Faq Lite | Boost ]
quote: Original post by liquiddark
The fact that CG has converged doesn't mean it's found either the best or the only solution to any given (abstract) problem in the domain.


You are, of course, correct. Look at the KyroII video cards: who would have thought that tile-based rendering would make a comeback? I was a bit worried that people would read too much into my words.

It's important to realize what has happened since we moved the CG computations away from the main CPU. It was a good idea when CPU clock speed was the limiting factor, but today's bottleneck is bandwidth. Your video card and your CPU have to communicate, as does your main memory (and so on). One problem is too much traffic. The intuitive solution is to try to increase the size of the communication pipe.

But there is another, far more serious problem. The clock speed of a modern CPU far exceeds that of the memory bus. The more dedicated hardware components you have, the more communication needs to take place to keep things in sync. It's the law of diminishing returns. The same principle holds for parallel computation: if using a dual-processor machine doubled your performance, then everyone would be using them by now (it would be very cost-effective if it were so simple).
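(A quick way to see why the second processor doesn't double performance: by Amdahl's law, if only a fraction $p$ of the work can be spread across $N$ processors, the best possible speedup is

$S(N) = \dfrac{1}{(1 - p) + p/N}$

so even with $p = 0.8$, a second processor buys you at most $1/(0.2 + 0.4) \approx 1.7\times$, before counting any of the synchronisation traffic mentioned above. The numbers are illustrative only.)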

Here are two possible scenarios for the future:

1. Rapid decrease in communication overhead

Wouldn't it be great if some major engineering advance reduced the communication overhead associated with multiple nodes (e.g. a CPU and a GPU, or the CPUs in a supercomputer)? Unfortunately, we've been waiting for this since the dawn of networking in the Xerox Palo Alto labs. Of course, maybe something like this is just around the proverbial corner ...

2. Integration of tightly coupled components

Perhaps sadly, this seems to be the way many researchers want to go. nVidia will still design the GPU hardware, Intel will still design the CPU core, and Micron the RAM ... but in the end someone will put it all on "one chip". Is this a disgusting, abominable hack? Yes. If I were to implement it tomorrow, would it significantly increase your performance? Yes. Would we still have to wait for the bandwidth genie? No.

quote: Original post by liquiddark
Point being, you don't need the *best* solution to take advantage of acceleration. You just need a solution that produces results which are acceptable.


I agree 100%. You've missed my point though. The field of computer graphics, for reasons scientific, commercial and historical, was a field ready and able to benefit from hardware acceleration. I do not think that the field of AI will ever be: no one can agree on anything.
------When thirsty for life, drink whisky. When thirsty for water, add ice.
quote: Original post by Anonymous Poster
What Carmack was probably trying to envision with his "polygons bad, solid models good" was (snip...)


He explained it concisely and clearly: he fully expected (expects?) that there would be a move towards an analog of a 3D pixel. Not a texture, not a hollow-body geometry pipe. A 3D pixel, with all of the good and bad things you incur thereby. Every single graphical object is shown in 3D glory via a cubic equivalent of rasterization, and "resolution" suddenly holds for 3D as well as 2D. We're talking giga- (tera-?) bytes of RAM and the time to work with it. He was in no way unclear.
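(To put a rough number on the RAM claim: even a modest $1024^3$ volume at a single byte per voxel is $1024 \times 1024 \times 1024 = 2^{30}$ bytes, a full gigabyte, and doubling the linear resolution multiplies that by eight; add color and material per voxel and terabytes stop sounding absurd. These figures are illustrative only.)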

I explain for informational purposes only.

ld
who wrote more, but realized he'd made his points and didn't want to restate them, and so deleted the restatements.

No Excuses
quote: Original post by Graylien
The field of computer graphics, for reasons scientific, commercial and historical, was a field ready and able to benefit from hardware acceleration. I do not think that the field of AI will ever be: no one can agree on anything.


I agree that no one can agree, currently. We diverge at this point: I think that at some point, someone's going to design a general-purpose, straight-up hammer rather than focus on their own version of a ball-peen, claw, or pestle. This will likely happen *after* the first lab-bred Turing-complete AI is designed, but we can always hope for the clouds to part early. Hell, I'm from Newfoundland. It's what we do.

Kylotan:
I think that's probably the other way to go. Having written a game in Prolog (didn't have fancy graphics, but then it only took 2 weeks), I'm willing to postulate that predicate calculus is plenty powerful and readily implementable. This is probably the first stab at a good, solid hammer candidate.

thank you.
ld



Edited by - liquiddark on January 18, 2002 12:43:08 AM
No Excuses
liquiddark, I applaud your attention to detail. I had to cut my post short because it was getting too long ... that, and I had to go to the pub (hell, I live in Montreal, that's what we do).

In retrospect, I'm glad that I didn't finish my post; you did it for me. Odd as it may seem, I think we see eye to eye on this!

And I do think that one day we will see something akin to hardware-accelerated Prolog, but not in the way you might think. I have a hunch that Formal Verification will become one of the next big things for language creators. There might emerge one day a new general-purpose language, part of which may indeed be based on the predicate calculus. This would be very interesting from a hardware design standpoint.
------When thirsty for life, drink whisky. When thirsty for water, add ice.
On the idea of using Predicate Calculus...

First-order logics just don't cut it in AI. Certainly they can add some depth to current game AI, but ultimately you wouldn't want this sort of knowledge base in a complex world: it is too restrictive, and inference can be extremely complex even in simple worlds. Since a game is dynamic, you would need at a minimum the Situation Calculus, and then you have the Frame Problem to deal with in all its ugliness.

What I could envisage is a non-monotonic logic - like belief networks - being placed on a card. All the processing could be handled in hardware on the card: inference, querying, value of information, decision-theoretic computations, etc. All that would need to be sent back and forth is information about which variables to query, or which observations need to be added to the network.

The benefit of dynamic belief networks is that they are a general representation of Markov processes so you can handle all sorts of non-linear modelling for dynamic systems. I can envisage coupling a stochastic model of the environment to a physics engine for some very realistic simulations!
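(A rough sketch of what the host-side traffic to such a card might reduce to; the interface below is invented for illustration, not a real driver API. The network itself, and all inference, would live on the card, so the bus only carries variable IDs, observed states and returned probabilities:)

#include <cstdint>

// Hypothetical host-side view of a belief-network card. The method bodies
// are stand-ins so the sketch compiles; the real work happens on the card.
struct BeliefNetCard {
    void  observe(std::uint16_t /*variable*/, std::uint8_t /*state*/) { /* a few bytes out to the card */ }
    void  step()                                                      { /* advance the dynamic network one tick */ }
    float query(std::uint16_t /*variable*/, std::uint8_t /*state*/)   { return 0.5f; /* a few bytes back */ }
};

void aiTick(BeliefNetCard& card, bool playerVisible)
{
    card.observe(/*SEES_PLAYER*/ 3, playerVisible ? 1 : 0);
    card.step();
    if (card.query(/*PLAYER_IS_THREAT*/ 7, 1) > 0.7f) {
        // pick a defensive behaviour
    }
}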

Anyway, just ramblings and ideas...

Cool thread, keep it up!

Timkin
I like the idea of dedicated AI boards. Though I did have a good laugh with a friend of mine about the idea of mandatory AI boards for games. With graphics you can turn down the level of detail and remove features to deal with lack of processing power. I can only imagine a little "intelligence" slider for your AI elements.

Humor aside, because true AI is so general, it would probably be best to use a generic model for dedicated AI boards as opposed to an algorithm-specific AI board. The basic problem comes down to the fact that graphics are about as low-level as you can get. The world commands the graphics; AIs command the world. While pathfinding and similarly often-used algorithms could be very useful, the level of complexity in game world design makes it difficult to generalize the elements in said game world. My point is this: game world design changes dramatically from game to game, and I'm afraid that a pathfinding algorithm which works for one game wouldn't work for another due to major design differences.
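(To illustrate the generalization problem: even something as "standard" as pathfinding only moves into hardware if every game can express its world through one agreed-upon interface, something like the hypothetical sketch below. Each game's answers to these three questions differ wildly - grids, navmeshes, portals, full 3D volumes - which is exactly the difficulty:)

// The minimum a generic pathfinding accelerator would have to agree on
// with every game: how to enumerate neighbours, cost an edge, and
// estimate remaining distance. Names invented for illustration.
struct NodeId { unsigned value; };

struct SearchSpace {
    virtual int   neighbours(NodeId from, NodeId* out, int maxOut) const = 0;
    virtual float edgeCost(NodeId from, NodeId to) const = 0;
    virtual float heuristic(NodeId from, NodeId goal) const = 0;
    virtual ~SearchSpace() {}
};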

Orion

This topic is closed to new replies.
