
Turing meet Darwin, Darwin meet Turing

Started by December 02, 2002 08:14 PM
34 comments, last by capn_midnight 21 years, 7 months ago
Interesting how life resembles life. Coincidence or karma?

Check out these links:

http://www.nik.com.au/alice/

http://slashdot.org/article.pl?sid=02/11/17/1914241&mode=thread&tid=133

-Kirk
On the issue of a bot needing human input:

This sounds daunting because it implies we'd have to spend loads of time talking to a dumbass bot before it can start giving sensible responses.

What about a chatter bot with a "surfer" module that surfs the internet in the background while talking to other bots/humans? The search results could augment what the bot knows, or at least prompt it to ask questions.

Basically: start talking to a bot about X, and meanwhile it kicks off a Google search for X, asking the human/other bot for its thoughts on X.

Different breeds of bot draw different information from the same results, and ultimately humans act as a filter.

Maybe this would act to reduce the "drudge" of talking to a bot.
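To make the idea concrete, here's a minimal sketch of that "surfer" module: the bot kicks off a search in a background thread the moment a topic comes up, keeps the conversation going, and folds results into its knowledge base later. Everything here is hypothetical — `web_search` is a stand-in for whatever search engine you'd actually query, and `SurferBot` is just an illustrative name.

```python
import threading
import queue

def web_search(topic):
    # Placeholder for a real search-engine query.
    return [f"fact about {topic}"]

class SurferBot:
    def __init__(self):
        self.knowledge = {}          # topic -> list of gathered facts
        self.results = queue.Queue() # finished searches land here

    def hear(self, topic):
        # Kick off the search in the background; keep talking meanwhile.
        threading.Thread(
            target=lambda: self.results.put((topic, web_search(topic))),
            daemon=True).start()
        return f"What do you think about {topic}?"

    def absorb(self):
        # Fold any finished searches into the knowledge base.
        while not self.results.empty():
            topic, facts = self.results.get()
            self.knowledge.setdefault(topic, []).extend(facts)
```

The point is only the shape of it: the human never waits on the search, and the bot's question back ("what do you think about X?") buys time while results arrive.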

Tim.

[edited by - gljunkie on December 3, 2002 10:17:40 AM]
Talk bots don't work with each other that well. They always end up in a why/why not rut.

Long separated by cruel fate, the star-crossed lovers raced across the grassy field toward each other like two freight trains, one having left York at 6:36 p.m. travelling at 55 mph, the other from Peterborough at 4:19p.m. at a speed of 35 mph.
With love, AnonymousPosterChild
Well, I guess inter-bot interaction will not be the main focus, other than learning from the other bots' failings or successes. What I mean by failure or success is that if a human talks to a bot and is not fooled, then the bot will be "killed", and the other bots will see this and try to learn not to make the same mistakes. Maybe we could organize it like a multiplayer battle game of some sort, where the object is to hunt down and kill all the replicants, like in Blade Runner.

[Formerly "capn_midnight". See some of my projects. Find me on twitter tumblr G+ Github.]

quote: Original post by capn_midnight
Well, I guess inter-bot interaction will not be the main focus, other than learning from the other bots' failings or successes. What I mean by failure or success is that if a human talks to a bot and is not fooled, then the bot will be "killed", and the other bots will see this and try to learn not to make the same mistakes. Maybe we could organize it like a multiplayer battle game of some sort, where the object is to hunt down and kill all the replicants, like in Blade Runner.


Great idea! The only problem I see is that you'd initially have to provide the bots with a _HUGE_ knowledge base in order to have at least one candidate out of thousands that 'survives' long enough to evaluate why the others got 'killed'.

Pat


How about looking at it from another angle. You have one bot, with one knowledge base, and the genetic/Darwinian part applies to the routines used to form coherent speech: associations, etc. Limit it to just a few responses per round, after which the human scores the performance. The scores "feed" the modules being used. So you get a classic case of competition for food. Healthy modules get to "mate" more often. Unhealthy modules eventually die out. Throw in some random mutations and you've got evolution, baby.
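The loop above is a textbook genetic algorithm, so here's a minimal sketch of it under stated assumptions: each "module" is just a weight vector over some response strategies (the strategy names here are made up for illustration), human scores are the fitness, and selection, crossover, and mutation work exactly as described in the post.

```python
import random

random.seed(0)

# Hypothetical response strategies a module might weight.
STRATEGIES = ["echo", "question", "topic_shift", "elaborate"]

def random_module():
    return [random.random() for _ in STRATEGIES]

def mutate(module, rate=0.1):
    # Random mutations: occasionally nudge a weight.
    return [w + random.uniform(-0.2, 0.2) if random.random() < rate else w
            for w in module]

def mate(a, b):
    # Uniform crossover: child takes each weight from one parent.
    return [random.choice(pair) for pair in zip(a, b)]

def evolve(population, scores, keep=0.5):
    # Healthy modules (high human scores) survive and mate;
    # unhealthy ones die out and are replaced by offspring.
    ranked = [m for _, m in sorted(zip(scores, population),
                                   key=lambda p: p[0], reverse=True)]
    survivors = ranked[:max(2, int(len(ranked) * keep))]
    children = [mutate(mate(random.choice(survivors),
                            random.choice(survivors)))
                for _ in range(len(population) - len(survivors))]
    return survivors + children
```

Each round of conversation would call `evolve` once with the human's scores; over many rounds the weight vectors that earn good scores come to dominate the population.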
I have an idea. How about you first teach the bots the "Who's on First" routine? If you can give a bot common phrases and a basic understanding of great works of literature, then you can also let it come up with its own way of rephrasing its words based on the rules of a given language.
Now I shall systematically disembowel you with a .... Click here for Project Anime
Original post by capn_midnight :
"Well, I guess inter-bot interaction will not be the main focus, other than learning from the other bots' failings or successes. What I mean by failure or success is that if a human talks to a bot and is not fooled, then the bot will be "killed", and the other bots will see this and try to learn not to make the same mistakes. Maybe we could organize it like a multiplayer battle game of some sort, where the object is to hunt down and kill all the replicants, like in Blade Runner."

How cool would it be to see the "smarter" bots slowly develop and imitate the HUMANS in this situation... it would be freaky/cool to see a BOT trying to trick us into thinking it's one of us, so as not to get "killed"... hahaha. Sorry, this post has no substance, but it's always fun to imagine.
What if we do create the ultimate bot from this thing? What if the bot becomes so smart that, in order to fool humans even further, it tries to become human and take on corporeal form? What if it starts hacking into bank accounts to fund its online purchases of parts to build its own robotic body? Did anyone see this on The X-Files?
...sweet!

[Formerly "capn_midnight". See some of my projects. Find me on twitter tumblr G+ Github.]

The current problem with this as I see it is that we focus too much on the "conversation" part and not enough on the "cognitive" part. Before bots can talk to each other they need to conceptualize "thoughts" and "knowledge". So far none of the bots that I know of have much A.I. in them. They just do some sentence analysis and come up with a reworked sentence. They work on the "look", not the "content". The communication is a tool that allows us to exchange thoughts. It is not intelligence.

The first thing to accomplish would be for the computer to represent knowledge in a form that enables it to "think", i.e. to work with that knowledge. Once that is accomplished, it would be a good time to add communication skills to extract from, and add to, the knowledge it has.

We need to build the brain before we can work on the communication skills.
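One classic way to make "knowledge the program can work with" concrete is a triple store: facts as (subject, relation, object) tuples, with even a trivial inference rule giving the bot something sentence analysis alone can't. This is just an illustrative sketch, not a proposal for a full representation; the `is_a` transitivity here assumes the fact set has no cycles.

```python
# Facts as (subject, relation, object) triples, plus one tiny
# inference step: "is_a" links chain transitively.
class KnowledgeBase:
    def __init__(self):
        self.triples = set()

    def add(self, subj, rel, obj):
        self.triples.add((subj, rel, obj))

    def is_a(self, subj, obj):
        # Direct fact, e.g. ("canary", "is_a", "bird").
        if (subj, "is_a", obj) in self.triples:
            return True
        # Otherwise follow the chain: canary -> bird -> animal.
        return any(self.is_a(mid, obj)
                   for s, r, mid in self.triples
                   if s == subj and r == "is_a")
```

The point of the example is the difference from reworked-sentence bots: asked whether a canary is an animal, this structure can answer a question it was never literally told, because it operates on the content rather than the look.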

This topic is closed to new replies.
