
Artificial Emotion

Started by
10 comments, last by jefferytitan 12 years, 4 months ago
This is a very broad topic, but I have always been interested in human emotions and how they work and affect us.
My own little theory is that emotions are a shortcut for our logic. We have certain strong memories associated with emotions such as pain, joy, and sadness. Whenever an event relates to one of those memories, we tend to jump straight to that emotion.

There is more than one state from which these emotions are generated.

State 1 Examples (Learned emotional reaction) [Outside force affects you] A -> ß <STRONGEST> Base case?:

1. One day I had to sit around wondering why I was sad over some event. It certainly didn't affect me personally, but after about an hour of back-tracing I concluded that it was very close to a sensitive event I witnessed as a child that affected me negatively.

2. "The belt" from my father whenever I did something wrong as a child taught me that I should a) stop doing whatever it was, and b) fear getting caught and being in trouble again.

State 2 Examples (Self-placed emotional mappings) [You affect some outside force] ß -> A:

1. You're walking to work as another man walks by. Having never met him in your life, you begin to analyze him. He is minding his own business. You simply decide he is a good person (an assumption) and map him to light-weight happiness. You wave at him, all the while smiling. The next day you see him again and he waves back first. Now the mapping is even stronger, and you come to expect to see each other on both of your dull routines to work (just to make the walk more enjoyable).

2. A bird is outside, doing bird-like things. It is not doing anything to you (or even aware of your existence) but you are observing it. For some reason, you decide you love birds now. As time goes by, you begin to feed the bird and eventually grow an emotional bond to it (i.e. you map joy to bird).

State 3 Examples (Emotion by personal substitution) [Outside force affects some outside force; which in turn affects you] A -> A -> ß:

1. A fight breaks out. You do not hear about it until it appears on the news. On the screen, two young males are brawling viciously. Out of curiosity, you wonder how you would feel if you were in a fight (or even that particular fight). Never having been in a fight before, you imagine yourself in one based on this television experience alone. By putting yourself in the place of one of the fighters, you conclude it must be a terrifying but adrenaline-pumping experience.

2. You are at a restaurant. Two talkative people in the booth behind you are yapping about some sort of gossip. They are discussing a "friend" of theirs in a manner that would be quite hurtful if their friend were here. Or would it? You only came to that conclusion by substituting yourself for their friend. Perhaps their real friend would laugh at such a conversation.

State 4 Examples (Self-reflective emotional mappings) [You affect yourself] ß -> ß

(Note: Let Â = (ß -> ß) and Ó = some other person. Then Â -> Ó. For them, this is equivalent to A -> A -> ß; State 3.):



1. You have placed an arbitrary amount of meaning on your success in some sporting event. Needless to say, you did not make par, and you shame yourself. Throughout the night, you call yourself names and think less of yourself (or rather, you maintain the expectations even after they were not met). Through this, you hold self-pity (sadness mapped to yourself).



2. Conversely, you've done well in some sporting event and raise the bar even higher, to the point where you believe you can achieve anything. (Mapping joy to yourself -> pride.)
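The four mapping states above can be written down as data. The sketch below (hypothetical Python; the `State` enum, `EmotionalMapping` class, and all field names are invented for illustration) encodes two of the examples from the text, "the belt" (State 1) and the bird (State 2):

```python
from dataclasses import dataclass
from enum import Enum

class State(Enum):
    """The four ways an emotional mapping can form (A = outside force, B = self)."""
    LEARNED = 1          # A -> B: an outside force affects you
    SELF_PLACED = 2      # B -> A: you map an emotion onto an outside force
    SUBSTITUTION = 3     # A -> A -> B: you imagine yourself in another's place
    SELF_REFLECTIVE = 4  # B -> B: you affect yourself

@dataclass
class EmotionalMapping:
    state: State
    emotion: str     # e.g. "fear", "joy", "sadness"
    trigger: str     # the event or entity the emotion is mapped to
    strength: float  # 0.0 .. 1.0; State 1 mappings tend to be strongest

# Examples from the text: "the belt" (State 1) vs. feeding the bird (State 2).
belt = EmotionalMapping(State.LEARNED, "fear", "getting caught", 0.9)
bird = EmotionalMapping(State.SELF_PLACED, "joy", "the bird", 0.4)
```

The point of the `strength` field is the "<STRONGEST>" note on State 1: learned reactions from outside forces would carry more weight than mappings you place yourself.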



-----------------------------------------------------------------------------------------------



As seen in State 4, these states can be combined to create larger and more complex scenarios.



Another thing I have noticed (and one more achievable via programming) is a Priority Stack. This stack holds various things, but most importantly: goals. Naturally, these goals affect you in some positive way. The best way I can explain what I mean is with this snippet summary from The Hitchhiker's Guide to the Galaxy: Mostly Harmless:


"Colin is a small, round, melon-sized, flying security robot which Ford Prefect enslaves to aid in his escape from the newly re-organized Guide offices. 'Its motion sensors are the usual Sirius Cybernetics garbage.' Ford captures Colin by trapping the robot with his towel and re-wiring the robot's pleasure circuits, inducing a cyber-ecstasy trip. Ford uses Colin's cheerfulness to break into the Guide's corporate accounting software in order to plant a Trojan Horse module that will automatically pay anything billed to his InfiniDim Enterprises credit card."




Basically, when the security robot accomplished one of its main tasks, a circuit would be completed to the robot's brain and simulate joy. This would egg the robot on to continue doing its job correctly (see also, for Portal fans, the "itch" in Portal 2...).





Note that these goals are what drive you to be you. These goals are affected by the states listed above, and those states are generated by the very first state, which gives you a virtual personality, opinions toward things, and initial weighted values.




Initial Experiences -> Generate assumptions (weighted values) -> Affect goals (you want to be happy) -> Your drive is to achieve said goals (be happy), with the topmost being the most important goal.



So each piece of data on the Priority Stack would have some happiness value attached to it, as well as a measure of how important the priority is. The item of highest importance would always be on top. For instance:



Your trash has become a horrid tower of smell. This is triggering your gag reflex and therefore making you uncomfortable. Being uncomfortable reduces happiness (assuming we've represented happiness as a quantity). Originally, your top-most priority (the thing that makes you the happiest) was playing your game console. However, the amount of happiness given by the console would no longer equal or exceed the happiness you would gain by taking out the trash.



Previous Stack:


1. Play console - +80


2. Other variables - +20



Stack:


1. Take out trash - (adds comfort) +60


2. Play console - +20


3. Other variables - +20
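The reshuffle above can be sketched as a minimal priority stack. This is a hypothetical Python sketch (the `PriorityStack` class is invented; the goal names and payoff numbers mirror the example above), not an implementation from the thread:

```python
# A toy "Priority Stack": each goal carries a happiness payoff, and the
# goal with the highest payoff stays on top.

class PriorityStack:
    def __init__(self):
        self.goals = []  # list of (name, payoff) pairs, highest payoff first

    def push(self, name, payoff):
        """Add a goal, or update its payoff, then keep the stack sorted."""
        self.goals = [g for g in self.goals if g[0] != name]
        self.goals.append((name, payoff))
        self.goals.sort(key=lambda g: g[1], reverse=True)

    def top(self):
        return self.goals[0] if self.goals else None

    def complete_top(self):
        """Accomplishing the top goal yields its happiness payoff."""
        _, payoff = self.goals.pop(0)
        return payoff

stack = PriorityStack()
stack.push("Play console", 80)
stack.push("Other variables", 20)
# The smelly trash makes taking it out rewarding and the console less so:
stack.push("Take out trash", 60)
stack.push("Play console", 20)
print(stack.top())  # ('Take out trash', 60)
```

Completing the top goal via `complete_top` pays out its happiness value, after which the next most rewarding goal surfaces.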



Your girlfriend calls and wants to come over sometime today, but won't tell you when because "it's a surprise". Suddenly, priorities are altered. You want to appeal to your girlfriend because her presence affects you positively. You substitute in how she might react if she saw the mess (her priorities over yours; emotion by substitution, A -> A -> ß: in this case, how the trash affects her affects you). Also, your house is a mess.



Stack:


1. Take out trash (immediately) - +90


2. Clean house - +5


3. Play console - +4


4. Other variables - +1



You've taken out the trash. This makes you happy. But there's still more to do.



Stack:


1. Clean house - +70


2. Play console - +20


3. Other variables - +10



You clean the house.



Stack:


1. Play console - +80


2. Other variables - +20



Girlfriend shows up.



Stack:


1. Appease your girlfriend's whims - +90


2. Play console - +5


3. Other variables - +5



Later. She finally goes home but it's late. Work is tomorrow. You'll need sleep.



Stack:


1. Sleep - +80


2. Other variables - +10



Note that "Play console" was finally knocked off the stack for the day. It was never accomplished, and so -10 to happiness, but the hit isn't large because you know you can play tomorrow after work.



--------------------------------------------------------------------------------------------------------------



The point here was to simulate the Priority Stack and how it affected the overall day (and mood).



Finally, my discussion and points are coming to a close. The only things I can name off the top of my head that operate on any level like this are The Sims and, on a very lightweight scale, those cheap Tamagotchis.



--------------------------------------------------------------------------------------------------------------



Where is the future of humanoid logic and emotion headed in the world of computers and programming? The only two games I mentioned that have anything like this are Tamagotchis and The Sims. Is there an A.I. field for this? Any programming attempts?



Broad topic, I know, but this sort of thing has always fascinated me. If you read this all the way through, thanks. I hope to get some good feedback!



*NOTE: I would also like to mention laughter. What is laughter? It's a form of happiness, but achieved through what exactly? I've narrowed it down to being introduced to some sort of truth (regardless of its actual validity), yet you don't laugh when someone hands you a spoon and says "spoon". And there are other forms, such as silliness. How do you explain that?

I'm that imaginary number in the parabola of life.
Additive utility theory, à la The Sims and a variety of other autonomous agent-based games/simulations. It all depends on what axes you are defining. It's not really a new concept; it's just that many of our game designs don't require this sort of expressiveness.
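For readers unfamiliar with the term, additive utility scoring can be sketched roughly like this. All the axes, weights, and numbers below are invented for illustration; they are not from The Sims or from any particular system:

```python
# Each candidate action is scored as a weighted sum over need axes; the
# agent picks the action with the highest total utility.
def utility(action, needs, weights):
    return sum(weights[axis] * action.get(axis, 0.0) * needs.get(axis, 0.0)
               for axis in weights)

# Current need levels (0 = satisfied, 1 = desperate) and per-axis weights.
needs   = {"hunger": 0.8, "fun": 0.3, "hygiene": 0.6}
weights = {"hunger": 1.0, "fun": 0.7, "hygiene": 0.9}

# How much each action satisfies each axis.
actions = {
    "eat":    {"hunger": 1.0},
    "play":   {"fun": 1.0},
    "shower": {"hygiene": 1.0},
}

best = max(actions, key=lambda a: utility(actions[a], needs, weights))
print(best)  # eat
```

The "axes you are defining" remark above is the whole design question: which needs exist, how they decay over time, and how they are weighted against each other.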

I wish they would... and that's why I speak on the subject at GDC.

http://gameai.com/blog/?p=92
http://gameai.com/blog/?p=86

Dave Mark - President and Lead Designer of Intrinsic Algorithm LLC
Professional consultant on game AI, mathematical modeling, simulation modeling
Co-founder and 10 year advisor of the GDC AI Summit
Author of the book, Behavioral Mathematics for Game AI
Blogs I write:
IA News - What's happening at IA | IA on AI - AI news and notes | Post-Play'em - Observations on AI of games I play

"Reducing the world to mathematical equations!"

That's fantastic. I will keep my eye out for this. Also, I visited gameai's sub-pages and bought a few books they recommended. The one that caught my eye was "Behavioral Mathematics for Game AI". I know that this won't dive into emotion representation, but I felt it would give me a good step into the AI (for Games) field and expand from there.

If anyone else has any information on the topic at hand, I'd greatly appreciate it.
I'm that imaginary number in the parabola of life.
Regarding laughter: the best (and most recent) theory I have heard is that laughter is a way of signalling to others that there is no danger despite a potentially dangerous situation.

For example: a joke is only a good joke when the punch line is unexpected. The joke teller was deceptive, normally a bad thing, but you laugh because it's not actually threatening. You laugh when you see someone get hurt (football to the nuts, etc.), but only so long as the injury isn't serious.



I'd agree completely if it weren't for the fact that some find humor in other people's pain. There is definitely a threat involved, e.g. name-calling, physical harm (to any extent), negative opinions toward something, etc. For example, harsh racist jokes found on the internet.
I'm that imaginary number in the parabola of life.
I think it's being coined as Benign Violation Theory... I've seen it put as such: Grandpa is harmless... erections can be threatening/disturbing... but Grandpa with an erection is funny... Though, not all benign violations work... The key is the violation has to be negate-able... if you can't negate it with something and make it benign, odds are it can never be funny...

[note to self... too many "..."... suspect too much coffee...]

[quote]That's fantastic. I will keep my eye out for this.[/quote]

If you are going to the GDC AI Summit in a month, it will be hard to miss.

[quote]Also, I visited gameai's sub-pages and bought a few books they recommended. The one that caught my eye was "Behavioral Mathematics for Game AI".[/quote]

Coincidentally, the one I wrote.

[quote]I know that this won't dive into emotion representation,[/quote]

It does.

[quote]...but I felt it would give me a good step into the AI (for Games) field and expand from there.[/quote]

It will.

Dave Mark - President and Lead Designer of Intrinsic Algorithm LLC
Professional consultant on game AI, mathematical modeling, simulation modeling
Co-founder and 10 year advisor of the GDC AI Summit
Author of the book, Behavioral Mathematics for Game AI
Blogs I write:
IA News - What's happening at IA | IA on AI - AI news and notes | Post-Play'em - Observations on AI of games I play

"Reducing the world to mathematical equations!"



Heh. Well, what do you know. I expect to get it today or by Saturday. I'm looking forward to it, especially if it goes into those representations.

@Net Gnome: I'll take a look into Benign Violation Theory.

I'm that imaginary number in the parabola of life.
A good topic to explore for believability. Regarding laughter, one theory I heard was that it's misapplied joy-of-learning, e.g. someone sets up a situation where you realise your mental picture is horribly wrong, but they resolve it and you "learn" the solution before the discomfort gets to you, and the consequence of the wrong idea is minimal to you. This is different from being given a Sudoku puzzle and then instantly being given the solution, because you can't grasp the "problem" quickly (or the solution), and there's little emotional hook to it.
http://www.gamedev.net/topic/608309-theory-for-advanced-ai-in-games/

Funny how this guy writes the same thing and it is received 100% differently. Shows something about the ignorant baboons on this forum.

This topic is closed to new replies.
