Tuesday, February 12, 2008

The Ultimatum Game

Last night I listened to the most recent episode of TED while I was on my exercise bike.

TED is one of my favorite web sites to visit. It contains video of presentations given at an annual conference in California where, at least according to their own promotions, 1000 of the best and brightest minds get together to discuss issues of technology, entertainment, and design. If anybody wants an example of “hope” without a religious context, this is the place to go – to see what real people are doing to make the world a better place.

The current episode had to do with issues of cooperation. Specifically, Howard Rheingold argues that the internet is making possible a whole new culture of cooperation, which we can see exhibited in phenomena such as “open source” coding, Wikipedia, and other open, cooperative efforts.

On the issue of cooperation, Rheingold brings up a couple of famous problems in game theory – problems that are supposed to highlight some paradoxes of rationality, where a person who performs the ‘selfish’ act in a contrived situation ends up creating a situation in which he (and everybody else) is worse off.

He discussed the famous Prisoner’s Dilemma, of course, which I have discussed in the past.

He also discussed another game, an ultimatum game, which deserves our attention.

Before I describe the game, I would like to note that I listen to these types of cases through the filter of desire utilitarianism. A lot of these types of ‘puzzles’, I argue, only appear to be puzzles because people look primarily at actions themselves, and with that narrow perspective they cannot understand why the situation works out the way it does. If one looks at the issue from the perspective of desires, rather than actions, what appears to be a puzzle, actually makes sense.

The ultimatum game works like this: You take two people who do not know each other and you put them in separate rooms. You then go to one person and say, “I have a hundred dollars. I am going to give it to you, but you have to split it with the guy in the other room. I want you to tell me how much of this $100 you are willing to offer that other person. If he accepts the offer, then he will get what you offer and you get the rest. If he refuses, then neither of you gets any money.”

According to standard assumptions of rationality, the first person should only need to offer $1 to the person in the second room. The person in the second room has a simple decision, whether to take $1 or to refuse it and get $0. Rationality seems to dictate that he take the $1.

However, in laboratory experiments, people who learn that the first person decided on a $99-to-$1 split often refuse the $1. They seek to ‘punish’ the first person by depriving that person of $99, even at a cost of $1 to themselves.

Furthermore, according to these experiments, people seem to know this, because the people in the first room often offer something closer to a 50-50 split, rather than thinking, “The person in the other room is rational, and will clearly choose to have $1 over having $0.”
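To make the payoffs concrete, here is a minimal sketch of a single round of the game. The names and the hard-coded amounts below are just my own illustration of the setup described above, not anything taken from Rheingold’s talk or from a particular experiment.

```python
# A minimal sketch of one round of the ultimatum game described above.
# The response rule and the dollar amounts are illustrative assumptions,
# not data from any particular experiment.

POT = 100  # the $100 the experimenter puts on the table

def play_round(offer_to_responder, responder_accepts):
    """Return (proposer_payoff, responder_payoff) for a single round."""
    if responder_accepts:
        return POT - offer_to_responder, offer_to_responder
    return 0, 0  # a refusal leaves both players with nothing

# The textbook "rational" responder takes any positive amount:
print(play_round(1, responder_accepts=True))    # (99, 1)

# The behavior often seen in the lab: a lopsided offer gets refused,
# costing the responder $1 in order to deny the proposer $99.
print(play_round(1, responder_accepts=False))   # (0, 0)
```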

Apparently, this is a puzzle.

However, I do not see the puzzle.

Let us take the principle that people act so as to fulfill their desires given their beliefs. Let us also propose that people have reason to promote in others those desires that tend to fulfill other desires, and to inhibit desires that tend to thwart other desires. A third proposition that I want to throw into this is that these cases cannot, in fact, be separated from the outside world. The subjects come into the experiment with desires molded in the outside world, and they will carry those same desires back into the outside world.

So, I’m the second person in this experiment. Would it make sense for me to refuse the $1?

Of course it would. I have reason to promote those desires that tend to fulfill the desires of others, and inhibit those desires that tend to thwart the desires of others. One of the ways that I do so is through social conditioning. If I reward somebody who makes such an uneven split of money, he will take the story of this encounter into the outside world. He will teach people the benefits of this form of selfishness, and this will generate a culture in which there is even more selfishness – where I am even more likely to suffer at the hands of people willing to take more for themselves than they are willing to give to others.

Whereas, if I refuse this $1, then the other subject is going to take that story into the outside world. He is going to be a living example of a lesson that, “If you want something for yourself, you had better be ready to share it with others.” This will promote an aversion to selfishness and a desire for sharing, which will better fulfill my desires in the outside world.

In fact, other people in the world have reason to condemn me for taking the dollar, because in doing so I have promoted selfishness and inhibited sharing in the real world. The adverse effects of my action give them good reason to say to me that my relationships with them are at risk, because I did not have the good sense to promote sharing and inhibit selfishness.

There is no rationality in accepting the $1.

Okay, what if we could guarantee that everybody forgets about the event once the game is over, so that there is no story to take into the outside world? Then none of these adverse consequences would result, which cancels out this reason for refusing the $1.

I still have a reason to refuse to take the $1 . . . because I simply do not like the fact that the other person is offering such an unfair deal. Through social conditioning, I should have acquired an aversion to unfairness such that, even though I value $1, I value a fair exchange even more. I simply do not want an unfair exchange, and am willing to pay $1 for the sake of avoiding a result in which another person benefits from selfishness.

This is true in the same way that, if somebody were to offer me $1 and say that, to accept the money, I would have to endure a series of painful electric shocks, I would have reason to refuse the $1. If they offer me $1, and require that I eat food that I do not like, I have reason to say, “Keep the money.” If they offer me $1, along with the opportunity to reward selfishness, it is not irrational for me to say, “I hate unfair deals even more than I hate that food you offered me last time. You can still keep the money.”

It does not matter where my hatred of unfair deals comes from. Once I have that desire, it becomes a part of who I am and one of my reasons for action. It does not matter where my hatred for a certain type of food comes from; once I have that distaste for the food, that is enough to give me a reason to avoid eating it. I do not have to make up a story about how it might thwart my future desires to have news of my eating that food reach the outside world. I don’t like it – and that’s all I need to say on the matter.
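To put that point crudely in numbers: grant the responder a genuine aversion to unfairness alongside his desire for money, and refusing the $1 is simply the act that best fulfills the desires he actually has. The sketch below is my own illustration; the ‘aversion weight’ is a made-up parameter standing in for however strong the acquired aversion happens to be.

```python
# An illustrative sketch of a responder who values fairness as well as money.
# The aversion_weight parameter is a made-up number, not a measured quantity.

POT = 100  # total dollars to be split

def responder_utility(offer, aversion_weight=0.5):
    """Money received, minus a penalty proportional to how far the
    proposer's share exceeds the responder's share."""
    unfairness = max(0, (POT - offer) - offer)
    return offer - aversion_weight * unfairness

# Refusing any offer yields 0. With this aversion in place, accepting
# a $1 offer is worse than refusing it, while a fair split is welcome:
print(responder_utility(1))    # 1 - 0.5 * 98 = -48.0  (worse than refusing)
print(responder_utility(50))   # 50 - 0.5 * 0  =  50.0  (gladly accepted)
```

Nothing hangs on the particular numbers; the point is only that once the aversion exists, the refusal is the rational act given the desires the agent actually has.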

The value of creating an actual aversion to unfair trade is that it will affect a person’s behavior even when they can act in secret. It prevents people from engaging in unfair trade even when they can get away with it – even when nobody knows about it.

The same applies to creating aversions to killing innocent people, rape, theft, and the violent destruction of property. If people have aversions to these things, then they have a reason not to perform these types of acts, even in situations where they could get away with it, and no story of their misdeed would ever reach the outside world.

The rationalist is puzzled by the fact that somebody will not take money even when he can get away with it – when he is absolutely certain that nobody is looking over his shoulder. Yet, for some reason the rationalist is not puzzled by the fact that an agent will not eat food that he doesn’t like, even when he can snitch some of that food without being caught. If he has a distaste for that type of food, it is not irrational to refuse to eat it. If he has a distaste for taking property that does not belong to him, then it is not irrational for him to refuse to take it.

When we add an examination of desires to our view of these particular ‘puzzles’, a lot of the puzzle just vanishes. Looking at these puzzles without including the perspective of desires is like examining planet Earth but ignoring the sun, and then asking, “Where does all of the energy for all of this activity come from? The only possible source of energy we can see (given our artificially narrow perspective) is the Earth’s core, but it is hardly enough to explain all of this activity.”

Indeed, it is not. We need to look away from the Earth and towards the sun to understand where all of this energy comes from. In the subject of morality and rationality, we need to look away from actions and towards desires – and, in particular, at the rationality of promoting and inhibiting certain desires – to understand where much of this behavior is coming from.

This subject of evaluating desires, determining which to promote and which to inhibit, is what desire utilitarianism is all about.

4 comments:

Anonymous said...

Hey, Alonzo, EXCELLENT post!

I'm reminded of T.H. Huxley's quote after reading Darwin's "Origin of Species".

"How extremely stupid [for me] not to have thought of that!"

Thank you

Anonymous said...

I am totally with you on this. I've never understood why it was considered irrational to reject the offer of $1.00 in this game. If you're only thinking short-term and are only thinking selfishly, then I guess it's irrational... but if you're thinking in terms of enforcing the social contract and wanting to reward/ encourage fair behavior and punish/ discourage unfair behavior, then it makes complete and utter sense.

Anonymous said...

I'm new to your blog, but got linked from the Carnival of the Godless.
Informative post! I wish I had time to take Game Theory... D:

Kristopher said...

this was a really nice post. thanks