This week, I am devoting my efforts to catching up on questions raised through email and comments. One issue that came up in the comments was the idea that “game theory” can tell us something about morality. Specifically, some people hold that the principles of morality can be understood in terms of the principles of a strategy that one would use in a particular type of game known commonly as a ‘prisoner’s dilemma.’
Game theory is a mathematical framework that, in some respects, is said to provide a meaningful account of the relationship between morality and rationality. Rationality says, “Always do what is in your own best interests.” Morality says (or is often interpreted as saying something like), “Do what is in the best interests of others.” Game theory suggests some interesting ways in which these two apparently conflicting goals can merge.
The most common way of presenting game theory is to use the idea of two prisoners – you and somebody else whom you do not know. You are told the following:
If you confess to being a spy and agree to testify against the other, and he does not, then we will imprison you for 1 year, and execute the other. If he agrees to testify against you, and you do not confess, then we will execute you and imprison him for 1 year. If both of you confess, you will both get 10 years in prison. If neither confesses, you will both be imprisoned for 3 years.
If the other prisoner confesses, you are better off confessing – it is the difference between execution and 10 years in prison. If he does not confess, you are still better off confessing – it is the difference between 1 year in prison and 3 years. However, he has the same options you do. If he reasons the same way, he will confess, and you are both doomed to 10 years in prison. If he refuses to confess, and you also refuse, you can get away with 3 years. Clearly, 3 years is better than 10 years. Yet, this requires that both of you refuse to confess, when neither of you (taken individually) has reason to do so.
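To see the dominance reasoning laid out explicitly, here is a minimal Python sketch of the payoffs as the story describes them. The encoding is my own, as is the use of an arbitrarily large cost to stand in for execution:

```python
EXECUTION = float("inf")  # execution modeled as an arbitrarily large cost

# Years of prison you lose for each (your move, his move) pair -- lower is better.
cost_to_you = {
    ("confess", "confess"): 10,
    ("confess", "silent"):  1,
    ("silent",  "confess"): EXECUTION,
    ("silent",  "silent"):  3,
}

# Whatever the other prisoner does, confessing costs you less:
for his_move in ("confess", "silent"):
    best = min(("confess", "silent"), key=lambda m: cost_to_you[(m, his_move)])
    print(f"If he plays {his_move!r}, your best reply is {best!r}")

# Both lines print 'confess': defection dominates, even though mutual
# silence (3 years each) beats mutual confession (10 years each).
```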
In trying to figure out how to handle this prisoner’s dilemma, some researchers (most famously, Robert Axelrod) made a game out of it. In this game, people submitted strategies to use in iterated prisoners’ dilemmas – cases where the same people were repeatedly thrown into these types of situations.
One strategy proved particularly stable – a strategy called ‘tit for tat’. Its rules are to cooperate on the first turn and, in each subsequent turn, do whatever your opponent did in the previous turn. Players facing this strategy quickly learn the benefits of cooperation, and they cooperate.
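The rule is simple enough to write down as code. Here is a minimal sketch of an iterated game, using the standard textbook payoff values (3 for mutual cooperation, 5 for a successful defection, 1 for mutual defection, 0 for being the sucker); the player names and round count are my own choices:

```python
import random

def tit_for_tat(my_history, their_history):
    # Cooperate first; afterwards, copy the opponent's last move.
    return "C" if not their_history else their_history[-1]

def random_player(my_history, their_history):
    return random.choice(["C", "D"])

# (my payoff, their payoff) for each pair of moves -- higher is better.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def play(strategy_a, strategy_b, rounds=100):
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        a = strategy_a(hist_a, hist_b)
        b = strategy_b(hist_b, hist_a)
        pa, pb = PAYOFF[(a, b)]
        score_a += pa
        score_b += pb
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # (300, 300): locked-in cooperation
print(play(tit_for_tat, random_player))  # tit-for-tat answers each defection in kind
```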
Observers noticed certain similarities between these rules and a moral system – namely, the idea of ‘punishing’ somebody who ‘defects’ as a way of encouraging a system of mutual cooperation. Since then, some researchers have thought that this holds the key to morality.
Of course, these iterated prisoners’ dilemma games do not have death as one of the payoffs – since that would terminate the game at the first defection. They make sure that the payoff for cooperating when the other defects is the worst outcome, but they also insist that it is not fatal.
As I see it, the fact that they have to impose this arbitrary limit should be seen as a cause for concern. In fact, the arbitrary and unrealistic limits that game theorists have to put on their games are only one of the problems that I find with the theory.
Altering the Payoffs
First, game theory takes all of the payoffs as fixed. It does not even ask the question, “What should we do if we have the capacity to alter the payoffs before we even enter into this type of situation?”
For example, what if, before you and I even enter into this type of situation, we are able to alter each other’s desires such that both of us would rather die than contribute to the death of another person? Now, when we find ourselves in this type of situation, the possibility that I might contribute to your death is the worst possible option. I can best avoid that option by not confessing. The same is true of you. We both refuse to confess, and thus end up reaping the benefits of cooperation.
I am not talking about us making a promise not to confess if we should find ourselves in this type of situation. A promise, by itself, would not alter the results. However, if we back up the promise with an aversion to breaking promises – such that I would rather die than break a promise that results in your death (and vice versa) – then this would avoid the problem.
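Here is the earlier sketch with that one change made: each prisoner now bears an aversion cost for contributing to the other’s death that is, by assumption, worse than dying himself. The numbers are purely illustrative:

```python
EXECUTION = 1_000   # cost of being executed
AVERSION  = 2_000   # cost of causing the other's death -- by assumption,
                    # worse than being executed yourself

cost_to_you = {
    ("confess", "confess"): 10,
    ("confess", "silent"):  1 + AVERSION,  # your testimony gets him killed
    ("silent",  "confess"): EXECUTION,
    ("silent",  "silent"):  3,
}

for his_move in ("confess", "silent"):
    best = min(("confess", "silent"), key=lambda m: cost_to_you[(m, his_move)])
    print(f"If he plays {his_move!r}, your best reply is {best!r}")

# Confessing no longer dominates: against a silent partner, your best reply
# is now silence, so mutual silence is stable. And since the worst outcome
# you risk by staying silent (execution) is better, by your own lights, than
# the worst outcome you risk by confessing (causing his death), staying
# silent is also how you avoid what you most want to avoid.
```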
Desire utilitarianism looks at the prisoners’ dilemma and says that, if a community is facing these types of confrontations on a regular basis, then the best thing they can do is to promote a desire for cooperation and an aversion to defection. This raises the value of the outcomes of cooperation – changing the payoffs – so that true prisoners’ dilemmas become more and more rare.
What about the pattern we find in the tit-for-tat strategy – of following up cooperation with cooperation and defection with defection?
Please note that reward and punishment are not the same as deciding whether to cooperate or defect the next time that a similar situation comes up. A reward is a special compensation for what happened last time – a punishment is a special penalty. We use reward and punishment as ways of promoting those desires that will make prisoners’ dilemmas less frequent.
Second, one of the assumptions built into these iterated prisoners’ dilemmas – these games where the tit-for-tat strategy turns out to be so effective – is that the payoff is always the same. However, in reality, the payoffs are not always the same. Some conflicts are more important than others. If we relax the rules of the game to capture this fact – if we vary the payoffs from one game to the next – I can easily come up with a strategy that will defeat tit-for-tat.
My strategy would be this: Play the tit-for-tat strategy, except when the payoff for defection is extraordinarily high; then defect. Using this strategy, I could sucker the tit-for-tat player into a habit of cooperation until the stakes are particularly high, then profit from a well-timed defection. The tit-for-tat strategist will then defect on the next turn, and we will enter into a pattern of oscillating defections. However, if the payoff on the important turn was high enough, then my gains would exceed all future losses.
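A sketch of that exploit, reusing the iterated-game code from above; the stake sizes and the 100x threshold are my own illustrative numbers:

```python
def tit_for_tat(my_hist, their_hist, stake):
    return "C" if not their_hist else their_hist[-1]

def opportunist(my_hist, their_hist, stake):
    # Play tit-for-tat, except defect whenever the stakes are huge.
    if stake >= 100:
        return "D"
    return "C" if not their_hist else their_hist[-1]

BASE = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
        ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def play(strategy_a, strategy_b, stakes):
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for stake in stakes:
        a = strategy_a(hist_a, hist_b, stake)
        b = strategy_b(hist_b, hist_a, stake)
        pa, pb = BASE[(a, b)]
        score_a += pa * stake   # this round's payoff scales with its stake
        score_b += pb * stake
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

# Ninety ordinary rounds, one huge round, nine ordinary rounds after.
stakes = [1] * 90 + [100] + [1] * 9
print(play(opportunist, tit_for_tat, stakes))  # (790, 295) with these numbers
```

The single well-timed defection nets the opportunist 500 points – far more than the oscillating retaliations that follow can take back.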
My strategy would be particularly useful if, at the time of the big payoff, I arrange to kill off my tit-for-tat opponent so that the game ends on that turn. As I said, game theorists do not allow this option. Yet, in reality, this option is often available.
Third, game theory does not consider the possibility of anonymous defection. The cost of defection in game theory comes from the fact that, if I defect, my opponent always finds out about it. My opponent then defects against me on the next turn. But what if (as is often the case) I can defect without anybody finding out? Say I have found a wallet, and can take the money without anybody ever knowing. How does game theory handle this type of situation?
Game theory would seem to suggest that I take the money and run. In fact, it says that I should commit any crime where the chance of getting away with it, and the payoff, make it worth the risk. It is not just that this would be the wise thing for me to do. It would be the moral thing for me to do. After all, the game theorist is telling us that what game theory says is wise, and what is moral, are the same thing.
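The reasoning behind “take the money and run” is nothing more than an expected-value calculation. A sketch with hypothetical numbers:

```python
gain     = 500.0    # money in the found wallet
penalty  = 2_000.0  # cost if caught: fine, jail, lost reputation
p_caught = 0.01     # chance that anybody ever finds out

expected_value = (1 - p_caught) * gain - p_caught * penalty
print(expected_value)  # 475.0 -- positive, so pure payoff-maximizing
                       # "rationality" says: keep the money
```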
This means that anonymous defection is perfectly moral.
Fourth, game theory presumes that the participants have approximately equal power – that one cannot coerce the choices of the other. Let’s introduce a difference in power, such that Player 1 can say to Player 2, “You had better make the cooperative choice every turn or I will force you to suffer the consequences.” The subordinate player lacks the ability to make the same threat.
When this happens, we are no longer in a prisoner’s dilemma. We are in a situation where the subordinate player is truly better off choosing the cooperative option on each turn, and the dominant player is better off choosing the defect option. The problem with game theory – or, more precisely, with the claim that game theory can give us some sort of morality – is that it says that, under these circumstances, the dominant player would have an obligation to exploit the subordinate player if it is profitable to do so.
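A sketch of how the threat changes the game; the threat size and the payoff numbers are my own illustrative choices:

```python
BASE = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
        ("D", "C"): (5, 0), ("D", "D"): (1, 1)}
THREAT = 10  # punishment only the dominant player can inflict

def payoff(dom_move, sub_move):
    dom, sub = BASE[(dom_move, sub_move)]
    if sub_move == "D":
        sub -= THREAT  # the subordinate "suffers the consequences"
    return dom, sub

for dom_move in ("C", "D"):
    best = max(("C", "D"), key=lambda m: payoff(dom_move, m)[1])
    print(f"If the dominant player plays {dom_move!r}, "
          f"the subordinate's best reply is {best!r}")

# The subordinate's best reply is 'C' either way; and against a guaranteed
# cooperator, the dominant player does best by always defecting.
```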
Ultimately, game theory will have something important to say about morality. Game theory provides formulae for maximizing desire fulfillment in certain types of circumstances. As such, it will have implications for what it is good for us to desire.
However, it is one input among many. The idea that morality is nothing more than the rules of game theory has no merit.
Game theory rests on the fundamental assumption that if an agent can actually get ahead by doing great harm to other people, then it is right, and perhaps even morally obligatory, for him to do so. Some game theorists seem to suggest that such a situation is not possible. Even if that were true, it would still be the case that game theory says, in principle, that if you should find yourself in such a situation, then by all means inflict as much harm as necessary to collect the reward.
This, alone, gives us an irreparable split between morality and game theory.
Unfortunately, as the paragraphs above point out, the assumptions behind game-theory morality not only say that a person has a moral right, or even a duty, to do great harm to others when it benefits him to do so; they also concede that such situations arise. There are several likely scenarios that fit this description – scenarios where unusually great benefit, anonymity, or inequality in power can allow an agent to benefit in spite of, and perhaps because of, the harm he does to others.
Whatever morality happens to be, it is not going to be found in game theory.