Wednesday, May 23, 2007

Game Theory and Morals

This week, I am devoting my efforts to catching up on questions raised through email and comments. One issue that came up in comments was the idea that “game theory” can tell us something about morality. Specifically, some people hold that the principles of morality can be understood in terms of the principles of a strategy that one would use in a particular type of game known commonly as a ‘prisoner’s dilemma.’

Game theory is a complex mathematical framework that, in some respects, is said to provide a meaningful account of the relationship between morality and rationality. Rationality says, “Always do what is in your own best interests.” Morality says (or is often interpreted as saying something like), “Do what is in the best interests of others.” Game theory suggests some interesting ways in which these two apparently conflicting goals can merge.

The most common way of presenting the prisoner’s dilemma is to use the idea of two prisoners – you and somebody else whom you do not know. You are told the following:

If you confess to being a spy and agree to testify against the other, and he does not, then we will imprison you for 1 year, and execute the other. If he agrees to testify against you, and you do not confess, then we will execute you and free him in a year. If both of you confess, you will both get 10 years in prison. If neither confesses, you will both be imprisoned for 3 years.

If the other prisoner confesses, you are better off confessing – it is the difference between execution and 10 years in prison. If he does not confess, you are still better off confessing – it is the difference between 1 year in prison and 3 years. However, he has the same options you do. If he reasons the same way, he will confess, and you are both doomed to 10 years in prison. If he refuses to confess, and you also refuse, you can get away with 3 years. Clearly, 3 years is better than 10 years. Yet, it requires that both of you refuse to confess, when neither of you (taken individually) has reason to do so.
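To make this reasoning concrete, here is a minimal sketch (in Python) of the dominance argument. The numbers are stand-ins for the outcomes described above – years in prison, with execution represented by an arbitrarily large cost of 100 – and are purely illustrative.

```python
# Stand-in costs for the story above (lower is better); 100 is an
# arbitrary stand-in for execution.
cost_to_you = {
    ("confess", "silent"):  1,    # you testify, he does not
    ("silent",  "confess"): 100,  # stand-in for execution
    ("confess", "confess"): 10,   # both confess
    ("silent",  "silent"):  3,    # neither confesses
}

for other in ("confess", "silent"):
    best = min(("confess", "silent"), key=lambda you: cost_to_you[(you, other)])
    print(f"If the other prisoner chooses {other!r}, your best reply is {best!r}")

# Either way, confessing is your best reply -- yet mutual silence (3 years each)
# beats mutual confession (10 years each). That is the dilemma.
```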

In trying to figure out how to handle this prisoner’s dilemma, some researchers made a game out of it. In this game, people submitted strategies to use in repeated prisoners’ dilemmas – cases where people were repeatedly thrown into these types of situations.

One strategy proved particularly stable – a strategy called ‘tit for tat’. Its rules were to cooperate on the first turn and, in each subsequent turn, to do what your opponent did in the previous turn. Playing against it, participants quickly learn the benefits of cooperation – and they cooperate.
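For those who want to see the mechanics, here is a minimal sketch of tit-for-tat in a repeated game. The point values (3 for mutual cooperation, 5 for exploiting a cooperator, 1 for mutual defection, 0 for being exploited) are conventional illustrative numbers of my own choosing, not anything dictated by the tournaments themselves.

```python
def tit_for_tat(opponent_history):
    """Cooperate on the first turn; afterwards copy the opponent's last move."""
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return "D"

# payoff[(my_move, their_move)] -> points to me (higher is better)
payoff = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def play(strategy_a, strategy_b, rounds=20):
    hist_a, hist_b = [], []   # each strategy sees only the opponent's past moves
    score_a = score_b = 0
    for _ in range(rounds):
        a, b = strategy_a(hist_b), strategy_b(hist_a)
        score_a += payoff[(a, b)]
        score_b += payoff[(b, a)]
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # (60, 60): steady mutual cooperation
print(play(tit_for_tat, always_defect))  # (19, 24): defection gains little over 20 rounds
```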

Participants noticed certain similarities between these rules and a moral system – namely, the idea of ‘punishing’ somebody who ‘defects’ as a way of encouraging a system of mutual cooperation. Since then, some researchers have thought that this holds the key to morality.

Of course, these reiterated prisoners’ dilemma games do not have death as one of the payoffs – since that would terminate the game at the first defection. They make sure that the payoff for cooperating when the other defects is the worst outcome, but also insist that it is not fatal.

As I see it, the fact that they have to impose this arbitrary limit should be seen as a cause for concern. In fact, the arbitrary and unrealistic limits that game theorists have to put on their games are only one of the problems that I find with the theory.

Altering the Payoffs

First, game theory takes all of the payoffs as fixed. It does not even ask the question, “What should we do if we have the capacity to alter the payoffs before we even enter into this type of situation?”

For example, what if, before you and I even enter into this type of situation, we are able to alter each other’s desires such that both of us would rather die than contribute to the death of another person? Now, when we find ourselves in this type of situation, the possibility that I might contribute to your death is the worst possible option. I can best avoid that option by not confessing. The same is true of you. We both refuse to confess, and thus end up reaping the benefits of cooperation.

I am not talking about us making a promise not to confess if we should find ourselves in this type of situation. A promise, by itself, would not alter the results. However, if we back up the promise with an aversion to breaking promises – such that I would rather die than break a promise that results in your death (and vice versa) – then this would avoid the problem.
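Here is a minimal sketch of how such an aversion changes the situation, reusing the stand-in numbers from the earlier snippet plus an illustrative ‘aversion’ penalty that I have simply made up. The point is only that, once contributing to the other’s death feels worse than dying, refusing to confess is the choice that avoids the worst possible outcome.

```python
cost_to_you = {
    ("confess", "silent"):  1,
    ("silent",  "confess"): 100,   # stand-in for execution
    ("confess", "confess"): 10,
    ("silent",  "silent"):  3,
}

AVERSION = 150  # illustrative internal cost of contributing to the other's death

def felt_cost(you, other):
    """Prison term plus the internal cost of getting the other person killed."""
    cost = cost_to_you[(you, other)]
    if you == "confess" and other == "silent":
        cost += AVERSION           # your testimony gets him executed
    return cost

# With the altered desires, the worst thing that can follow from confessing
# (contributing to the other's death) is worse than the worst thing that can
# follow from staying silent (being executed yourself).
for you in ("confess", "silent"):
    worst = max(felt_cost(you, other) for other in ("confess", "silent"))
    print(f"Worst case if you choose {you!r}: {worst}")

# Worst case for confessing: 151. Worst case for staying silent: 100.
# We both reason this way, both stay silent, and reap the benefits of cooperation.
```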

Desire utilitarianism looks at the prisoners’ dilemma and says that, if a community is facing these types of confrontations on a regular basis, then the best thing they can do is to promote a desire for cooperation and an aversion to defection. This raises the value of the outcomes of cooperation – changing the payoffs – so that true prisoners’ dilemmas become more and more rare.

What about the pattern we find in the tit-for-tat strategy – of following up cooperation with cooperation and defection with defection?

Please note that reward and punishment are not the same as deciding whether to cooperate or defect the next time a similar situation comes up. A reward is a special compensation for what happened last time – a punishment is a special cost imposed for what happened last time. We use reward and punishment as a way of promoting those desires that will make prisoners’ dilemmas less frequent.

Uneven Payoffs

Second, one of the assumptions used in these reiterated prisoners’ dilemmas – the games in which the tit-for-tat strategy turns out to be so effective – is that the payoffs are always the same. However, in reality, the payoffs are not always the same. Some conflicts are more important than others. If we relax the rules of the game to capture this fact – if we vary the payoffs from one game to the next – I can easily come up with a strategy that will defeat tit-for-tat.

My strategy would be this: play the tit-for-tat strategy, except when the payoff for defection is extraordinarily high; then defect. Using this strategy, I could sucker the tit-for-tat player into a habit of cooperation until the stakes are particularly high, then profit from a well-timed defection. The tit-for-tat strategist will then defect on the next turn, and we will enter into a pattern of oscillating defections. However, if the payoff on the important turn was high enough, then my gains would exceed all future losses.
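Here is a rough sketch of that strategy, under assumptions of my own: the same illustrative point values as before, a stake multiplier attached to each round, and an arbitrary threshold for what counts as ‘extraordinarily high’ stakes.

```python
base_payoff = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def tit_for_tat(opponent_history, stake):
    return "C" if not opponent_history else opponent_history[-1]

def opportunist(opponent_history, stake):
    """Tit-for-tat, except defect whenever the stakes are extraordinarily high."""
    if stake >= 50:                    # arbitrary threshold for 'extraordinary'
        return "D"
    return "C" if not opponent_history else opponent_history[-1]

def play(strategy_a, strategy_b, stakes):
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for stake in stakes:
        a, b = strategy_a(hist_b, stake), strategy_b(hist_a, stake)
        score_a += stake * base_payoff[(a, b)]
        score_b += stake * base_payoff[(b, a)]
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

# Twenty rounds: ordinary stakes, with one very high-stakes round in the middle.
stakes = [1] * 10 + [100] + [1] * 9
print(play(opportunist, tit_for_tat, stakes))  # (550, 55): one well-timed defection
print(play(tit_for_tat, tit_for_tat, stakes))  # (357, 357): steady mutual cooperation
```

The oscillating defections that follow the betrayal cost the opportunist a few points, but the single high-stakes defection more than makes up for them – even compared with never defecting at all.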

My strategy would be particularly useful if, at the time of the big payoff, I could arrange to kill off my tit-for-tat opponent so that the game ends on that turn. As I said, game theorists do not allow this option. Yet, in reality, this option is often available.

Anonymous Defection

Third, game theory does not consider the possibility of anonymous defection. The cost of defection in game theory comes from the fact that, if I defect, my opponent always finds out about it. My opponent then defects against me on the next turn. However, what if (as is often the case) I can defect without anybody finding out about it? Suppose I have found a wallet and can take the money without anybody knowing. How does game theory handle this type of situation?

Game theory would seem to suggest that I take the money and run. In fact, it says that I should commit any crime where the chance of getting away with it, and the payoff, make it worth the risk. It is not just that this would be the wise thing for me to do. It would be the moral thing for me to do. After all, the game theorist is telling us that what game theory says is wise and what is moral are the same thing.
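The reasoning here is nothing more than expected value. A minimal sketch, with made-up numbers for the wallet case (the gain, the chance of being caught, and the cost of being caught are all assumptions of mine):

```python
def expected_value_of_defecting(gain, chance_caught, penalty):
    """Expected payoff of an anonymous defection (e.g., keeping the wallet)."""
    return (1 - chance_caught) * gain - chance_caught * penalty

# Found wallet: $200 gain, 5% chance of being caught, $1000 worth of fallout.
print(expected_value_of_defecting(gain=200, chance_caught=0.05, penalty=1000))
# 140.0 > 0, so the 'rational' strategy says keep the money.
```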

This means that anonymous defection is perfectly moral.

Power Relationships

Fourth, game theory presumes that the participants have approximately equal power – that one cannot coerce the choices of the other. Let’s introduce a difference in power, such that Player 1 can say to Player 2, “You had better make the cooperative choice every turn or I will force you to suffer the consequences.” The subordinate player lacks the ability to make the same threat.

When this happens, we are no longer in a prisoner’s dilemma. We are in a situation where the subordinate player is truly better off choosing the cooperative option on each turn, and the dominant player is better off choosing to defect. The problem with game theory – or, more precisely, with the claim that game theory can give us some sort of morality – is that it says that, under these circumstances, the dominant player would have an obligation to exploit the subordinate player if it is profitable to do so.
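To illustrate, here is a minimal sketch using the same made-up point values as before, plus an illustrative punishment that only the dominant player can threaten.

```python
payoff = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}
PUNISHMENT = 10   # cost the dominant player can impose on a defecting subordinate

def subordinate_payoff(sub_move, dom_move):
    p = payoff[(sub_move, dom_move)]
    if sub_move == "D":
        p -= PUNISHMENT            # "suffer the consequences"
    return p

for dom_move in ("C", "D"):
    best = max(("C", "D"), key=lambda m: subordinate_payoff(m, dom_move))
    print(f"If the dominant player plays {dom_move!r}, "
          f"the subordinate's best reply is {best!r}")

# The subordinate's best reply is always to cooperate, and the dominant player's
# best reply to guaranteed cooperation is simply to defect (5 > 3) -- exploitation.
```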

Conclusion

Ultimately, game theory will have something important to say about morality. Game theory provides formulae for maximizing desire fulfillment in certain types of circumstances. As such, it will have implications for what it is good for us to desire.

However, it is one input among many. The idea that morality is nothing more than the rules of game theory has no merit.

Game theory, taken as a moral theory, rests on the fundamental assumption that if an agent can actually get ahead by doing great harm to other people, then it is right and perhaps even morally obligatory for him to do so. Some game theorists seem to suggest that such a situation is not possible. Even if that is true, it is still the case that game theory says, in principle: if you should find yourself in such a situation, then by all means inflict as much harm as necessary to collect that reward.

This alone gives us an irreparable split between morality and game theory.

Unfortunately, as the paragraphs above point out, the assumptions behind game-theory morality not only say that a person has a moral right, or even a duty, to do great harm to others when it benefits him to do so; there are also several likely scenarios that fit this description – scenarios where unusually great benefit, anonymity, or inequality in power can allow an agent to benefit in spite of, and perhaps because of, the harm he does to others.

Whatever morality happens to be, it is not going to be found in game theory.

18 comments:

  1. I have to say, at first glance anyway, that this is one of the weakest posts on your blog to date. I need time to answer this but am now off on holiday for a day. Please also note that I totally agree with your last line.

    Sorry for being so enigmatic but will follow up tomorrow.

  2. Martino

    Well, there is nothing any poster can do to prevent the fact that he will have a weakest post.

    However, in trying to anticipate your objection, all I did was take the game theory premise, "You morally ought to do that which would be the winning strategy in a game," and add some real-world possibilities (anonymity, unequal power, etc.).

    It's the assumed moral principle, "You morally ought to do that which would be the winning strategy in a game," that gets the game theorist into trouble.

  3. I was under the impression that if there was a relationship between morality and game theory, it was that the “moral” choice was not that which maximized the outcome for an individual, but that set of choices which maximized the overall outcome for all players. Otherwise there is no difference between morality and unrestrained self interest. I’m not a game theory expert, but that is a position I’ve not seen advocated in any paper.

  4. atheist observer

    Actually, game theory is meant to unite the two concepts. It is thought to be a way to provide a set of rules whereby rationally self-interested people will adopt a strategy (e.g., tit-for-tat) which is mutually beneficial and which overcomes obvious problems with 'simple' self-interest that looks only to the immediate benefit.

    In other words, it is meant to show that morality is rational.

    One way to defeat the proposition, "That which is rational is moral" is to show instances in which "that which is rational" is not moral.

    Three types of scenarios in which "that which is rational" is not "that which is moral" are instances of rare excessive benefit, anonymous defection, and unequal power.

  5. By the way, I happen to agree that there is a relationship between that which is rational and that which is moral.

    Where 'game theory' misses out is the fact that we not only have the power to pick a strategy. We also have power to influence the payoffs. By promoting and inhibiting desires, we increase and decrease the payoff from different options, which raises the question, "How should we influence those payoffs?"

  6. That is a good point, Alonzo. What I would do is take a little more setup when talking about game theory. If access to resources is the goal, because such access leads to healthy living, then Game Theory can be applied to morality. At that point, it becomes obvious what the best strategy is, although there is a shift from game theory defining morality to telling you what the moral thing to do would be.

    For example, if you look purely at resource availability, there are two ways to approach the problem. Is individual access, or group access, more important? These would lead to different strategies (individual -> more defections, group -> cooperation). Additionally, if resources are what is valuable, is it moral to gain resources individually at the cost of the group?

  7. I read a book called "The Moral Animal" (mainly on evolutionary psychology) that mentioned tit-for-tat as a possible example of how morality evolved in humans. It seems game theory could be useful in this area of showing how certain traits/strategies enhance chances for group success, hence causing that group's genes to become more common. From what I remember of the book, it also touches on some of the scenarios you mentioned, since we may have evolved to appear moral (so as to be successful in tit-for-tat) rather than to actually be moral. So someone who appears to be a good tit-for-tat ally may be hiding their treachery for selfish reasons, or may only be a good ally until the short-term gain outweighs the losses over time. Thoughts?

  8. WickedPlacebo

    The problem that this does not recognize is that, if "being a good ally until the short-term benefits exceed losses over time" is something that the theory recommends, then either (1) this is a moral obligation, or (2) this is an account that mimics morality to some extent but still does not account for morality itself.

    If (2), then we still need an account of that which is being mimicked - it is something still distinct and separate from game-theory strategy.

    If (1), then, well, the conclusion that morality requires doing great harm to others when one can benefit from doing so is a reductio ad absurdum of game theory as a moral theory.

  9. Alonzo,

    You say, “Three types of scenarios in which "that which is rational" is not "that which is moral" are instances of rare excessive benefit, anonymous defection, and unequal power.” Actually these are instances where the choice for your immediate personal benefit is not a moral choice. Your words imply self interest is the only rational choice, and morality is therefore by definition irrational in these instances.
    I think this is false because reason is only a tool to achieve an end. Morality in these situations is only irrational if “winning” the game is the only permissible end. Once you introduce other objectives into the situation, moral choices may become completely rational choices to achieve those ends.
    If morality were typically irrational I suspect there would be little interest in morality or ethics.

  10. Hi Alonzo,

    My question is unrelated to this article. I was wondering whether, when you say that "beliefs are either true or false", that includes ethical beliefs, e.g. "Killing people is wrong". Are such statements propositions, and therefore true or false?

    If this has been answered in a previous article feel free to direct me to it without comment.

  11. Borofkin

    The short answer is that moral statements are propositions.

    I can give a long answer, full of caveats and qualifications. However, those are merely technical difficulties.

  12. Whatever morality happens to be, it is not going to be found in game theory.
    We are agreed on this point. I apologise if my earlier comment was so open ended, but this was only because yours is one of the most interesting blogs around and one with such a consistently high standard of thought.

    One issue that came up in comments was the idea that “game theory” can tell us something about morality. Here I think your post failed to make an effective (or rather the most effective) argument for reasons to be shown below, particularly one major error in describing the prisoners dilemma.

    I would say that Game Theory (GT) in general and the Prisoners Dilemma (PD) in particular are at best a form of means-end analysis. DU uses the BDI theory of action, where desires are about ends and beliefs enable one to formulate strategies and tactics as the means to bring about (or fail to bring about) those desired ends.

    The question is whether GT/PD is always, ever, or anything in between, the "best" tool to successfully aid bringing about those desired ends. That is, given that anyone acts on the more and stronger of their desires, given their beliefs, whether being informed and using GT/PD will give them better or worse beliefs in achieving their desires. As such, GT/PD can have nothing to do with morality if DU is, which you hold to be the case, the best current model of morality.

    OK it would take too long to give an exhaustive analysis of your post so I will highlight just two points. The first is the error which concerns me:

    Of course, these reiterated prisoners’ dilemma games do not have death as one of the payoffs – since that would terminate the game at the first defection. They make sure that the payoff for cooperating when the other defects is the worst outcome, but also insist that it is not fatal.

    As I see it, the fact that they have to impose this arbitrary limit should be seen as a cause for concern. In fact, the arbitrary and unrealistic limits that game theorists have to put on their games are only one of the problems that I find with the theory.
    [My highlights in bold]

    I disagree: this is not at all arbitrary but crucial to understanding iterated PD. The issue is that knowledge that there is a finite number of games, even just one, can radically alter one's strategy. This is accounted for and dealt with by iterated PD. To say otherwise is to miss a key point of PD. In fact your misunderstanding, as I hold it to be, is irrelevant to the main thrust of your criticism, but it weakens your argument to make such a point.

    Specifically, some people hold that the principles of morality can be understood in terms of the principles of a strategy that one would use in a particular type of game known commonly as a ‘prisoner’s dilemma.’

    I will stop trying to quote your post; we are of course talking about Rapoport's Tit-for-Tat, a surprising and interesting result in GT/PD. This could be translated into an adage such as "Be nice, then do to others what they have done to you", comparable to other such adages such as the Golden Rule, and the question is whether this strategy is a "good" moral guide.

    IMHO it would have been better if you had directly tackled this rather than deal with perfectly valid but IMHO long winded points regarding Tit-for-Tat with unequal payoffs and so on.

    As a descriptive model it can explain why strangers are (a) often rude, if not worse, in large communities - because the chances are they will never meet again and this is a PD that will not be repeated and so it is in their interests to defect, versus (b) the same strangers behaving differently, such as less rude, in small communities - because the chances are they will meet again and this is a PD that will be repeated and so it is in their interests to cooperate.

    It would be of interest to me to see how DU can contribute to altering scenarios such as (a).

    Regardless, the real question is the comparison of the adage created above to other such moral adages.

  13. It's the assumed moral principle, "You morally ought to do that which would be the winning strategy in a game," that gets the game theorist into trouble.

    Yes it is, but the point is far more subtle with Tit-for-Tat: Tit-for-Tat never wins a game - it can at best draw - but overall it accumulates more points over many matches (a set of games) than other strategies. It is a purported justification for how a certain form of "altruism" (regarded as a moral model) could work. This is what I think you failed to tackle in this post.

  14. martino

    I think that I can address the major concern by looking at your "scenario (a)" - the fact that people are more rude in large cities.

    If we are looking at game theory as a moral theory, then it is saying more than that people living in larger communities are more rude.

    We would have to interpret it as saying that people living in larger communities morally ought (are obligated) to be more rude. This is what follows from saying that this is a moral theory.

    This, to me, appears to be yet another break between game theory and morality.

    I have addressed how DU would look at similar issues in 'tomorrow's' post (relative to this one).

  15. I did say this was a "descriptive" explanation of rudeness, and this does go back to my first point that GT cannot dictate nor indicate what one ought to desire. I think my (or our) short analysis of scenario a/b rudeness is an elegant demonstration of why GT cannot be a moral theory, I would say regardless of any specific moral theory. I wonder if, because you are focused on applying and developing DU, you missed this more general criticism of GT as morality, and whether sometimes DU is not required to show the shortcomings of other purported moral theories?

  16. I think the problem with how game theory is often discussed centres around the definition of utility.

    Utility is an abstract measurement of how well things turned out according to your preferences. You should prefer the high utility outcome because, by definition, you prefer it.

    The moral issues of whether you want to hurt or help other people are encapsulated in the utility value. If you are in a position to exploit the other player, but you don't want to for ethical reasons, the "exploit" outcome should provide less utility for you to reflect this.

    I agree that there are aspects of morality that are not, and probably can't be, fully explored through game theory, but I believe the conclusion that game theory endorses selfish behaviour is a misconstruing of what utility actually represents.

  17. Alonzo Fyfe said:
    One way to defeat the proposition, "That which is rational is moral" is to show instances in which "that which is rational" is not moral.

    Only if you assume the conclusion of what is moral. But let's assume our moral intuitions are largely correct for the moment.

    In fact, the iterated prisoner's dilemma defeats all the scenarios of unequal power that you posited by the simple fact that power is transient. When you inevitably fall from power, you want the next regime to treat you fairly, so you should treat them fairly while you hold the reins (tit-for-tat).

    Unequal power scenarios only pose problems when treated in isolation like the original Prisoner's Dilemma. The real world doesn't work like that, which is why our naturally selected moral intuitions rebel against the "rational", and wrong, results of the Prisoner's Dilemma.

  18. I think your post has some serious flaws in it. But most importantly, it ignores the difference between zero-sum games and non-zero-sum games. Morality comes about in game theory within the context of non-zero-sum games, where what is best for the individual is in fact what is best for the group. The 'tragedy of the commons' is a classic example.
