Announcement
I have discovered that somebody saw fit to write a philosophy stub on Wikipedia on Desire Utilitarianism. I am honored by whoever did this. My policy shall be not to touch the Wikipedia entry (which, as I understand it, is also a Wikipedia rule), but to allow others to express their interpretation through the edits they make.
Main Article
Yesterday, I presented some problems with game theory as an account of morality. Specifically, I argued that game theory cannot handle issues of rare but extreme benefit, anonymous defection, or differences in power. Today, I want to explain how desire utilitarianism can handle the same issues.
Game Theory Review
Imagine that we are playing a game. Each turn, we must each pick a crystal ball (C) or a dark ball (D). If we both pick C, we each get 8 points. If we both pick D, we each get 2 points. If I pick C and you pick D, then I get 0 points and you get 10 points, and vice versa.
In any one round, if you pick C, then I am better off picking D (10 points vs. 8). If you pick D, then I am still better off picking D (2 points vs. 0). However, you are in the same situation; you are also better off picking D, no matter what I choose. Yet if we both pick D, we get only 2 points each, while if we both pick C, we get 8 points each. If we are both going to pick the same thing, it would be better to both pick C than to pick D. But how do we get each other to pick C?
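To make this arithmetic concrete, here is a minimal sketch in Python (my own toy encoding of the game; the names are purely illustrative) confirming that D is the better pick no matter what the other player does:

```python
# Payoff to a player, given (own move, opponent's move),
# using the numbers from the game above.
PAYOFF = {
    ("C", "C"): 8,   # we both cooperate
    ("C", "D"): 0,   # I cooperate, you defect
    ("D", "C"): 10,  # I defect, you cooperate
    ("D", "D"): 2,   # we both defect
}

# Whatever the opponent does, D pays more than C.
for opponent in ("C", "D"):
    assert PAYOFF[("D", opponent)] > PAYOFF[("C", opponent)]
    print(f"Opponent picks {opponent}: D earns {PAYOFF[('D', opponent)]}, "
          f"C earns {PAYOFF[('C', opponent)]}")
```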
Game theory suggests a way of solving this problem, at least in cases where we are going to play this game through multiple turns. One of the winning and stable strategies is called ‘tit-for-tat.’ This strategy says, “Pick C on the first turn. Then, for every subsequent turn, do what your opponent did on the previous turn. If he picked D, then you pick D. And if he picked C, then you pick C. He should learn soon enough that he (and you) will be better off picking C every turn.”
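A small simulation makes the point. This is only a sketch under the payoffs above; the strategy and scoring functions are illustrative names of my own choosing:

```python
# Payoffs from the game above: (own move, opponent's move) -> own points.
PAYOFF = {("C", "C"): 8, ("C", "D"): 0, ("D", "C"): 10, ("D", "D"): 2}

def tit_for_tat(opponent_history):
    """Pick C on the first turn; afterwards, copy the opponent's last move."""
    return opponent_history[-1] if opponent_history else "C"

def always_defect(opponent_history):
    """Pick D on every turn."""
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    """Total scores for two strategies over repeated rounds."""
    seen_by_a, seen_by_b = [], []   # each player's record of the other's moves
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(seen_by_a)
        move_b = strategy_b(seen_by_b)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        seen_by_a.append(move_b)
        seen_by_b.append(move_a)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # (80, 80): cooperation locks in
print(play(tit_for_tat, always_defect))  # (18, 28): exploited once, then mutual D
```

Against another tit-for-tat player, cooperation holds from the first turn; against a pure defector, tit-for-tat loses only the first turn and then matches defection.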
Yesterday, I argued that the principle that sits at the root of game theory, “Do that which will get you the most points,” does not recommend a tit-for-tat strategy under several real-world conditions. Instead, it recommends exploiting rare situations with an unusually high potential payoff, engaging in anonymous defection where possible, and exploiting power relationships to force an opponent into moves that are not to his advantage.
Desire Utilitarianism: The Choice of Malleable Desires
Desire Utilitarianism would suggest an alternative strategy: “Use social forces to give both players a malleable desire to choose C (a desire to cooperate) worth 3 points.” Now, if you pick C, I should pick C (11 points vs. 10). If you pick D, then I should still pick C (3 points vs. 2). The same is true for you. Under this moral impulse, we both get 11 points.
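The change can be sketched directly. In the toy encoding below (with_desire_for_C is an illustrative name of my own, not an established routine), a 3-point desire for one’s own choice of C reverses the dominance analysis:

```python
# Base payoffs from the game above: (own move, opponent's move) -> own points.
PAYOFF = {("C", "C"): 8, ("C", "D"): 0, ("D", "C"): 10, ("D", "D"): 2}

def with_desire_for_C(base, bonus=3):
    """Add `bonus` points whenever the player's own move is C,
    modeling a cultivated (malleable) desire to cooperate."""
    return {(me, you): points + (bonus if me == "C" else 0)
            for (me, you), points in base.items()}

moral = with_desire_for_C(PAYOFF)
for opponent in ("C", "D"):
    print(f"Opponent picks {opponent}: C earns {moral[('C', opponent)]}, "
          f"D earns {moral[('D', opponent)]}")
# Opponent picks C: C earns 11, D earns 10
# Opponent picks D: C earns 3, D earns 2
```

Now C dominates for each player, and mutual cooperation is what each of us most wants. The mirror case discussed below, an aversion to D, subtracts the 3 points from D instead and has the same effect on the comparison.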
I need to explain the role of malleable desires in the formula above. Evolutionary biologists might want to investigate the possibility of a natural desire to cooperate that evolved due to its survival value. Their success or failure will have nothing directly to say about morality. They will only be able to study a genetic influence that mimics morality.
This is true in the same way that an ant, finding a dead moth and dragging it back to the colony, merely mimics altruism. It is no more an example of genuine altruism than an animal dropping manure on the grass is performing an act of altruism for the benefit of the grass. Morality itself requires an element of social and individual choice. Since we cannot choose our genes, we deserve neither moral praise nor moral condemnation for the result.
Choosing Desires
In our hypothetical game, you and I have the option of supporting an institution that will use praise and condemnation to promote a universal desire to cooperate, giving cooperation 3 points of added value.
If we succeed, then I will have no need to worry that you will take advantage of an opportunity to engage in anonymous defection or abuse of power. Even if you have an opportunity to anonymously defect, to choose D under circumstances where I would not discover it, you will still not choose D, because C is what you want.
Nor will I need to fear what you would do if you were in a position of power. Certainly, that power would give you the opportunity to do whatever you want – to choose D without fear of retaliation. However, if you do not wish to choose D – if you should have a preference for C – then this possibility of choosing D is not a real threat.
Even the problem of rare but exceptional benefit can be mitigated by a desire for C, so long as the desire for C is stronger than the value of whatever exceptional benefit the rare circumstance provides. It may still be the case that everyone has a price at which they can be bought, but with a sufficiently strong desire for C, that price may not be reachable in the real world.
We get a mirror set of effects by using social institutions to promote an aversion to D. Lowering the value of D by 3 points will also make it the case that potential anonymous defectors and potential power abusers will not want to choose D. This can be done by forming a direct aversion to D itself, or by associating the act of choosing D with negative emotions such as guilt and shame.
Punishment and Reward
Another flaw with game theory as a model of morality is that it has a heavily distorted view of the role that punishment and reward play in society.
In game theory, I ‘punish’ you for picking D by picking D myself on the following turn. I will continue to pick D until you start to pick C again. As soon as I notice this, I will return to picking C.
This is not a good model for reward and punishment. Reward and punishment are not decisions about what to do ‘the next time’ a similar event happens. They are decisions about what to do with respect to the current situation.
Specifically, before the next turn starts, I say to you, “Do not even think about picking D because, if you do, then I promise you that I will inflict three points of negative value on you.”
With this threat in mind, if I choose C, you are now still better off choosing C over D (8 points vs. 7). If I choose D, then you are still better off choosing C over D (0 points vs. -1). Either way, by means of a threat to do you harm in the current turn depending on your choice, I have coerced you into choosing C no matter what.
Reward is the mirror image of punishment, in the same way that virtue is the mirror image of vice in the section above. Instead of promising to inflict 3 units of harm on you if you should choose D, I can get the same effect by promising you 3 units of benefit if you should choose C. With this reward on offer, if I choose C, you are better off choosing C (11 points vs. 10). If I choose D, then you are better off choosing C (3 points vs. 2).
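Both modifiers can be sketched in the same way as the desire for C above; the helper names are again my own illustrative choices, and the printed numbers reproduce the comparisons just given:

```python
# Base payoffs, from the threatened (or rewarded) player's point of view.
PAYOFF = {("C", "C"): 8, ("C", "D"): 0, ("D", "C"): 10, ("D", "D"): 2}

def with_threat(base, penalty=3):
    """Subtract `penalty` points whenever the player's own move is D,
    modeling a credible threat of punishment in the current turn."""
    return {(me, you): points - (penalty if me == "D" else 0)
            for (me, you), points in base.items()}

def with_reward(base, bonus=3):
    """Add `bonus` points whenever the player's own move is C,
    modeling a promised reward for cooperation."""
    return {(me, you): points + (bonus if me == "C" else 0)
            for (me, you), points in base.items()}

for label, table in (("threat", with_threat(PAYOFF)),
                     ("reward", with_reward(PAYOFF))):
    for opponent in ("C", "D"):
        print(f"{label}, opponent picks {opponent}: "
              f"C earns {table[('C', opponent)]}, D earns {table[('D', opponent)]}")
# threat: C earns 8 vs. D's 7; C earns 0 vs. D's -1
# reward: C earns 11 vs. D's 10; C earns 3 vs. D's 2
```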
The issue of reward seems to suggest a complication. Where am I going to get the 3 points? Am I to subtract it from the value of my payoff?
There is no reason to require this. Just as I can find ways to harm you that do not provide me with any direct benefit, there are ways in which I can benefit you without necessarily suffering a cost. Most people value praise – plaques, ribbons, cheers, and other rewards. We do not need to clutter the basic principles that I am trying to illustrate with such things. They can be saved for a future post.
Reward, Punishment, and Power
A doctrine of reward and punishment has some drawbacks that the doctrine of virtue and vice (promoting and inhibiting desires) does not have.
Reward and punishment do not solve the problem of anonymous defection. The anonymous defector escapes punishment. At best, the threat of punishment gives an incentive against choosing D where that choice can be discovered, even in the case where there is only one turn to be played.
Reward and punishment also do not solve the problem of unequal power. In the example that I gave above, I used the threat of punishment to coerce you into choosing C. Now, with your choice of C coerced, I still benefit from choosing D over C (10 points vs. 8).
The situation would be different if you and I were in a position of mutual coercion. If you could punish me for choosing D, just as I punish you for choosing D, we would both have an incentive to choose C, to our mutual benefit. However, as soon as one of us has power that the other does not have, the one with power has the option of increasing his benefit by choosing D, significantly worsening the well-being of the one without power.
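The asymmetry can be made explicit with one more toy sketch (best_move and punished are illustrative names): a player who faces punishment for D is coerced into C, while a player who faces no punishment still profits from D:

```python
# Base payoffs from the game above.
PAYOFF = {("C", "C"): 8, ("C", "D"): 0, ("D", "C"): 10, ("D", "D"): 2}

def punished(table, penalty=3):
    """Payoffs for a player who faces a penalty for choosing D."""
    return {(me, you): points - (penalty if me == "D" else 0)
            for (me, you), points in table.items()}

def best_move(table):
    """Return C if it is at least as good regardless of the opponent's
    move; otherwise return D (which then dominates in this game)."""
    if all(table[("C", you)] >= table[("D", you)] for you in ("C", "D")):
        return "C"
    return "D"

# One-sided power: I can punish you, but you cannot punish me.
your_move = best_move(punished(PAYOFF))   # "C": coerced cooperation
my_move = best_move(PAYOFF)               # "D": defection still pays me
print(my_move, your_move)                 # D C
print(PAYOFF[(my_move, your_move)],       # 10 points for me
      PAYOFF[(your_move, my_move)])       # 0 points for you

# Mutual power: if each of us can punish the other, we both choose C.
print(best_move(punished(PAYOFF)), best_move(punished(PAYOFF)))  # C C: 8 each
```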
One conclusion that we can draw from this is the value of a system of checks and balances, where two or more decision makers have the power to hold others in check. As soon as too much power gets concentrated in the hands of a single decision maker, the situation becomes dire for those who lack power.
Conclusion
The main purpose of this post is to illustrate that, as a model of morality, choosing a strategy for winning a repeated prisoner’s dilemma in which the value of outcomes is fixed does a poor job compared to choosing malleable desires before entering the dilemma.
This is not to say that game theory has no value. The study of photosynthesis is not the same thing as the study of morality, but it is certainly a field worthy of investigation. Indeed, game theory can have important implications for ethics – more so than photosynthesis. It may help to determine which desires we have reason to promote and which we have reason to inhibit. Still, something that has implications for morality is not the same thing as morality.
3 comments:
Alonzo,
Your examples show that a single version of a single game model does not adequately represent the entire world of human interaction. However, your presentation shows that these situations can be examined in light of game theory using variations in rules and outcomes.

Rather than proving that game theory has nothing to do with morality, I think you have shown that the moral theory of desire utilitarianism can be expressed in the language of game theory in a way that might be beneficial to both.
Atheist Observer
I would not object to that interpretation.
All I will say here is that this is what I really expected from yesterday’s post. Let’s leave it at that. As always, an interesting post.