Friday, February 10, 2017

The Impossibility of Consequentialism

198 days until the first class.

Work has gotten exceptionally busy these days, and I am coming to resent how it cuts into the time I can spend studying moral philosophy.

I have been able to keep up with my readings . . . and I have continued to send comments to the professor. (And I have continued to fear that this is a poor way to go about it.)

Nonetheless, in my most recent comments to the professor I opted to give a somewhat detailed argument for the impossibility of consequentialism.

We are all deontologists.

I am sorry for the length of this. Even at this length, I fear that I cover some things far too lightly.
 
Joshua Greene, in “The Secret Joke of Kant’s Soul,” argues that deontology is merely a confabulation – a make-believe explanation – that attempts to account for moral judgments that are actually the product of evolved sentiments. Evolution has disposed us to object to - for example - “up close and personal” assault. The deontological claim that such an act violates some right or duty or some aspect of human dignity is a made-up explanation invented to justify these evolved sentiments. In reality, the judgments are nothing more than evolved sentiments.

Greene offers an analogy: a friend who goes on multiple dates gives a number of reasons for preferring some individuals over others, such as sense of humor. However, we notice that all of the people she likes are exceptionally tall (above 6’ 4”), while those she does not like are shorter. Since height is a better predictor of whom she likes than any reason she cites, we conclude that she is really judging these people on the basis of height. The other considerations she mentions – such as sense of humor – are mere confabulations.

Of course, he must assume that there is no correlation between a sense of humor and height.
 
Yet, as I see it, consequentialism cannot exist without at least a little deontology.

According to Greene, consequentialism engages the cognitive portions of the brain as the individual goes through the effort of evaluating the consequences of various actions. But what does one do with the answer that evaluation produces? For example, let us assume that an agent goes through a cognitive process to determine the effects of various actions on the overall number of paperclips in the universe. Even after he computes that one action will produce more paperclips than another, he still has to care about how many paperclips there are in the universe before this conclusion has any significance.
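To make this separation concrete, here is a minimal sketch in Python. The function names and paperclip counts are purely hypothetical illustrations of my own, not anything from Greene; the point is only that the cognitive step computes a number, while the motivational step requires that the agent care about that number.

    # Hypothetical illustration: evaluating consequences is a separate
    # step from caring about them.

    def paperclips_produced(action):
        # Cognitive step: predict how many paperclips an action adds.
        # (Made-up numbers, purely for illustration.)
        return {"make_clips": 1000, "do_nothing": 0}[action]

    def motivating_importance(cares_about_paperclips, count):
        # Motivational step: the computed count moves the agent only
        # if he cares about paperclips at all.
        return count if cares_about_paperclips else 0

    # An agent who does not care assigns every outcome importance 0,
    # so the computed difference gives him no reason to act.
    for action in ("make_clips", "do_nothing"):
        print(action, motivating_importance(False, paperclips_produced(action)))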

Admittedly, I am assuming that internalism with regard to reasons for action is true - that is, that a fact can serve as a reason for an agent to act only if it engages something the agent already cares about.
 
Now, let us invent an agent who cares about how much overall utility he creates: the more utility an action would create, the more important that action is to him. This agent has an option to do something that will produce 104 units of total utility. Let us further assume that, for this agent, producing 104 units of utility has an importance of 4. I use this number only for illustrative purposes; the only thing that matters for the sake of this example is that higher numbers represent greater importance to the agent, and lower numbers represent less importance.

In this example, the agent cares about more than just overall utility. Our agent also has an aversion to personal pain. The more severe the pain, or the longer it lasts, the more important it is to that agent to avoid that pain.

Now, let us consider a couple of cases.
 
Case 1: Let us imagine that the 104 units of utility the agent would produce are distributed as follows: 105 units for everybody else, and -1 unit from the agent's pain. In this case, producing the utility has an importance of 4 while avoiding the pain, let us assume, has an importance of 1. Finding the utility more important, our agent chooses to bring it about.
 
Case 2: In this case, the action will also produce 104 units of utility, but the distribution is 109 units for everybody else and -5 units from the agent’s pain. The agent, in this case, assigns an importance of 5 to avoiding this much pain. It is so important to him that he will sacrifice the opportunity to create 104 units of utility.
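For anyone who wants the arithmetic spelled out, here is a small sketch in Python. The decision rule is my own simplification of the two cases, using only the importance scores stipulated above.

    # The two cases above, using the stipulated importance scores:
    # creating 104 units of utility has importance 4; avoiding the pain
    # has importance 1 in Case 1 and importance 5 in Case 2.

    def produces_the_utility(importance_of_utility, importance_of_avoiding_pain):
        # The agent acts only if creating the utility matters more to
        # him than avoiding his own pain.
        return importance_of_utility > importance_of_avoiding_pain

    # Case 1: 105 units for others, -1 for the agent (104 total).
    print(produces_the_utility(4, 1))  # True: he brings about the utility.

    # Case 2: 109 units for others, -5 for the agent (104 total).
    print(produces_the_utility(4, 5))  # False: he avoids the pain instead.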

In the second case, how are we to judge this person who sacrificed overall utility for the sake of this competing interest?

The consequentialist response seems to require that we understand his aversion to pain as a temptation to do evil. Without it, he would have devoted himself to realizing the greater overall utility. With it, he was motivated to sacrifice that greater good for something that was merely personally important to him.

In fact, many of our interests other than the interest in overall utility will turn out to be temptations to do evil. For any other interest, we are likely to encounter situations in which its importance is greater than the importance of the utility one can create. Utility will be outweighed most often in cases where the increase in utility is small, but it can also happen when a particularly strong interest goes up against a larger amount of utility.

In contrast, the deontologist will tell us that it is perfectly acceptable to sacrifice overall utility under some circumstances - that other values have a greater priority. We may not subject a person to a great deal of pain, even if doing so would bring about some small increase in overall utility.

I think I can make this clearer by applying it to some of the "moral dilemmas" Greene refers to, where deontological thinking seems to override consequentialist thinking.

But, first, I wish to look at another sort of case.

At the end of the movie "Mad Max", Max handcuffs a man by his ankle to an overturned vehicle that is about to explode. He then gives the man a hacksaw and tells him that it would take about ten minutes to cut through the handcuffs, but only five minutes to cut through his ankle.

It would be useful to have some empirical research to back this up, but I suspect that many people (like the villain in the movie) would be reluctant to cut through their own ankle, even to save their own life. It would simply be very difficult to do. And a person who finds it difficult to cut through his own ankle even to save his own life would find it even more difficult to do so merely for the sake of overall utility. Overall utility just is not important enough to most agents.

Now, I would like to compare this to some of the moral dilemmas that Greene mentions in his studies.

For example, there is the case of the mother who is reluctant to suffocate her child to keep the child from crying and drawing the attention of a murderous gang. The "pain" of suffocating one's own child would be like the pain of cutting through one's own ankle. In fact, for many, it would be worse; cutting through one's ankle would be easy by comparison. This is a situation like Case 2 above, where an interest in something other than overall utility outweighs the interest in overall utility, motivating the agent to sacrifice overall utility for some other end.

Both types of pain can be explained by appeal to the same evolutionary forces. Greene wrote:

The rationale for distinguishing between personal and impersonal forms of harm is largely evolutionary. “Up close and personal” violence has been around for a very long time, reaching far back into our primate lineage (Wrangham & Peterson, 1996). Given that personal violence is evolutionarily ancient, predating our recently evolved human capacities for complex abstract reasoning, it should come as no surprise if we have innate responses to personal violence that are powerful but rather primitive. (P. 43)

The aversion to pain, or to cutting off one's own limb, or to suffocating one's own child is open to the same type of explanation.
 
However, Greene goes further and says that this is something more than a simple desire or aversion. Instead, he claims to be explaining a "moral sense" that something is good - or bad - to do. In the case of "up close and personal" battery, he wrote:

Nature doesn’t leave it to our powers of reasoning to figure out that ingesting fat and protein is conducive to our survival. Rather, it makes us hungry and gives us an intuitive sense that things like meat and fruit will satisfy our hunger. Nature doesn’t leave it to us to figure out that fellow humans are more suitable mates than baboons. Instead, it endows us with a psychology that makes certain humans strike us as appealing sexual partners, and makes baboons seem frightfully unappealing in this regard. And, finally, Nature doesn’t leave it to us to figure out that saving a drowning child is a good thing to do. Instead, it endows us with a powerful “moral sense” that compels us to engage in this sort of behavior (under the right circumstances). (P. 60)

Insofar as a "moral sense" that something is good or bad to do is different from a simple desire or aversion, Greene needs to do a little more work to give us an evolutionary explanation for this moral sense. In the same way that nature can motivate us to eat with a mere desire, without a "moral sense" that eating is a good thing to do, and can motivate us to have sex with a simple desire, without a "moral sense" that having sex is a good thing to do, it can motivate us to avoid suffocating our own children, to avoid committing battery against another person, and to rescue a drowning child, all without a "moral sense" that these are good things to do.

However, I do not think that extra work is necessary. Instead, Greene can give up the idea that we have evolved some type of moral sense and simply acknowledge that we have evolved certain preferences, and that those preferences might, in some circumstances, outweigh an agent's concern for overall utility. At that point, we must either brand all such interests temptations to do evil, or acknowledge that there is a moral permission to pursue interests other than an interest in overall utility. There is a point at which any of us who value things other than overall utility will sacrifice overall utility for one of those other goods.

Since all of us have an interest in at least one thing other than overall utility, and since none of us think that morality requires that we view that interest as a temptation to do evil, it follows that we are all - at some point - deontologists. Sometimes we can sacrifice overall utility for the sake of something else that we value.
