I am still suffering from the flu, so my timing is somewhat off in writing this blog.
So, I’m going to take on a question that is easy for me, since it is in an area where my mind spends a lot of time. It comes from Kmeson’s follow-up question to yesterday’s post:
If I am in a position where I cannot influence the desires of a set of Agents, but can only turn a knob which greatly satisfies the desires of one at the expense of lesser desires of the other, then how should I act?
In "desire utilitarianism" you have said that A has bad desires since they tend to thwart B's desires, but if I understand correctly then in my scenario B's desires are twice as bad since they require the thwarting of more of A's desires. I'm still left with the question of what to do with my knob.
First, the only type of person who will turn the knob, based entirely and solely on which option fulfills the most desires, is the agent that has only one desire – a desire to fulfill other desires. Any more complex agent – any human agent – is going to be driven to turn the knob by a number of desires. In this example, you will, as a matter of necessity, turn the knob to the point that will fulfill the more and stronger of your own desires. That is the only thing that you can do.
Even though this is a highly abstract and contrived situation, you are human, and you have acquired your desires in the real world. Those desires have been molded not to fit wildly exotic and contrived situations like the one you describe here, but to cause you to act in the real world, where people do not, in fact, have the desires of your two hypothetical agents.
So, you have acquired (I may assume) a desire for some sort of equal treatment among the agents. In the real world, we have to deal with, for example, the law of diminishing returns. The law of diminishing returns says that the more you have of something, the less each additional unit is worth. One common example is money. We may assume that, if somebody were to hand you $100, you would spend it on whatever you want most at that price. The next $100 would go to your second (and weaker) preference. When you get enough $100 bills, each additional bill may be so worthless to you that you simply roll it up and light it on fire.
The law of diminishing returns argues against giving one person ‘too much’ while another has ‘too little’. It says that the further we get from equality, the more likely it is that the person who gets more gains less than the person who gets less loses.
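To put rough numbers on that (my illustration, not part of Kmeson’s question): suppose, purely for the sake of the sketch, that the value a person gets from money grows like the square root of the amount. Then splitting $100 evenly between two people yields √50 + √50 ≈ 14.1 units of value in total, while giving the whole $100 to one person yields √100 + √0 = 10. Under any such diminishing-returns assumption, each step away from equality costs the loser more than it gains the winner.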
This may not be true in your hypothetical case, but it is true in the real world. In that real world, we have reason to use social forces to pressure you into becoming somebody who values some measure of equality. We want you to feel uncomfortable about a situation where one person gains everything and another loses everything. You have reason to want us to feel the same way.
No matter what we do to create a hypothetical moral story such as yours, we cannot eliminate the corruption that is inherent in the fact that the evaluators – you and I – have emotions that people in the real world have reason to want us to have. Those emotions will give us an emotional reaction to these hypothetical worlds that is relevant to this world, not to the hypothetical world we are evaluating.
So, what should you do with the knob? You should turn it to a point where there is some measure of equality between the two agents. We have reason to want you to be somebody who will turn the knob to a location with some measure of equality. That desire – that emotion – might not maximize utility in the exotic and contrived case that you described. However, that type of person will tend to fulfill the more and stronger of our very real world desires. That, ultimately, is what matters.
Now, as a person living in the real world, you should see the value of inhibiting a desire to inflict pain on others. If you were to discover that one of A’s desires is a desire to inflict pain, your reaction should be to say, “That desire does not count. I do not want to fulfill this desire to inflict pain.” You would be right to do so, and we have good reason to encourage you to do so.
Within the hypothetical example, this desire to inflict pain on others has very little consequence. It will only thwart one of B’s desires. However, in the real world, the desire to inflict pain has great consequence, and is a desire we have reason to inhibit to a great degree. So, we have reason to use social pressure to make you reluctant to turn the knob in favor of a desire to inflict pain. We have reason to want you to be the type of person who, when confronted with A’s desire to inflict pain, sees this as a reason to turn the knob against A, for that fact alone, because somebody with such a disposition, in the real world, is somebody that it is safer for us to have as our neighbor.
Once again, you can’t get away from the fact that people generally have reason to tune your desires to what is true of the real world. Once you have these real-world moral sentiments, they will affect your judgments even in highly contrived cases. What should you (want to) do in these highly contrived cases? You should (want to) do that act that a person whose desires tend to fulfill other real-world desires would do.
So, when opponents of utilitarian theories bring up the case of a doctor who has a chance to kill one healthy patient to save five sick patients, and point out that simple utility argues in favor of the murder and that this is a problem for utilitarian theories, I have an answer.
We, the people who are applying our intuitions to this case, have very real reason to want the very real people we are asking to evaluate this case to be people who are averse to killing this person.
Why?
The answer is the same as the answer I gave in the sibling incest case. If I were to judge the act of killing this healthy patient to be permissible, this would imply that no person should have an aversion to killing this healthy patient. This means a weaker aversion to killing whenever the agent believes that more good than harm can come of killing. This weaker aversion to killing means more killing. A lot of that killing will not, in fact, be in cases that produce the most good. A lot of that killing will only be in cases where people have convinced themselves (often wrongly) that more good can come from killing.
In order to better secure our lives, health, and well-being, we are better off simply promoting an aversion to killing. We want our neighbors to be people who do not want to kill the healthy patient, even to save five. Yes, this means that in some highly contrived case that will almost certainly never happen in real life, we will die where we might otherwise have lived. But in a great many real-world cases, we have a much greater chance of living where we otherwise would have been killed.
I want to repeat this point once more for the sake of any who may have missed it.
The desires that we should have – the desires that we apply even to highly contrived hypothetical cases – are those desires that tend to fulfill other real-world desires. We can speak hypothetically about the desires that people might have reason to promote in some hypothetical world. However, the question of whether or not we like that answer – whether we are comfortable with it or uncomfortable with it – depends essentially on the desires that work in the real world. If an aversion to torturing a child is a good real-world desire, that desire is going to sit within us, even as we evaluate highly contrived and imaginary examples in which torturing a child produces the best consequences. We still have reason to demand that nobody want to torture that child.