Tuesday, August 30, 2016

A Desire that a Child be Tortured

In my last post, I gave reasons for rejecting moral intuitions as a reliable source of moral knowledge.

This, then, raises the question: How would I respond to a case like this, from Shmuel Warshell:

Let's say there's a world where 1000 people desire that a child be tortured. Desirism says that we should act on the desires that someone with good desires would have and a good desire is that which tends to fulfill other desires and not thwart desires. Now the desire to torture the child fulfills many desires - the desires of all the sadists. You could then say that the desire to torture the child is a bad desire but that would seem to lead to an infinite regress.

Rather than respond just to this specific example, I would like to take this opportunity to explain how I would generally handle cases like this.

First, there is a technical issue to tend to just to make sure that it does not cause future problems.

Technically, desirism does not say that "we should act on the desires that someone with good desires would have." In fact, I do not even know how it is possible to act on a specific desire. I can choose to give money to somebody in need, but I do not know how to give money to somebody in need out of a desire to help others, as opposed to doing so out of a desire to do the right thing, which is in turn different from acting on a desire to impress others with my generosity. So, I want to make sure that desirism is not interpreted as arguing that we "should act on the desire". Instead, it says that the right act is the act that a person with good desires would perform. The actual motivation of the agent who does what a person with good desires would do is irrelevant.

In addition, desirism says that a good desire is a desire that tends to fulfill other desires - in virtue of which the people with those other desires have reasons to promote that desire universally using the social tools of reward/praise and punishment/condemnation.

Second, we need to make sure that we define the situation precisely.

The case mentions a "desire that a child be tortured". Is this a specific child - Fred? Is this a desire that one child be tortured, without regard to which one? Is this a desire for the torturing of children generally?

One distinction that often trips people up is the distinction between "a desire to" and "a desire that". In our current case, we can distinguish between a desire to torture a child and a desire that a child be tortured. A "desire to" can generally be reduced to a "desire that I". So, a desire to torture a child is a "desire that I torture a child". In contrast, a desire that a child be tortured is a desire that does not care who carries out the deed, as long as the child gets tortured.

In Warshell's question above, I am instructed to explicitly deal with a "desire that" - specifically, a desire that a child be tortured. Such a desire can be fulfilled by a state in which somebody other than the agent does the torturing.

I am not raising an objection to Warshell's example here. I am simply taking this opportunity to specify a distinction that has, at other times, caused problems as authors slipped from "desire to" to "desire that" and back again without noticing.

In this case, I also want to look at the concept of "torture". Torture is a value-laden term. In other words, it has the thwarting of desires built right into the meaning of the term. Something will not count as torture unless it thwarts desires. In fact, it needs to thwart some very strong and stable desires; a mild thwarting of a weak desire is not torture. In other words, a desire to torture (or a desire that somebody be tortured) is a desire to thwart some very strong and stable desires (or to have some very strong and stable desires thwarted). A desire that a child be tortured is a desire that a child suffer the thwarting of some of its strongest desires.

I argue that moral intuition is a poor source of moral knowledge, but linguistic intuitions are reliable when it comes to the meaning of terms. We cannot use moral intuitions to reliably determine the morality of slavery - the vast majority of humans who have lived have found it acceptable. However, we can use linguistic intuitions to tell us that morality is concerned with universal principles or attitudes. Consequently, insofar as the torture of a child is a moral concern, we are dealing with the relationship between the torture of a child and universalized desires.

Third, once we properly understand the concepts involved in a case, we can look at the details.

If we are going to look at the morality of an act of torturing a child, we have to look at whether the desires that would motivate an agent to torture a child are desires that people generally have reason to make universal throughout the community.

One way to ask this question is to ask, "If a community had no desire that a child be tortured, would its members have reasons to create such a desire and to make it universal within the community?"

Another way in which I have approached this question is to assume that there is a knob. Turn the knob to the right and the desire becomes stronger and more widespread within the community; turn it to the left and the desire becomes weaker and less common. We then ask what reasons people generally have to support turning the knob to the right or to the left.
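To make the metaphor concrete, here is a minimal sketch of the knob test. This is not desirism's official formalism - the weights and the bookkeeping are hypothetical, invented only to illustrate the question being asked:

    # Toy model of the "knob" test: turn up the strength/prevalence of a
    # candidate desire and ask whether the other desires in the community
    # are, on balance, fulfilled or thwarted as a result.
    # All weights are hypothetical and purely illustrative.

    def net_effect_on_other_desires(knob):
        """knob runs from 0.0 (desire absent) to 1.0 (strong and universal)."""
        # Desires thwarted as the desire (that a child be tortured) spreads:
        # the child's strong, stable desires, plus the desires of those
        # who care about the child.
        thwarted = 10.0 * knob + 5.0 * knob
        # Desires fulfilled merely by the desire becoming more widespread:
        # none - the mere existence of a desire fulfills nothing.
        fulfilled = 0.0
        return fulfilled - thwarted

    for setting in (0.0, 0.5, 1.0):
        print(setting, net_effect_on_other_desires(setting))
    # The net effect falls as the knob turns right, so people generally
    # have reason to turn this knob left - all the way to zero.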

At this point, it seems we are asked to make another stipulation in this imaginary case: we are being asked to assume that the desire that a child be tortured is widespread and immutable. Not only is it very common, but it cannot be changed. If it could be changed, then we could ask about the reasons for turning the knob controlling the desire that a child be tortured to the left or to the right.

On this measure, there are many reasons to turn the desire that a child be tortured down - or even off. There is the desire-thwarting of those being tortured and the desire-thwarting of those who care about those being tortured.

There are no similar reasons for turning the knob to the right. Remember, we are looking at whether the desire that a child be tortured fulfills other desires - not at whether the act of torturing children fulfills other desires. If the desire that a child be tortured becomes stronger or more widespread, this by itself does not bring about the fulfillment of any other desires.

Furthermore, people generally have many and strong reasons to promote a widespread aversion to having the desires of others thwarted. This is because each of us is an "other" relative to everybody else. A tolerance of the torturing of a child requires a tolerance of the thwarting of the desires of others, and nobody has reason to seek out a community filled with people indifferent to the suffering of others. This aversion to the thwarting of the desires of others gives people a reason to turn the knob governing the desire that a child be tortured to the left - to turn it all the way off.

However, it seems that we are being asked to examine a case in which this is not possible.

At this point, we can confidently report that we are not dealing with human beings. In fact, we might want to block the hazard of unconsciously importing assumptions about human nature into this example by imagining that we are talking about a race of creatures on an alien planet - creatures with six legs, six eyes, and scales, standing four feet tall, with an entirely different evolutionary history from humans. It would have to be a history that fixed a desire that children be tortured. Perhaps the torturing of children at a young age released hormones that ultimately promoted genetic fitness - hormones that trigger sexual maturity, for example. Or perhaps torture released hormones that made the child immune to certain fatal diseases. Science fiction writers could have a field day inventing such a race and examining its implications.

Note that, to fit our description, this would not be a case in which adults tortured children in order to cause sexual maturity or prevent disease. Rather, this is a community where the effects of torture in causing maturity or preventing disease brought about an evolutionary change whereby adults came to desire to torture children - or to desire that children be tortured.

Note that, even here, it would be a community where people also have many and strong reasons to promote aversions to having the desires of others thwarted (because they are the "others," and they have a reason to want everybody else concerned about the thwarting of their desires). This would be a community that is, at the same time, averse to the thwarting of the desires of others and in possession of a desire that children be tortured. It would be a conflicted society, to say the least.

This would still be a community that would have reason to reduce the desire to torture children if it could. However, we are being required to assume that it cannot. Because torture is intrinsically desire-thwarting (that is, desire-thwarting is built into the definition of the term), there are necessarily reasons for turning the knob on this desire down, and few if any reasons to turn it up. It is still not a good desire - it is a bad but unmalleable desire.

It is a desire that, I suspect, our imaginary community will come to have reason to regard as an illness. If reward and punishment cannot "turn down" this desire, community members would have many and strong reasons to look to medicine to do what morality cannot.

Yet, even here there would be room for a moral component - an obligation to seek treatment and to stick to any treatment regimen that prevents people from acting on a desire that children be tortured. Remember, this is motivated by a general aversion to having the desires of others thwarted that everybody has reason to promote - an aversion that would translate into an aversion to acting on this desire that children be tortured.

I cannot imagine a case in which people will not have reason to promote a general aversion to thwarting the desires of others. This does not mean that such an aversion will always win out - that there will never, at the same time, be reason to thwart the desires of others, such as the desires of criminals or the desires of individuals with non-malleable desires to harm others. However, this does not argue against the reasons that people generally have to promote a universal aversion to thwarting the desires of others.

9 comments:

  1. It seems to me that Desirism essentially reduces to regular Preference Utilitarianism.

    I believe that, aside from the jargon, the concept of a Preference is identical to that of a Desire.

    So, the question then becomes, if I'm wrong, is there a conceivable question for which Desire Utilitarianism comes up with a different answer than Preference Utilitarianism?

    ReplyDelete
  2. There are two primary distinctions in utilitarian theory.

    One distinction concerns the object of evaluation. Here, we have act-utilitarianism (the primary objects of moral evaluation are actions) and rule-utilitarianism (the primary objects of moral evaluation are rules, and actions are evaluated by their conformity to the best rules).

    The other distinction concerns what is to be maximized. Common versions here nominate pleasure and the absence of pain, happiness over unhappiness, and eudaimonia.

    Preference utilitarianism is most commonly understood as an act-utilitarian theory that maximizes preference satisfaction: the right act is that act that satisfies the most preferences. Peter Singer, for example, argues that we should be impartial towards the preferences of others - that satisfying more of the preferences of strangers is better than satisfying fewer of the preferences of family members or close friends.

    Desirism rejects the thesis that the right act is the act that fulfills the most desires. Rather, the right act is the act that a good person would perform, where a good person is a person with good desires and good desires are desires that tend to fulfill other desires. If we took rule utilitarianism and modified it so that the proper objects of moral evaluation are desires rather than rules, with desires evaluated according to their consequences, this would be closer to Desirism.

    Still, it turns out that Desirism is not a utilitarian theory. Desirism denies the existence of an intrinsic good to be maximized. Even desire fulfillment itself lacks intrinsic value and is not to be maximized. Instead, desire fulfillment - like everything else - is good only to the degree that, and only for those in whom, there is a desire for desire fulfillment.

    When desires are evaluated relative to other desires, the outcome is not desire maximization. It is a harmony of desires where different desires fit comfortably together. One person's interest in making tools fits with another person's interest in hunting which fits in with another person's interest in building homes which fits with another person's interest in making clothes - all while a flute player makes music in the background.

    A utilitarian - seeking to maximize desire fulfillment - would say that two such villages are better than one because there is more overall utility. Desirism looks for reasons to bring the second village into existence from among the current desires. If there are none, then bringing the second village into existence has no value.

    Utilitarianism cannot handle Robert Nozick's Utility Monster - a hypothetical member of a community who gets a huge amount of utility from the suffering of others. Utilitarianism has to say that it is best to bring the utility monster into existence. Desirism, on the other hand, asks whether anybody has a reason to bring a utility monster into existence - and notes that, almost by definition, they do not.
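    To make the contrast concrete, here is a minimal numerical sketch. The community, the numbers, and the monster's payoff are all hypothetical, chosen only to illustrate the two different questions being asked:

        # Hypothetical utilities for a community of three, each at utility 10.
        community = [10, 10, 10]

        # A utility monster gains 1000 units from arrangements that drop
        # everyone else to 1.
        with_monster = [1, 1, 1, 1000]

        print(sum(community))     # 30
        print(sum(with_monster))  # 1003 - maximizing total utility favors the monster

        # Desirism asks a different question: among the desires that currently
        # exist, is any fulfilled by bringing the monster into existence?
        # By hypothesis, none is - so there is no reason to create it,
        # whatever the totals say.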

    ReplyDelete
  3. >Rather, the right act is the act that a good person would perform, where a good person is a person with good desires and good desires are desires that tend to fulfill other desires

    So, in desirism, a right act is that which a person with desires that tend to fulfill other desires would perform.

    >A utilitarian - seeking to maximize desire fulfillment - would say that two such villages are better than one because there is more overall utility. Desirism looks for reasons to bring the second village into existence from among the current desires. If there are none, then bringing the second village into existence has no value.

    Thank you, this is very clear.

    >Utilitarianism cannot handle Robert Nozick's Utility Monster - a hypothetical member of a community who gets a huge amount of utility from the suffering of others.

    Sure it can. It answers that the utility monster should be created (as long as it is for some reason impossible to make a similar utility monster, or another set of agents, that could reap just as much utility without the suffering of others).

    It would be a poor equilibrium, but if the only options were (create torturous utility monster) or (not create any utility monster), then the first is the better option. The only reason people believe that this defeats utilitarianism is that it conflicts with our moral intuitions, which both you and I agree are less than reliable.

    To make the point more intuitive, consider a single human who must dissolve a few thousand bacteria with acid in order to synthesize a drug that will save their life. The drug gives them much more utility than the sum of the disutility of the bacteria, and thus the situation is equivalent to Nozick's Utility Monster.

    Yet our intuition in the first situation is the opposite of the intuition in the second situation.

    ReplyDelete
  4. @Pngwn

    Why should we value utility? If there are no intrinsic values (not desire fulfillment, not utility, not anything), what makes creating the Utility Monster a "better" option? I agree that, on the Utilitarian model, it is the correct choice - but what reason exists, or could exist, to prefer Utilitarianism?

    And why should I care about others' preferences? What reasons exist to change my mind? Desirism answers this by citing the desires of others as motivation to bring about change in you. Preferences are not motivations for action. They are discerning principles between different action-plans for fulfilling desires. This is a subtle but critical difference.

    ReplyDelete
  5. >Why should we value utility?

    Whose utility? Mine, yours, Fred's? The question of whether you should value your own utility is moot; you do, and will always, value your own utility. And if you can do only one thing, asking whether you ought to do it doesn't seem like a viable question. Ought a person falling out of a plane without a parachute hit the ground? They don't desire to hit the ground; they don't get utility from hitting the ground; but under both (or any!) moral systems, the state of the world will be the same.

    Now whether you should value anyone else's utility is a different matter. I can't give you an objective argument about why you should value another person's, but I can give you a subjective one.

    Example: Meet Kyle. Kyle's a weird person. He erupts into a violent rage and murders any person that he hears say the word "Satan." Thus, it is in your best interest to not say the word "Satan" around Kyle. So, if you prefer not to die, you should avoid that word. That's from your perspective.

    From my perspective, because I prefer to not see you get hurt, I should try to convince you, through punishment or reward if necessary, not to say that word. Similarly, I should try to stop Kyle from erupting in violent rages.

    But, in no case can I actually convince you to form one of my preferences without altering one of your already existing preferences through rewards and punishments. So, if you don't care about Kyle hurting people in the future, I can't convince you to help me stop him, except through a threat of punishment or a promise of reward.

    All I can do, as an outsider, is alter your future plans by informing you that helping me stop Kyle will increase your net utility (because I won't punish you, or because I will reward you). But I can't use an ethical system to force you to (which is distinct from convincing you of my ethical system, which would rewrite your preferences to be closer to mine).

    >Preferences are not motivations for action. They are discerning principles between different action-plans for fulfilling desires.

    It seems to me that desires can be constructed entirely on top of utility, the same way that preferences can. If you expect a utility of 3 by performing action A, and a utility of 5 by performing mutually exclusive and exhaustive action B, then you desire B strongly, and A weakly; or you prefer B to A.

    It also seems to me that any full set of desires can be ordered by preference (Ex: A>B=C>D...), such that both preferences and desires are different ways of defining the same system (much like how you can count on your fingers or your toes, but in the end, it's a de-abstraction of numbers.)
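    A toy sketch of that equivalence - the utilities assigned here are hypothetical, invented only to show how one assignment can be read either way:

        # Hypothetical expected utilities for mutually exclusive actions.
        expected_utility = {"A": 3, "B": 5, "C": 3, "D": 1}

        # Read as desires: the strength of the desire to perform each action.
        # Read as preferences: an ordering of the actions by expected utility.
        ranking = sorted(expected_utility, key=expected_utility.get, reverse=True)
        print(ranking)  # ['B', 'A', 'C', 'D'], i.e. B > A = C > D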

    ReplyDelete
  6. @Pngwn
    >Whose utility?

    Anyone's. The question I asked was why *should* we value utility. I ask because you seem to be implying that utility has intrinsic value - that it is the thing we should all strive to maximize. The thing is, maximizing utility has NO INTRINSIC VALUE. One must first *value utility* in order to want to maximize it. Desires, on the other hand, are innate - they are part of our biology, they are the "is" - and are therefore foundational. I don't value my desires. They just are. I may desire to cease to exist. But ceasing to exist brings me no utility.

    >The question of whether you should value your own utility is moot; you do, and will always, value your own utility.

    This is incorrect, and I think this is the heart of the disagreement. It is possible for me not to value utility to myself. If I desire that someone else's life be preserved at the cost of my own life, what utility is that to me? Take Alonzo's example of Alph on a distant planet with the moon Pandora orbiting it. Imagine Alph has only one desire: that “the moon Pandora continue to exist”. Alph then finds out that a giant meteor is going to destroy Pandora. Alph is then presented with a button that will destroy the meteor, but at the cost of Alph's life. No utility is served by Alph pressing the button. In fact, all utility will immediately cease to exist. Nonetheless, Alph has every reason to press the button. If you are correct that utility should be maximized, then Alph should not press the button.

    >And if you can do only one thing, asking whether you ought to do it doesn't seem like a viable question. Ought a person falling out of a plane without a parachute hit the ground? …

    I'm not sure how this analogy applies. We wouldn't say a person falling out of an airplane "ought" to hit the ground (unless we just mean "it is the expected outcome").

    When you say we can do only one thing, I take that to mean that, based on one's current set of desires and beliefs, one could have done no other thing at that moment. But the function of morality is NOT (as is usually presumed) to judge past behavior. It is to alter the likelihood of that decision being made in the future.

    >From my perspective, because I prefer to not see you get hurt, I should try to convince you, through punishment or reward if necessary, not to say that word. Similarly, I should try to stop Kyle from erupting in violent rages.

    Okay, I see you are extending the definition of "preference" to include a proposition and its negation. This is not usually how it is used - it is typically used in reference to two competing positive choices - but that is fine. We can just equate desires with preferences for the sake of this conversation.

    >It seems to me that desires can be constructed entirely on top of utility, the same way that preferences can. If you expect a utility of 3 by performing action A, and a utility of 5 by performing mutually exclusive and exhaustive action B, then you desire B strongly, and A weakly; or you prefer B to A.

    No, again, because the strength of a desire is based on biology, not utility. I may strongly care for a piece of paper written by my grandmother and would go to great lengths to preserve it, but the paper brings me very little utility.

    ReplyDelete
  7. >The question I asked was why *should* we value utility. I ask because you seem to be implying that utility has intrinsic value, that it is the thing we should all strive to maximize.

    I don't know if you should value utility. If you don't, and you are surrounded by people who won't punish you for not acting like you do, then maybe you shouldn't.

    It's probably worth noting that we're likely talking about two different versions of the word "should." Your "should" is unconditional. My "should" is conditional. *If* you value something, then you pursue actions that will lead to that something.

    I don't believe that the unconditional should has a basis in rational thinking.

    >It is possible for me not to value utility to myself. If I desire that someone else's life be preserved at the cost of my own life, what utility is that to me?

    I didn't say that people all act to maximize their own utility, nor did I say that all decisions are based entirely on it (some decisions are, dare I say, irrational; others are based on heuristics). What I did say about utility is that people care about it.

    > No utility is served by Alph pressing the button.

    Alph has more than one desire. His end goal (desire) is that Pandora continues to exist. His proximal goal is to press the button. He earns utility for the completion of each of these goals.

    The only way to eliminate proximal goals is to set up a brain-wave detector that determines the desire of the person and then acts based on it, in which case the entire Alph-plus-brain-wave-detector system becomes the agent under consideration, and all previous objections continue to apply.

    Alph still cares about utility. In considering a plan in which he saves Pandora from being destroyed, he earns utility.

    If he didn't consider that plan, then he isn't an agent capable of intentional action (intentional action requires that a person imagines a set of future states that differ based on their action).

    >I'm not sure how this analogy applies. We wouldn't say a person falling out of an airplane "ought" to hit the ground (unless we just mean "it is the expected outcome").

    That's my point! It's not a viable question to ask.

    >But the function of morality is NOT (as is usually presumed) to judge past behavior. It is to alter the likelihood of that decision being made in the future.

    I'm not sure how past and future behavior are related to what I was saying there. What I am saying applies to any event in which there is only 1 option.

    >Okay, I see you are extending the definition of "preference" to include a proposition and its negation. This is not usually how it is used - it is typically used in reference to two competing positive choices - but that is fine. We can just equate desires with preferences for the sake of this conversation.

    We can stick with the stricter definition if you want, but that will add needless complication because (A or not A) is essentially equivalent to (A or B) where B is any option that is mutually exclusive with A.

    >No, again, because the strength of a desire is based on biology, not utility

    But utility is also based on biology. So, a desire being based on biology is not an argument against it being based on utility.

    ReplyDelete
    //It's probably worth noting that we're likely talking about two different versions of the word "should." Your "should" is unconditional. My "should" is conditional. *If* you value something, then you pursue actions that will lead to that something.//

    Actually, while we may be talking about different scopes, my "should" is conditional as well. "Should" can be roughly translated as "reasons exist to". So if we say "I should repay debts", it is the same as saying "Reasons exist to repay debts". However, while an agent only acts on his own desires, the reasons that an agent HAS are only a small fraction of the reasons that EXIST (the desires of others). So the way I am using "should" is conditional on the desires of the population, not just the individual. But it is still conditional.

    //I don't believe that the unconditional should has a basis in rational thinking.//

    We are in agreement here.

    //I didn't say that people all act to maximize their own utility, nor did I say that all decisions are based entirely on it (some decisions are, dare I say, irrational; others are based on heuristics). What I did say about utility is that people care about it.//

    Perhaps I have a different concept of "utility" than you. Can you please explain how you are using it? It sounds like you are using it similarly to how I would use "desires-as-means" versus "desires-as-ends". In which case, you would be correct: one values utility as a means to an end-desire.

    So when you are talking about "maximizing utility", you are saying the same as "carrying out action plans that fulfill the most and strongest desires/preferences". Is that right?

    //That's my point! It's not a viable question to ask.//

    Okay, well I'm not sure why you mentioned this in the first place then. What does having only 1 option have to do with the topic at hand?

    ReplyDelete
    I appreciate that my question was important enough to get its own post, but I still have some questions. Desirism says that the right action is the one that a person with good desires would perform. This raises the question: what are good desires? Good desires are desires that tend to fulfill other desires - in virtue of which the people with those other desires have reasons to promote that desire universally using the social tools of reward/praise and punishment/condemnation.

    Now let's imagine ourselves in a society where the majority of people desire to thwart the desires of a small minority, and also desire to see the desires of the minority thwarted. When you add up all reasons for action - all desires - the majority of desires are that this minority group have its desires thwarted. Desirism says the right act is the act that a person with desires that tend to fulfill other desires would perform. It seems, then, that the right act would be to thwart the desires of the minority.

    You might say, "but the desire to thwart the desires of the minority is a bad desire - that is, it tends to thwart other desires." However, desires themselves are not intrinsically good or bad; what makes a desire good or bad depends on what the desires of others are. So, in this case, does the desire to thwart the desires of the minority tend to fulfill or thwart other desires? It certainly thwarts the desires of the minority, but those are only a fraction of all the desires that exist. The desire to thwart the desires of the minority fulfills the desires of the majority, who desire that the desires of the minority be thwarted. Therefore, in this case, the desire that tends to fulfill the desires of others is the desire to thwart the desires of the minority.

    Let's assume the desire that the desires of the minority be thwarted is a malleable desire. Are there reasons to promote it? It seems there are. Promoting the desire to see the desires of the minority thwarted will thwart the desires of the minority, of course, but it will fulfill the desires of those who desire to see the desires of the minority thwarted - which make up the majority of all desires. I can present this as an argument:
    1. The majority of people desire to see the desires of a certain minority thwarted. (premise)
    2. The desire to thwart the desires of the minority tends to fulfill more desires than it thwarts. (from 1: there are more desires that the desires of the minority be thwarted than desires that they not be thwarted)
    3. The desire to thwart the desires of the minority is a good desire. (from 2 and the definition of a good desire)
    4. The right act is the one that a person with good desires would perform. (premise)
    5. A person with good desires would have the desire to thwart the desires of the minority. (from 3)
    6. The right act is to thwart the desires of the minority. (from 4 and 5)

    ReplyDelete