In this posting I would like to address a common form of argument against desire utilitarianism, in order to show that the argument is far weaker than those who use it seem to realize.
It is the form of argument in which one constructs some sort of fictional situation involving relationships between actions and desires, and between desires and other desires. The critic then states, "In this situation, desire utilitarianism implies Q," where Q turns out to be something we tend not to want a moral theory to imply. This is then offered as reason to reject desire utilitarianism.
Common examples involve stories where the author attempts to draw the conclusion that desire utilitarianism would condone murder, or the torture of a child, or genocide, or some other great evil that the reader is uncomfortable with endorsing.
This form of argument has a number of weaknesses. I will discuss three.
Typically, desire utilitarianism does not actually imply the conclusion that the author claims it does. Many of these arguments confuse desire utilitarianism (the primary object of evaluation is desires, which are evaluated according to whether they generally tend to fulfill other desires) with desire-fulfillment act utilitarianism (the primary object of evaluation is actions, which are evaluated in each instance according to whether or not they will maximize the fulfillment of all desires).
Desire utilitarianism allows that there will be specific acts that will generally thwart desires. However, before we condemn the act we have to ask what the world would be like in the absence of the desire that motivated the action. If the world would be a worse place, we keep the desire, even if it does motivate harmful actions in some set of rare and exotic circumstances.
However, let's imagine that the story gets the conclusion right, and that it yields a conclusion that the reader does not like. The reader WANTS the right answer to be something else.
Where is it written that if a reasoned argument reaches a conclusion that an individual does not like, this proves that the reasoned argument must be flawed? People have an annoying tendency to assert that our "moral intuitions" are so flawless that if any reasoned argument comes into conflict with a moral intuition, the moral intuition must be preserved.
I hold that moral intuitions are nothing but learned prejudices. Historic examples, from slavery to the divine right of kings to tortured confessions of witchcraft or Judaism to the subjugation of women to genocide, all point to the fallibility of these 'moral intuitions'. There is absolutely no sense to the claim that their conclusions are to be adopted over those of a reasoned argument.
In fact, the prejudice that we have 'moral intuitions' that are superior to any type of reasoned argument is a groundless conceit – something children should be warned against the instant they can understand the warning.
However, the most important objection rests in the response, "Okay, so what other types of reasons for action exist to get the results you want?"
If the individual has truly considered all of the reasons for action that exist, yet insists on getting a different answer, then the only way this can happen is if the individual introduces some other type of reason for action besides desires.
This part is always missing. Whenever I start to read this style of objection, I ask myself whether the person raising it is going to give me evidence – any evidence at all – for a type of reason for action that exists other than desire. If he does not, then the fact that he does not like the conclusion that comes up when we consider the reasons for action that do exist is irrelevant – there are no more reasons for action that exist that can change that answer.
So, these are the three hurdles that somebody is going to have to clear when they present a story that says that desire utilitarianism yields a conclusion that they do not like.
The first hurdle is to show that the unwanted conclusion is actually a conclusion from desire utilitarianism rather than (for example) desire fulfillment act utilitarianism.
The second hurdle is to justify the step that goes from, "I don't like that particular implication," to the conclusion that, "It is your theory, rather than my feelings, that is the problem here. My feelings cannot possibly be subject to error, so the error must be in your theory."
The third hurdle is to come up with reasons for action that exist other than, or in addition to, desires.