I am taking advantage of a vacation (at the time of writing) to Germany to address some substantive comments concerning desirism (desire utilitarianism) as an ethical theory.
In late May of this year I received a series of comments from Richard Carrier. There were actually quite a few comments so I am going to ignore what I think are side issues and focus on what I see as more significant differences.
As I see it, the main difference is that Carrier appears to be an act utilitarian, whereas I am a desire utilitarian. Carrier’s writing suggests, at least to me, that he holds actions to be the primary objects of moral evaluation, whereas I hold that morality is primarily concerned with the evaluation of desires. Also, Carrier seems to hold that actions are to be evaluated in terms of the maximization of happiness, whereas I hold that desires are to be evaluated according to their tendency to fulfill other desires.
Another point that must be made is that Carrier takes our differences to be small and possibly non-existent. If the above description is correct, I would disagree with that assessment. However, this claim on Carrier’s part suggests that the above interpretation is not correct.
In actual fact, humans operate on a system of dispositions . . . which have causal consequences on the whole gamut of their decision making, which in turn has an aggregate effect on the conditions of their life . . . which in turn affects their baseline of happiness.
First, I do not know what a "baseline of happiness" is supposed to mean.
A person may be able to make the case that such things as, “[W]hat sorts of friends they believe they have, and whether they believe the cops are hunting them down” might have an effect on happiness. A person’s happiness can be dramatically affected by what they believe is true about the world around them. Assume that somebody has just won a major lottery but has not yet been told about it. The change in happiness does not come when they win the lottery. It comes when they find out about it.
Yet, a person can want to win the lottery, even if she knows that the lottery drawing will take place after she has died and the news has no ability to affect her happiness. What she wants, in this case, is not happiness. What she wants is whatever will be made true by the fact that she has won the lottery.
With respect to imaginary cases such as this, Carrier asks:
[D]o we want to know (a) what people will do in this or that situation (imaginary or real), or (b) what they would do if they were fully informed and thought everything through?
Actually, as far as I am concerned, the best theory of intentional action can handle both cases. It can handle the case in which a person acts with full knowledge, and the case in which a person acts with limited knowledge.
Carrier says that the answer to the first question will give us descriptive ethics, while the answer to the second question will give us prescriptive ethics. I would argue that the answer to the second question will tell you what the agent practically ought to do (what is prudent), but it will not tell us anything about morality.
First, these questions have an answer even if the universe contains only a single person. Yet, I hold that there is no morality unless there are multiple agents with potentially conflicting desires. A lone person, no matter what they do, can act imprudently, but not immorally.
Second, what a person would do, even if fully informed, is fulfill the most and strongest of his desires, given his beliefs. A "fully informed" person who loves to rape and mutilate children who has "thought everything through" and figured out a way to rape and mutilate a child with no chance of getting caught will rape and mutilate a child. Yet, it would be difficult to count such an act as moral.
The question that I hold needs to be asked and answered to give us prescriptive ethics is, "What malleable desires do people generally have the most and strongest reason to promote in people generally?" It is also relevant that the tools for promoting or inhibiting desires are praise, condemnation, reward, and punishment. Where there is no social need for these practices, there is no institution of morality.
People, if they wish, can assign the term “morality” to the selection of desires that are useful in relation to their other desires. This is a semantic question, like the question of whether the term ‘planet’ should be defined in such a way that it includes Pluto. No astronomical facts are affected by whether or not we are going to call Pluto a planet, and no moral facts are affected by whether or not we call the choice of desires by an agent alone in the universe, relative to his other desires, ‘moral’.
Where it comes to right action, I do not ask what a person would do if fully informed. I ask what a person would do if fully informed and having those desires that people generally have reason to promote and not having those desires that people generally have reason to inhibit. The person with the desire to rape and torture children, then, would not count as moral even if he discovers a way of acting on his desire with impunity. He would still be immoral in virtue of having desires that people generally have reason to inhibit.