Monday, June 22, 2009

The Harmony of Desires

A member of the studio audience has given me reason to go a little deeper into the nature of desire.

People are not actually moved by the desire to maximize the harmonicity of desires. They have more specific and diverse desires.

This is true, and this is exactly what desire fulfillment theory claims.

Consider a being that has a desire that A, and a stronger desire that B.

Such a being would prefer a state of affairs in which both A and B are true. If this option is not available, the next best thing is a state in which B is true. Failing that, the being would settle for a state in which A is true. The being would be indifferent to a state in which neither is true.

(Note: An aversion is a negative desire, not the absence of a desire. This agent is not averse to the state in which neither A nor B is true; the agent simply sees no value in such a state. An aversion is a negative desire (a desire that not-A). It is not the absence of a desire (not having a desire that A).)
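A minimal sketch of the ranking just described (in Python, with invented strength numbers that are not in the original; only the ordering matters): each desire contributes its strength to any state of affairs in which its object is true, and the agent prefers states with higher totals. Indifference to the "neither" state shows up as a value of zero rather than a negative value.

```python
# Hypothetical strengths; the only assumption is that the desire that B
# is stronger than the desire that A.
desires = {"A": 1.0, "B": 2.0}

def value_to_agent(state):
    # Sum the strengths of the desires whose objects are true in this state.
    return sum(strength for prop, strength in desires.items() if state[prop])

states = {
    "A and B": {"A": True,  "B": True},
    "B only":  {"A": False, "B": True},
    "A only":  {"A": True,  "B": False},
    "neither": {"A": False, "B": False},
}

# Rank the states from most preferred to least preferred.
for name in sorted(states, key=lambda n: -value_to_agent(states[n])):
    print(name, value_to_agent(states[name]))
# A and B (3.0) > B only (2.0) > A only (1.0) > neither (0.0)
```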

I do not need to postulate a third desire (in fact, it would be foolish to postulate one): a 'desire to maximize the harmonicity of desires.' What would this give me? Now we would have three desires: a desire that A, a desire that B, and a third desire that C, where C = "the maximization of the harmonicity of desires". This does not answer any questions, because we still need to figure out how this desire that C fits with the original desires.

Think of a hot air balloon being launched. It has a force causing it to go up. It has another force (wind) causing it to move to the east. The result is that it will move up and to the east. We do not need to postulate a principle of physics that says that things should ‘maximize the harmonicity of the forces acting upon them.’ All we need are the two forces themselves. They will do all the work.
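To make the analogy concrete, here is a toy vector addition with invented magnitudes; the point is only that the resultant motion falls out of the two forces themselves, with no extra principle added.

```python
# Toy example with invented magnitudes: a buoyant force pushing up
# and a wind force pushing east, expressed as (east, up) components.
lift = (0.0, 3.0)
wind = (2.0, 0.0)

# The resultant is just the component-wise sum of the two forces.
resultant = (lift[0] + wind[0], lift[1] + wind[1])
print(resultant)  # (2.0, 3.0): the balloon moves up and to the east
```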

The presence of a desire that A motivating the agent to realize states of affairs in which A is true, and a desire that B motivating the agent to realize states of affairs in which B is true, is sufficient. The agent then has sufficient reason to act so as to realize a state of affairs in which both A and B are true.

Now, let us introduce a second person into this system. Furthermore, we give our first agent the option of choosing to give this second person either a desire that D or a desire that E. A desire that D will motivate the second person to act in ways that will bring about a state in which A and B are both true. A desire that E will motivate the second person to bring about a state in which neither is true (or only one is true).

Clearly, our agent has reason to act to give this new agent a desire that D.
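The same bookkeeping as in the earlier sketch covers this choice: score each candidate desire for the second person by the state of affairs it would tend to bring about, valued against the first agent's existing desires. The strengths and the desire-to-state mapping are stipulated for illustration, as they are in the paragraph above.

```python
# The first agent's desires, with the same invented strengths as before.
desires = {"A": 1.0, "B": 2.0}

def value_to_agent(state):
    return sum(strength for prop, strength in desires.items() if state[prop])

# Stipulated, as in the post: giving the second person a desire that D
# tends to produce a state in which A and B are both true; a desire
# that E tends to produce a state in which neither is true.
outcome_of_giving = {
    "desire that D": {"A": True,  "B": True},
    "desire that E": {"A": False, "B": False},
}

best = max(outcome_of_giving, key=lambda d: value_to_agent(outcome_of_giving[d]))
print(best)  # desire that D: the first agent has the most reason to promote it
```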

Again, I do not need to postulate any type of desire to maximize the harmonicity of desires. I do not even need to postulate altruism or empathy. I need nothing but the desire that A and the desire that B.

Again, I do not need to add anything else to the mix. The fact that the agent has a desire that A and a desire that B is sufficient to give him a reason to choose harmonious desires for other agents.

[T]he idea that all desires should be fulfilled equally ignores their separate nature (so that one sense can overcome the other), the existence of other intuitive desires.

There is no idea that all desires should be fulfilled equally. There are only the desires themselves, motivating agents to act so as to realize states of affairs in which the propositions that are the objects of their desires are true.

Our agent, in the case above, does not need an idea that his desire that A should be fulfilled equally with his desire that B. He simply has a desire that A motivating him to realize a state of affairs in which A is true (which fulfills the desire that A), and a desire that B motivating him to realize a state of affairs in which B is true (which fulfills the desire that B). This agent is motivated to choose a state in which both A and B are true above all others.

And this agent, having the motivation to realize states of affairs in which A and B are true, has a motive to cause others to have desires compatible with realizing those states of affairs.

An idea that all desires should be fulfilled equally does no work. It serves no purpose, and can be discarded. So, it is no objection to this theory that I cannot defend such a thesis.

12 comments:

  1. Again, great stuff. This is a point that my own readers rarely seem to get, and I can link them here from now on. :)

  2. "The fact that the agent has a desire that A and desire that B is sufficient to give him a reason to choose harmonious desires for other agents."

    Not true. The fact implies he has a reason to choose desires for others that will serve his desires, not to choose ones that will be harmonious. Pile on more new agents, and this will become clear.

    Your dictum is that "Desire utilitarianism ... identifies a good desire as one that tends to fulfill other desires, and bad desires as those that tend to thwart other desires" [http://atheistethicist.blogspot.com/2007/01/answering-m-on-subjectivism.html]. Yet you also say, "this agent, having the motivation to realize states of affairs in which A and B are true, has a motive to cause others to have desires compatible with realizing those states of affairs," implying this is the "good".

    This is a contradiction. Decide - should the agent promote desires that advance A and B, or ones that tend to promote other desires? Should we promote all desires equally, and consider only how they tend to be harmonious, or do we want to treat them un-equally, according to which desires we value? The two can often be one and the same, but they are not logically so, and in some cases won't be.


    How can you claim you do not want to fulfill all desires equally, when your definition of "good" is founded on calculating the weighted sum of all desires [i.e. how much they "tend" to fulfill other desires], treating them all equally? You are implicitly assuming that all desires should be treated equally when you weigh them this way.

    I am afraid subjectivism is true - calling the weighted sum "good" is but rhetorical trickery; the fact remains that it is just an abstract calculation, not anyone's desires. Each agent should act to fulfill his desires, not this sum, from the very definition of what "his desires" means.

  3. Alonzo -

    What does DU say about "private morality"? (This is the term that some use; I'm not sure it's correct.)

    Example:

    I am the only person in the Universe. I have certain desires that I'd like to fulfill. However, I find myself doing some things that end up thwarting my own desires. Specifically, sometimes I lack the will-power to do what I need to do to fulfill my own desires. I want to change my own desires (increase their strength) such that I will be more motivated to fulfill the desires in question.

    Are these moral issues? Or what?

  4. "Not true. The fact implies he has a reason to choose desires for others that will serve his desires, not to choose ones that will be harmonious. Pile on more newcoming agents, and this will become clear."

    As long as we are talking about desires and not actions, there is no difference.

  5. I obviously need to add something to the statement above.

    Obviously, a desire which tends to fulfill my desires is not necessarily the same as a desire that tends to promote all desires.

    However, I am not the only agent in society. There are other agents, and they have reason to promote desires that tend to fulfill their desires (and the desires of those they care about).

    In developing a language, I can rest assured that society in general will have no interest in that which fulfills my desires. However, people generally, in inventing a language, do have reason to be concerned with that which tends to fulfill other desires. Indeed, people generally have many and strong reasons to promote those desires that tend to fulfill other desires, and little or no reason to be concerned with promoting those desires that tend to fulfill my desires.

    The language of morality evolves around what people generally have reason to be concerned with.

  6. "This is a contradiction. Decide - should the agent promote desires that advance A and B, or ones that tend to promote other desires?"

    Neither.

    First, I want to warn that the term 'should' is ambiguous. It is extremely easy to equivocate between the different meanings. It is no contradiction to say, "An agent should do X" and "An agent should not do X" at the same time - using two different definitions of the word.

    For example, it may be the case that an act will fulfill the most and the strongest of the desires of the agent, but would not fulfill the desires of an agent with good malleable desires. In this case, the agent practical-should do X (e.g., murder the witnesses), but this does not imply that he moral-should do X.

    With this in mind, desire utilitarianism will eventually get to a moral-should of, "The agent should perform that action that a person with good malleable desires would perform."

    However, we are not anywhere near that yet.

    Here, we are still talking about the fundamental relationships between desires, reasons for action, and states of affairs.

    The first agent, in this case, has the most and strongest reason to give the new agent the desire that will tend to realize a state in which both A and B are true.

    Do you disagree with this conclusion?

    Don't try to read a moral-should into this. It's too early for that. Just look at the reasons for action that exist.

  7. "How can you claim you do not want to fulfill all desires equally, when your definition of "good" is founded on calculating the weighted sum of all desires [i.e. how much they "tend" to fulfill other desires], treating them all equally? You are implicitly assuming that all desires should be treated equally when you weigh them this way."

    No, I am not.

    Any more than a physicist who says that the result of several forces operating on an object is the vector sum of those forces. He is not declaring a universal moral 'ought' that says the moral universe requires that all the forces be considered and that their vector sum is the obligatory answer.

    It appears to me that you are begging the question. You are assuming that strict subjectivism must be true, so you are fishing for the subjective 'ought' that is being assumed in this argument. You are so determined to find it that, when you do not find one, you invent one and throw it in.

    Your objections have been consistently of the form, "But, if you throw a moral assumption in here then you get strange results." My answer has consistently been, "Why are you throwing a subjective moral assumption in there? I never said anything about such an assumption - and you have not demonstrated that my argument fails without it."

  8. Kip

    There is no such thing as 'private morality'.

    Certain intrinsic-value theories suggest the possibility of certain actions being intrinsically good or bad, even if performed by a person living alone in the universe. However, since intrinsic values do not exist, the implications of intrinsic-value theory (including a private morality) do not exist either.

    Now, a person can have conflicting desires.

    Furthermore, future desires have no effect on present action. It takes a present desire that future desires be fulfilled to affect present action. Even here, this current desire is one desire among many, and must be weighed against them.

    So, a person can have a current desire that tends to thwart other current desires, and is more likely to have current desires that tend to thwart future desires.

    In the first case, the agent also has current reasons for action that exist to get rid of those desires that tend to thwart other current desires. He has reason to bring his current desires into harmony with each other.

    The 'lack of will power' that you speak of, however, has to do with current desires that the agent may well know will thwart future desires, where those future desires have no power to influence current action. In these cases, the agent knows that an act will thwart future desires and is also (relatively) powerless to prevent it.

  9. Alonzo> "In these cases, the agent knows that an act will thwart future desires and is also (relatively) powerless to prevent it."

    Can't the person change his own desires? (Over time, using various tools similar to the ones that society uses to change desires of others?) Ascetics seem to be able to do this to some extent.

    This brings up another question, though. What timeframe does DU consider when calculating desire fulfillment? Shouldn't future desires be considered? But if they are, then how far into the future? And if they aren't, then it seems that "making the world a better place" by examining only current desires might end up making the world a worse place in the future.

  10. Alonzo

    "It appears to me that you are begging the question. You are assuming that strict subjectivism must be true, so you are fishing for the subjective 'ought' that is being assumed in this argument. You are so determined to find it that, when you do not find one, you invent one and throw it in."
    You have re-discovered my issue with Yair. Since he refused to define morality except as subjective, and so a priori was unable to argue for subjectivism, let alone compare his conclusion to others such as DU, I gave up on him. Hopefully you might find a way.

  11. "It appears to me that you are begging the question. You are assuming that strict subjectivism must be true, so you are fishing for the subjective 'ought' that is being assumed in this argument. You are so determined to find it that, when you do not find one, you invent one and throw it in."
    I'm assuming moral theories should be prescriptive - they should convince people to act in certain ways. This leads me to the conclusion of subjectivism, and to object to you introducing a non-subjective yet moral ought. Are you saying your theory is not prescriptive?

    [Me] "You are implicitly assuming that all desires should be treated equally when you weigh them this way."

    [Alonzo] "No, I am not."

    [Alonzo] "Any more than a physicist who says that the result of several forces operating on an object is the vector sum of those forces."

    But the physicist is not prescriptive. We can, after all, introduce lots of ways of weighing desires - just like a physicist can weigh forces to get the net force, or weigh moments to get the overall moment, and so on. Why choose this particular sum?

    When you weigh things in a moral calculus, you imply this is how to weigh things to be prescriptive, that people morally-should consider them this way when making choices. If this weighing treats all desires equally, you are advocating the moral equality of desires.

    [Alonzo] "The first agent, in this case, has the most and strongest reason to give the new agent the desire that will tend to realize a state in which both A and B are true."

    [Alonzo] "Do you disagree with this conclusion?"

    Sure.

  12. "Sure" was a mistake there - I meant "sure, I agree with you", not "sure, I disagree"...
