Some current discussion among members of the studio audience invites me to take a closer look at the concepts of ‘happiness’ and ‘desire fulfillment’.
One of the questions that I need to look at is whether they are the same thing, or if they are importantly different.
If the two concepts are the same, then their truth-conditions should co-vary. Specifically, if happiness and desire fulfillment are the same, then when a proposition about happiness goes from being true to being false, a corresponding proposition about desire fulfillment should also go from being true to being false. If the truth-conditions come apart, then there is a difference between the two concepts. And where we find such a difference, we can ask which of the two value tracks.
So, let’s take the case of Mary. Mary is sitting at her computer, reading an email she has received from her daughter-in-law, Susan. Susan and her husband (Mary’s son) Brad are with their children enjoying a vacation in Australia. At this point, Mary is happy.
At the very instant that Mary is reading the email, a drunk driver hits the car that her son was driving. Her son and one of her grandchildren are killed. Her daughter-in-law is paralyzed, and her other grandchild suffers severe and irreversible brain damage. At the moment of the accident, the proposition, “Mary is happy” remains true. There is no sense in which it is reasonable to say that Mary’s happiness changed from one moment to the next. No observer, watching Mary’s behavior as she reads the email, or measuring her vital signs, would notice any difference.
However, at the moment of the accident, the proposition, “Mary’s desire that her son and his family enjoy their vacation in Australia is being fulfilled,” goes instantly from being true to being false. Mary’s desire that P is fulfilled only in a state where P is true. Because of the automobile accident, P is no longer true, so Mary’s desire is no longer fulfilled.
So, we see here that happiness and desire fulfillment are not the same thing. In this one circumstance (and there is an infinite number of comparable examples) the truth-conditions diverge – a happiness proposition remains true while a desire fulfillment proposition changes from true to false.
Is this an important difference?
This difference exists because ‘happiness’ is a mental state, while ‘desire fulfillment’ is a relationship between a mental state and a state of affairs in the real world.
Because happiness is a mental state alone, we can isolate happiness from the external world. Let us take Mary’s brain state as she reads the email – a state in which Mary is happy – and preserve it. Let’s put the brain in an infinite loop. In this state, Mary thinks that she is going to her computer, turning it on, happily reading the email, finishing the email, going to the kitchen, pouring herself a cup of coffee, going into the computer room (without any memory of the earlier event), finding and reading the email, and so forth.
Once Mary is in this infinite loop, we can do anything we want in the external world and it cannot affect her happiness.
If it is true that value tracks happiness, then the world in which Mary’s son and grandchildren get in the wreck is no more or less valuable to her than the world in which they do not get in a wreck. One of the things that happiness theory implies is, “What you do not know cannot hurt you.” Mary is not made unhappy by the accident. Mary is made unhappy by learning about the accident. The accident does not affect her brain states. The discovery that the accident took place is what harms her. So, by promoting ignorance (of things people do not want to hear) we can prevent harm.
The discussion in the studio audience on this issue brings up the possibility that happiness is still the only thing that matters. The accident, after all, cost the family in Australia the loss of a great deal of happiness.
But let’s remove this variable. Let’s take the family in Australia and put their brains in the same loop. Before the accident, they were enjoying the day snorkeling at the Great Barrier Reef. So, we take their brains at the time when they were the happiest and we lock them in a loop.
In fact, let us take everybody’s brain and lock it in a loop when that person was the happiest. We will set up machines to monitor these brain states – machines that we will assume have no chance of breaking down.
Under the happiness theory of value, this would be utopia. Nothing could be better than to have all these brains experiencing nothing but their best state of happiness in perpetuity.
That is, if the happiness theory of value were correct.
Yet, some people look upon this description and shudder. They do not see this as the best of all possible worlds. They see this as a horrendously meaningless existence. In terms of happiness, nothing can be better. If something is better than this, then that something must be a state in which happiness is sacrificed in favor of something else of value.
Happiness is, indeed, one of the things that we value. But there are others, and people are willing to pay a little happiness in order to purchase this “something else”. They are willing to endure a little suffering, if it brings them more of this “something else”.
Desire fulfillment theory explains why events external to our brain states matter. A ‘desire that P’ is a mental state that motivates an agent into realizing a state in which P is true. A state in which P is not true (even if the agent falsely believes that it is true) has no value – at least as it relates to that desire.
Value is not a brain state. Value is a relationship between a brain state and a state of affairs in the world. Alter the state of affairs and value can instantly vanish. We do not have to wait for the agent to find out about it.
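The relational claim can be put in a few lines of code. This is only a sketch – the names and the dictionary-as-world representation are my own illustrative assumptions – but it shows how a desire’s fulfillment depends on the world while the mental state stays fixed.

```python
# Sketch: desire fulfillment as a relation between a mental state
# and a state of affairs, not a property of the mental state alone.
# The 'world' is modeled as a dictionary mapping propositions to truth values.

def is_fulfilled(desire_that_p, world):
    """A 'desire that P' is fulfilled only in a state where P is true."""
    return world.get(desire_that_p, False)

# Mary's mental state: her desire (and her belief, and her happiness)
# is identical before and after the accident.
marys_desire = "the family is enjoying their vacation in Australia"

before_accident = {"the family is enjoying their vacation in Australia": True}
after_accident = {"the family is enjoying their vacation in Australia": False}

print(is_fulfilled(marys_desire, before_accident))  # True
print(is_fulfilled(marys_desire, after_accident))   # False
```

Nothing inside Mary changed between the two calls; only the second argument – the world – changed, and the fulfillment relation changed with it.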
Desire and Motivation
From Atheist Observer:
You have desires that are not happiness or satisfaction related. Fine. Why do you have them? Where did they come from and how were they acquired?
Consider that there are two options. (1) Nature molded us to be concerned with one thing and one thing only, and that is whether our brain is in a particular state – a state of ‘being happy’. (2) Nature molded us to have a number of concerns to create states in the real world – states that result, for example, in our genetic replication, or in our children surviving long enough, and in good enough condition, to have children of their own.
Why would nature mold us to have one and only one concern, that being the concern that our brain is in a particular state? How did that happen?
Assume that you were building a robot that you would wish to see survive a hostile environment. Your robot can be damaged by excessive heat. So, you program your robot so that it can measure temperature differences and so that it moves away from unusually hot locations. In other words, you provide your robot with primitive versions of ‘beliefs’ about the temperature and a primitive form of an ‘aversion’ to high temperatures.
Also, a fall might harm your robot. Therefore, you program your robot with a way of sensing how far it would fall under different circumstances. You also program it with a primitive aversion to states of affairs in which there is a significant risk of falling far enough to cause harm.
Of course, circumstances arise in which the robot must make a choice between entering an area with higher temperatures or risking a fall. So, you give these aversions a rank – and build the robot so that it performs the action that fulfills the stronger of its two desires.
Finally, you fine-tune your robot a little. You make the strength of an aversion proportional to the magnitude of the state to which it is averse. So, the robot has a stronger aversion to entering a higher-temperature region than to entering a lower-temperature region. It has a stronger aversion to falling a longer distance than to falling a shorter distance.
In comparing these desires, the robot, if faced with a choice between entering a region with moderately high temperature or falling a great distance, will choose the moderately high temperature. If faced with a choice between a region with very high temperature or falling a moderate distance, it will choose to fall a moderate distance. It takes that action that fulfills the more and stronger of its desires (or, in this case, that avoids the more and stronger of its aversions).
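The robot described above can be sketched in a few lines. The particular functions and scaling factors here are illustrative assumptions, not a real control system; the point is only that the choice procedure compares aversion strengths directly, with no ‘happiness’ variable anywhere in it.

```python
# Toy sketch of the robot: two aversions whose strength is proportional
# to the magnitude of the state averted, and a chooser that picks the
# option violating the weaker aversion. Numbers are arbitrary.

def heat_aversion(temperature_c):
    # Stronger aversion to higher temperatures (none below 40 C).
    return 1.0 * max(temperature_c - 40, 0)

def fall_aversion(drop_m):
    # Stronger aversion to longer falls.
    return 10.0 * drop_m

def choose(temperature_c, drop_m):
    """Take the action that thwarts the weaker of the two aversions."""
    if heat_aversion(temperature_c) < fall_aversion(drop_m):
        return "enter hot region"
    return "fall"

print(choose(60, 5))   # moderately high heat vs. a long fall -> enters heat
print(choose(500, 2))  # very high heat vs. a moderate fall -> falls
```

Both choices match the behavior described above, and at no point does the program consult or compute a state of happiness.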
The point to note here is that there is no need for happiness. Your robot is not programmed to realize a state of happiness. Your robot is programmed to avoid a state of high temperature or a risk of falling a great distance.
I can illustrate this same point by looking at a simple example. My cat walks into the kitchen for some food. One explanation for my cat’s behavior is that my cat wants to eat something and knows that there is food in the kitchen. Another explanation for my cat’s behavior is that my cat wants to be happy, believes that eating food will make it happy, and believes that there is food in the kitchen.
The first explanation is the simpler. We have every reason to stick with it unless and until we have compelling reason to complicate our description.
I assert that I am not much different from the cat. When I wander into the kitchen it is not because I have a desire for happiness and suspect that something in the kitchen might provide me with happiness. I just want something to eat – that’s all. Happiness, if it comes, is a side-effect; icing on the cake, as it were.