Some current discussion among members of the studio audience invites me to take a closer look at the concepts of ‘happiness’ and ‘desire fulfillment’.
One of the questions that I need to look at is whether they are the same thing, or if they are importantly different.
If the two concepts are the same, then their truth-conditions should co-vary. Specifically, if happiness and desire fulfillment are the same, then when a proposition about happiness goes from being true to false, a corresponding proposition about desire fulfillment should also go from being true to false. If they do not, then there is a difference between them. And when we encounter such a difference, we can ask which of the two value tracks.
So, let’s take the case of Mary. Mary is sitting at her computer, reading an email she has received from her daughter-in-law, Susan. Susan and her husband (Mary’s son) Brad are with their children enjoying a vacation in Australia. At this point, Mary is happy.
At the very instant that Mary is reading the email, a drunk driver hits the car that her son is driving. Her son and one of her grandchildren are killed. Her daughter-in-law is paralyzed, and her other grandchild suffers severe and irreversible brain damage. At the moment of the accident, the proposition, “Mary is happy” remains true. There is no sense in which it is reasonable to say that Mary’s happiness changed from one moment to the next. No observer, watching Mary’s behavior as she reads the email, or measuring her vital signs, would notice any difference.
However, at the moment of the accident, the proposition, “Mary’s desire that her son and his family enjoy their vacation in Australia is being fulfilled,” goes instantly from being true to being false. Mary’s desire that P is fulfilled only in a state where P is true. Because of the automobile accident, P is no longer true, so Mary’s desire is no longer fulfilled.
So, we see here that happiness and desire fulfillment are not the same thing. In this one circumstance (and there is an infinite number of comparable examples) the truth conditions diverge – a happiness proposition remains true while a desire fulfillment proposition changes from true to false.
Is this an important difference?
This difference exists because ‘happiness’ is a mental state, while ‘desire fulfillment’ is a relationship between a mental state and a state of affairs in the real world.
Because happiness is a mental state alone, we can isolate happiness from the external world. Let us take Mary’s brain state as she reads the email – a state in which Mary is happy – and preserve it. Let’s put her brain in an infinite loop. In this state, Mary thinks that she is going to her computer, turning it on, happily reading the email, finishing the email, going to the kitchen, pouring herself a cup of coffee, going into the computer room (without any memory of the earlier event), finding and reading the email, and so forth.
Once Mary is in this infinite loop, we can do anything we want in the external world and it cannot affect her happiness.
If it is true that value tracks happiness, then the world in which Mary’s children and grandchildren get in the wreck is no more or less valuable to her than the world in which they do not. One of the things that happiness theory implies is, “What you do not know cannot hurt you.” Mary is not made unhappy by the accident. Mary is made unhappy by learning about the accident. The accident does not affect her brain states. The discovery that the accident took place is what harms her. So, by promoting ignorance (of things people do not want to hear) we can prevent harm.
The discussion in the studio audience on this issue brings up the possibility that happiness is still the only thing that matters. The accident, after all, cost the family in Australia the loss of a great deal of happiness.
But let’s remove this variable. Let’s take the family in Australia and put their brains in the same loop. Before the accident, they were enjoying the day snorkeling at the Great Barrier Reef. So, we take their brains at the time when they were the happiest and we lock them in a loop.
In fact, let us take everybody’s brain and lock it in a loop when that person was the happiest. We will set up machines to monitor these brain states – machines that we will assume have no chance of breaking down.
Under the happiness theory of value, this would be utopia. Nothing could be better than to have all these brains experiencing nothing but their best state of happiness in perpetuity.
That is, if the happiness theory of value were correct.
Yet, some people look upon this description and shudder. They do not see this as the best of all possible worlds. They see this as a horrendously meaningless existence. In terms of happiness, nothing can be better. If something is better than this, then that something must be a state in which happiness is sacrificed in favor of something else of value.
Happiness is, indeed, one of the things that we value. But there are others, and people are willing to give up a little happiness in order to purchase this “something else”. They are willing to endure a little suffering if it brings them more of this “something else”.
Desire fulfillment theory explains why events external to our brain states matter. A ‘desire that P’ is a mental state that motivates an agent to realize a state in which P is true. A state in which P is not true (even if the agent falsely believes that it is true) has no value – at least as it relates to that desire.
Value is not a brain state. Value is a relationship between a brain state and a state of affairs in the world. Alter the state of affairs and value can instantly vanish. We do not have to wait for the agent to find out about it.
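To make the distinction concrete, here is a minimal sketch in Python. The names and structure are my own illustration, not anything official in the theory: happiness lives entirely inside the agent, while fulfillment is computed from the world.

    # Illustrative sketch only: the classes below are invented to model
    # the Mary example, not taken from any formal theory.
    from dataclasses import dataclass

    @dataclass
    class World:
        p_is_true: bool   # is P ("Mary's family is enjoying their vacation") true?

    @dataclass
    class Agent:
        believes_p: bool  # Mary's belief about P (a brain state)
        happy: bool       # Mary's happiness (a brain state)

    def desire_fulfilled(agent: Agent, world: World) -> bool:
        # Fulfillment is a relation to the world; the agent's beliefs
        # and feelings play no role in it.
        return world.p_is_true

    mary = Agent(believes_p=True, happy=True)
    world = World(p_is_true=True)
    print(mary.happy, desire_fulfilled(mary, world))  # True True

    # The accident changes the world, not Mary's brain.
    world.p_is_true = False
    print(mary.happy, desire_fulfilled(mary, world))  # True False

Nothing inside Mary's head changes at the instant of the accident, yet the fulfillment relation flips from true to false.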
Desire and Motivation
From Atheist Observer:
You have desires that are not happiness or satisfaction related. Fine. Why do you have them? Where did they come from and how were they acquired?
Consider the two options. (1) Nature molded us to be concerned with one thing and one thing only, and that is whether our brain is in a particular state – a state of ‘being happy’. (2) Nature molded us to have a number of concerns to create states in the real world – states, for example, that result in our genetic replication and in the survival of our children to the point and in a condition in which they can have children of their own.
Why would nature mold us to have one and only one concern, that being the concern that our brain is in a particular state? How did that happen?
Assume that you were building a robot that you would wish to see survive a hostile environment. Your robot can be damaged by excessive heat. So, you program your robot so that it can measure temperature differences and so that it moves away from unusually hot locations. In other words, you provide your robot with primitive versions of ‘beliefs’ about the temperature and a primitive form of an ‘aversion’ to high temperatures.
Also, a fall might harm your robot. Therefore, you program your robot with a way of sensing how far it would fall under different circumstances. You also program it with a primitive aversion to states of affairs in which there is a significant risk of falling far enough to cause harm.
Of course, circumstances arise in which the robot must make a choice between entering an area with higher temperatures or risking a fall. So, you give these desires a rank – and build the robot so that it performs the action that fulfills the stronger of its two desires.
Finally, you fine-tune your robot a little. You make the strength of an aversion proportional to the measure of the state to which it is averse. So, the robot has a stronger aversion to entering a higher-temperature region than to entering a lower-temperature region. It has a stronger aversion to falling a longer distance than to falling a shorter distance.
In comparing these desires, the robot, if faced with a choice between entering a region with moderately high temperature or falling a great distance, will choose the moderately high temperature. If faced with a choice between a region with very high temperature or falling a moderate distance, it will choose to fall a moderate distance. It takes that action that fulfills the more and stronger of its desires (or, in this case, that avoids the more and stronger of its aversions).
The point to note here is that there is no need for happiness. Your robot is not programmed to realize a state of happiness. Your robot is programmed to avoid a state of high temperature or a risk of falling a great distance.
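Here is a minimal sketch of such a robot in Python. The particular weights and magnitudes are assumptions of mine for illustration; the point is that the decision procedure mentions only the aversions themselves, never a happiness state.

    # Illustrative sketch; the numeric weights below are assumed, not derived.
    def aversion_strength(kind, magnitude):
        # Aversion strength is proportional to the magnitude of the
        # state the robot is averse to (heat level, fall distance).
        weights = {"heat": 2.0, "fall": 1.5}  # assumed proportionality constants
        return weights[kind] * magnitude

    def choose_action(options):
        # Pick the option with the lowest total aversion strength, i.e.
        # the action that avoids the more and stronger of the aversions.
        return min(options, key=lambda opt: sum(
            aversion_strength(kind, mag) for kind, mag in opt["risks"]))

    # Moderately high temperature versus a long fall: the robot braves the heat.
    options = [
        {"name": "cross the hot region", "risks": [("heat", 3.0)]},
        {"name": "descend the long drop", "risks": [("fall", 8.0)]},
    ]
    print(choose_action(options)["name"])  # cross the hot region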
I can illustrate this same point by looking at a simple example. My cat walks into the kitchen for some food. One explanation for my cat’s behavior is that my cat wants to eat something and knows that there is food in the kitchen. Another explanation for my cat’s behavior is that my cat wants to be happy, believes that eating food will make it happy, and believes that there is food in the kitchen.
The first explanation is the simpler. We have every reason to stick with it unless and until we have compelling reason to complicate our description.
I assert that I am not much different from the cat. When I wander into the kitchen it is not because I have a desire for happiness and suspect that something in the kitchen might provide me with happiness. I just want something to eat – that’s all. Happiness, if it comes, is a side-effect; icing on the cake, as it were.
4 comments:
Alonzo,
You have made a couple of logical errors. First, you say DU explains why external things matter. It does not. It states that they matter to fulfill desires, but does not explain why we want to fulfill these desires.
Second, you confuse conceptual simplicity with biological simplicity. Conceptually it is simpler to have no underlying reward system for a particular desire, but biologically it is far simpler to have one or a few reward systems which can be used to motivate a wide variety of desires than to evolve a different motivation or reward system for each new desire.
I don’t know if happiness is the only motivator, or even if happiness is a particularly good descriptive word for the underlying motivational reward state, but I think it highly likely that the fundamentals of behavior motivation depend on no more than the minimum processes and mechanisms that evolution has found necessary for us to survive.
Atheist Observer
(1) I do not need to explain why we want to fulfill these desires because I deny that we want to fulfill these desires.
A desire that P is a brain state that motivates an agent to bring about a state of affairs in which P is true. The desire explains why the agent wants P. The desire also explains the motivation to bring about P, because the desire that P and the motivation to bring about P are the same thing. They are two different descriptions of the same brain state.
You want me to explain "why we want to fulfill these desires". However, to say that we want to fulfill these desires is to say that we desire to fulfill these desires. Or, it is to say that we "desire that P" where P = "that our desires are fulfilled."
If I am required to give an answer to this question then I must also be required to give an answer to the next question:
Why do we want to fulfill our desire that we fulfill our desires?
Which leads to the next question: Why do we want to fulfill our desire to fulfill our desire to fulfill our desires?
Your question contains an assumption that leads to an infinite regress. This suggests that there is something wrong with the question. What is wrong with the question is that it assumes that motivation is something separate from the desire. It takes a desire and says, "Where is the motivation to fulfill this desire?"
A desire is a motivational state. The motivation to fulfill a desire is built into the desire itself. Indeed, if an agent is not motivated to act so as to bring about P, then it would be a mistake to say that he desires that P.
(2) It is not, in fact, simpler to have one reward system that motivates a group of different actions.
Let us say that an agent desires H, and does actions A1, A2, A3, A4, and A5 because they are means to bring about H. You still have to postulate five different pathways to H - plus you have to postulate H.
If, instead, you postulate that the agent desires A1, A2, A3, A4, and A5 directly, this turns out to be simpler than postulating five different pathways to H.
Another problem with the happiness theory is the incommensurability of value - the fact that there is something missing when we obtain one good and sacrifice another. This suggests that we have multiple goods, not just one good (H).
And there is the question of why evolution would mold us to realize a particular brain state rather than to realize states in the real world. It's states in the world that are responsible for our genetic replication, not brain states.
I did not intend the question to be the nonsensical “why do we want to do the things we want to do?” but the question you seem never to want to answer, “Where do these desires come from?”
It’s relatively easy to determine that certain chemical states drive one to seek food, and that there are hormones that drive sex urges, but how does one come to have a desire to make the world a better place? “I want to make the world a better place because I have a desire to make it a better place” explains nothing; it’s just circular.
The idea we seek to make propositions true is philosophically interesting, but it’s almost certainly not consciously true for virtually all animal behavior, including some very complex ones. Since we’re animals, that means either 1) we have a totally different motivational system than all other animals, 2) we have some sort of tacked-on additional motivational system, or 3) this whole “seeking to make propositions true” concept does not accurately describe our motivational system at all.
In your example you have an agent with desires A1, A2, A3, A4, and A5. I can easily propose that the agent desires H and sees all these as ways to get to H. Without H, why does the agent have these desires?
The issue of incommensurability of value can only be used if one demands that happiness be a one-dimensional vector. If one allows happiness instead to be a two-dimensional plane or three-dimensional space, based on some combination of three brain chemicals, say serotonin, dopamine, and adrenaline, it’s quite possible one could enjoy the sensation of happiness at one point in this space, but realize that the sensation of happiness could be qualitatively different at another point.
As to your last question, all evolution can do is work with our brain states. Our genes can’t control the external world. As you have so often stated, the external world has no value at all without desires. What we have to establish is how evolution can tell the brain what things out there in the real world are good and bad. Some things it gives us an instinctive fear of. Some things it teaches us through pain. Some things it rewards us through pleasure. You claim this is not all. I’d just like to know what other mechanisms you think evolution uses to give us our view of what real things out there we should desire to be true.
An interesting discussion, if I can stir the pot:-)
@atheistobserver: The idea we seek to make propositions true is philosophically interesting, but it’s almost certainly not consciously true for virtually all animal behavior, including some very complex ones.
Who says it needs to be "consciously true"? Desire here is a description of a brain state. In our case, usually although not always, we can verbalise our desires. The fact that animals cannot, and may not even be conscious of their desires, does not alter this as a description of a type of brain state.
@atheistobserver: Since we’re animals, that means either 1) we have a totally different motivational system than all other animals,
This does not follow at all. Please explain how they are fundamentally different.
@atheistobserver: 2) we have some sort of tacked-on additional motivational system,
No, rather we have an expanded motivational system: due to the capacities for imagination and symbolisation, the targets of our desires are greatly expanded beyond those of animals; plus we can verbalise and analyse these desires, which animals cannot do either. Still, underlying this is the same type of motivational system for both us and other animals.
@atheistobserver: or 3) this whole “seeking to make propositions true” concept does not accurately describe our motivational system at all.
I cannot see how this follows given the above.
@atheistobserver: I can easily propose that the agent desires H and sees all these as ways to get to H. Without H, why does the agent have these desires?
You can propose this, yes, but what is your evidence that this is the case? It looks (and not just from this quote) as if happiness is taken to be an intrinsic value and the only end against which all desires are evaluated, with all of them means to that end. Surely this falls prey to the Euthyphro dilemma: is it good because it makes you happy, or does it make you happy because it is good? There are numerous problems if you take the first horn of this dilemma, and if you take the second, then happiness has not added anything substantive to the debate.
@atheistobserver: The issue of incommensurability of value ... If one allows happiness instead to be a two-dimensional plane or three-dimensional space, based on some combination of three brain chemicals, say serotonin, dopamine, and adrenaline, it’s quite possible one could enjoy the sensation of happiness at one point in this space, but realize that the sensation of happiness could be qualitatively different at another point.
Your idea of happiness is becoming very nebulous with this multi-dimensional formulation. Why not avoid talk of "happiness" and just talk about these chemicals, as the eliminativist would like? Why posit happiness as an intervening variable, or as multiple entities, when Occam's razor would eliminate it? Why not just state that desires are brain states involving the various neurochemicals?
@atheistobserver: What we have to establish is how evolution can tell the brain what things out there in the real world are good and bad.
Evolution does not tell the brain what is good and bad in the world, so there is nothing to establish. Organisms succeed by obtaining states of affairs suitable for survival and replication, and by avoiding states of affairs that prevent survival and replication, including providing heritable characteristics to their progeny. The brain is just one of the means, an important one yes, but not an end.
@atheistobserver: Some things it gives us an instinctive fear of. Some things it teaches us through pain. Some things it rewards us through pleasure. You claim this is not all.
I assume by 'it' you mean evolution all these some are means of using the brain as a means to the success of the organism. Our motivational set might have these as prototypes but it is much broader - due to the expanded capacities of our brain - than collapsing this to these two features as a single dimension.