Last week I wrote some objections to the theory that happiness is the sole ultimate value and the sole concern of ethics. Today, I want to add a couple more objections to happiness theory. However, I first want to spend a few sentences on the question, “Why does it matter?”
Is there anything that you want? I hold that morality is concerned with how best to help people get what they want. A lack of concern with morality prevents people from getting what they want, while strong moral institutions help people to get what they want.
Of course, we cannot have an institution devoted to helping people get what they want until we understand what it is that people want. One theory suggests that people only want happiness. If this is true, then it is a fine basis for morality. If not, then the pursuit of happiness as the sole good, when it is not the only thing that people want, means that a lot of other wants will get left behind.
I hold that happiness theory is a mistake.
As it turns out, a couple of days after I posted my last article on the subject, "Happiness vs. Desire Fulfillment," Ebonmuse at Daylight Atheism (http://www.daylightatheism.org/) added a posting called "The Roots of Morality II: The Foundation," which said:
No matter what quality anyone proposes as the root of morality, it is always possible to ask why we should value that quality and not some other - except for one. There is only one quality that is immune to this question and that therefore can truly serve as the foundation of morality, and that quality is happiness.
This is precisely the thesis that I am arguing against.
Last week I made the following claims:
(1) There is no more reason to feel compelled to adopt the position that there is one basic desire (e.g., a desire for happiness) and that all other desires are a manifestation of this, than to adopt the position that there is one basic belief (e.g., a belief in God) and all other beliefs are manifestations of this.
(2) Using a story of a prisoner who can obtain happiness only by sacrificing her child, I argued that happiness theory cannot adequately explain the choices that people make.
(3) Happiness theory cannot explain how two people with identical beliefs will perform different actions without introducing a mysterious "third variable" alongside belief and the desire for happiness – a 'something else' that makes happiness theory inadequate.
This week I would like to add two more arguments:
(4) The Experience Machine Problem.
The experience machine problem involves cases in which a person is given a choice between living in the real world with its uncertainties and entering an experience machine that will give her the impressions of living in the real world under ideal circumstances.
The experience machine is designed to read the agent's thoughts and to feed her those experiences that would make her as happy as possible. If the agent becomes concerned that everything is too easy, the machine will feed her experiences of difficulty so that her happiness remains at its maximum.
Many people presented with such a choice report that they would prefer to live in the real world. However, happiness theory cannot explain this preference since, ex hypothesi, the experience machine will produce more happiness than the real world.
One cannot avoid this conclusion by stating that the agent in the machine is not “truly happy.” There is no qualitative difference between the happiness that the person will experience in the machine and the happiness of identical events happening in the real world – not without adding some really bizarre elements to the ‘happiness’ that Occam’s Razor would certainly threaten.
People say that they would not enter into the machine even if the experiences were guaranteed to be indistinguishable from real-world happiness. Nor do they express any longing for such a machine in the sense of saying, "Wouldn't it be great if such machines really could exist?" All of this suggests that people seek values besides happiness – values that sometimes outweigh their desire for happiness – and, in some cases, a fake experience that produces happiness has no value at all.
Desire fulfillment theory has no trouble handling the experience machine. Desire fulfillment theory says that we act so as to make true the propositions that are the objects of our desires. The experience machine has absolutely no ability to make true the objects of most of our desires. Consequently, desire fulfillment theory suggests that the happiness of the experience machine will sometimes (often?) have no value for agents.
The machine can certainly fulfill my desire for happiness. In fact, by removing the frustrations and the pains of the real world, I am quite convinced that I could be very happy in the machine. However, I have desires that the machine cannot fulfill. It cannot fulfill my desire to leave the world better than it would have otherwise been – because I would be locked in a machine accomplishing nothing. To fulfill that desire, I have to be a part of the real world. I can't waste my time being locked in a machine, no matter how happy the machine would make me.
I want to quickly point out that desire fulfillment theory does not regard 'desire fulfillment' as a sensation or any other specific entity. It is merely a term used to describe a relationship between a desire that 'P' and a state of affairs in which 'P' is true. That is it. There is nothing more.
I have often faced critics who attempt to argue that 'desire fulfillment' has the same problem, because the experience machine can provide the sensation of desire fulfillment. But the machine cannot provide desire fulfillment itself. Only a state of affairs in which 'P' is true can fulfill a desire that 'P'. There are some desires (e.g., a desire for happiness) that the machine can fulfill. For that reason, it may be tempting. However, there are other desires that the machine cannot fulfill. There are, then, some people who would have no interest in such a machine.
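For readers who think in code, the distinction between fulfilling a desire and merely producing the sensation of fulfillment can be made concrete in a minimal sketch. Every name in it (Agent, World, and so on) is my own illustrative invention, not part of any formal theory: a desire that 'P' counts as fulfilled only when 'P' is true in the world, regardless of what the agent believes or feels.

```python
# A minimal sketch of desire fulfillment vs. the experience machine.
# All names here are illustrative inventions.

class World:
    def __init__(self, true_propositions):
        self.true_propositions = set(true_propositions)

class Agent:
    def __init__(self, desires, beliefs):
        self.desires = set(desires)  # propositions the agent desires to be true
        self.beliefs = set(beliefs)  # propositions the agent believes to be true

def fulfilled(desire, world):
    # A desire that 'P' is fulfilled only if 'P' is true in the world.
    return desire in world.true_propositions

def felt_fulfilled(desire, agent):
    # The machine can only make the agent *believe* that 'P' is true.
    return desire in agent.beliefs

# The machine feeds the agent the belief "I helped others",
# but does not make that proposition true in the world.
world = World(true_propositions=[])
agent = Agent(desires=["I helped others"], beliefs=["I helped others"])

print(felt_fulfilled("I helped others", agent))  # True: the sensation is there
print(fulfilled("I helped others", world))       # False: the desire is not fulfilled
```

The machine can alter `agent.beliefs` at will, but it has no access to `world.true_propositions` – which is exactly why it leaves most desires unfulfilled.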
(5) The Incommensurability of Values.
The incommensurability of values concerns the ability that one value has to substitute for another.
Money is an example of a commensurable value. Assume that an investor has two mutually incompatible options: Option 1 will pay a 10% rate of return in one year; Option 2 will pay a 9% rate of return. Assume that the risk profiles are identical. The agent has every reason to go with Option 1. More importantly, the agent has no reason to regret, or even give a second thought to, the fact that he did not choose Option 2. It is an easy choice that requires absolutely no agony.
Much of our real-life decision-making is not like this. A person faces two career options. He could study moral philosophy and try to live his life as an ethicist, or he could study planetary astronomy and engineering and try to get a job in the unmanned space program. His interest in making the world a better place is slightly stronger than his interest in being a part of the unmanned space program. So, he invests his energies in the study of moral philosophy. However, the loss of the opportunity to be a part of the unmanned space program still carries its regrets. There is a hint of loss sitting in the background.
This sense of loss is an indicator that we are dealing with incommensurable values. One value may outweigh another, but it does not substitute for the other. It is not a choice between, "the same" and "more of the same." It is a choice between two distinctly different options.
Happiness theory attempts to reduce all value to one concern – happiness. It claims that all human choices are made between two options: 'happiness' and 'more happiness'. If this is an accurate description of the situation, it offers no explanation of why the person who chooses 'more happiness' over 'happiness' should have any regrets about the 'happiness' he did not get.
Desire-fulfillment theory, by contrast, handles this phenomenon as well. The agent has two desires – a desire that 'P' and a desire that 'Q', where it is not possible for both 'P' and 'Q' to be true. We may assume that the desire that 'P' is slightly stronger than the desire that 'Q'. Therefore, the agent acts so as to make 'P' true.
However, 'P' is not 'Q'. The fulfillment of the desire that 'P' still leaves the desire that 'Q' unfulfilled.
Because $100 is commensurate with $200, the desire for $100 is fulfilled in a state where the agent gets $200. There is no sense of loss because there is no loss.
If an agent’s choice is between 200 units of happiness versus 100 units of happiness, the desire for 100 units of happiness is fulfilled in a state where the agent has 200 units of happiness.
However, the fulfillment of the desire that 'P' leaves the desire that 'Q' unfulfilled. Desire fulfillment theory thus predicts and explains a sense of loss that happiness theory does not account for.
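The contrast between the two cases can be sketched in a few lines of code. The functions, labels, and strength numbers below are illustrative inventions: money is substitutable (a desire for $100 is fulfilled by any state containing at least $100), while two distinct desired propositions are not, so acting on the stronger one leaves the weaker one unfulfilled.

```python
# Sketch contrasting commensurable and incommensurable values.
# Names and numbers are illustrative only.

def money_desire_fulfilled(desired_amount, actual_amount):
    # Money is commensurable: a desire for $100 is fulfilled
    # by any state in which the agent has $100 or more.
    return actual_amount >= desired_amount

# Commensurable case: choosing the larger sum involves no loss.
print(money_desire_fulfilled(100, 200))  # True

# Incommensurable case: two distinct desired propositions
# with slightly different strengths.
desires = {"P: be an ethicist": 10, "Q: work in the space program": 9}
chosen = max(desires, key=desires.get)          # act on the stronger desire
unfulfilled = [d for d in desires if d != chosen]

print(chosen)        # 'P: be an ethicist'
print(unfulfilled)   # ['Q: work in the space program'] -- the residue of loss
```

The `unfulfilled` list is never empty in the incommensurable case, no matter which option wins – that remainder is the "hint of loss sitting in the background."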
Conclusion
We have two theories: happiness theory and desire-fulfillment theory. Of these two, desire-fulfillment theory does a better job of explaining and predicting a large set of phenomena involving human choice. That gives us reason to reject happiness theory and accept desire fulfillment theory in its place.
Ebonmuse also wrote,
In addition, there is a strong, purely practical reason to create a moral system that encourages individuals to contribute to the happiness of others, rather than the opposite. Namely, if your happiness is obtained in a way that makes other people unhappy, they will always oppose you and work to hinder your goals. On the other hand, if your happiness is derived wholly or partially from other peoples' happiness, they will be far more likely to assist you, since their goals aligning with yours, and you will be more likely to achieve your own ends and be happy as well.
Note the references to 'goals' and 'ends' in this quote. The idea that happiness is the only value suggests that Ebonmuse should be talking about a single 'goal' or 'end' (happiness) rather than using the plural. Desire-fulfillment theory, on the other hand, suggests multiple goals and ends.
I suggest that Ebonmuse take this talk of 'goals' and 'ends' and run with it – discarding all references to 'happiness.' This would yield something like:
In addition, there is a strong, purely practical reason to create a moral system that encourages individuals to acquire desires that tend to fulfill the desires of others, rather than the opposite. Namely, if your desires are fulfilled in a way that thwarts the desires of others, they will always oppose you and work to hinder your goals. On the other hand, if your desires are fulfilled wholly or partially through the fulfillment of the desires of other people, they will be far more likely to assist you, since their goals align with yours, and you will be more likely to achieve your own ends.
This also exposes another consideration. We have reason to use the tools of social conditioning at our disposal to promote in others those desires which tend to fulfill the desires of others, because we are the 'others' whose desires become more likely to be fulfilled. If we have an aversion to being killed, then we have reason to cause others to have an aversion to killing – because we are among the others they will then not kill.
At the same time, they have reason to cause in us an aversion to killing, a love of truth and honesty, and an aversion to taking that which does not belong to us.
Through the institution of morality, we promote these aversions to killing, deception, and theft by the widespread use of social tools such as condemnation. Hopefully, this will reduce the number of people with these desire-thwarting desires, and reduce the strength of those desires where they do exist.
7 comments:
Hi there Alonzo,
"There is no qualitative difference between the happiness that the person will experience in the machine and the happiness of identical events happening in the real world – not without adding some really bizarre elements to the ‘happiness’ that Occam’s Razor would certainly threaten."
I disagree strongly with this. There is a difference, and no "bizarre" elements need be added. Rather, the difference consists in a very ordinary and commonplace desire: the desire for genuine achievement. It is not at all hard to understand why a person who has this desire and cannot fulfill it would lack a very significant component of true happiness.
You yourself point out another important component of happiness that this machine by its nature could not possibly provide:
"However, I have desires that the machine cannot fulfill. It cannot fulfill my desire to leave the world better than it would have otherwise been – because I would be locked in a machine accomplishing nothing. To fulfill that desire, I have to be a part of the real world."
Your thought experiment is only fatal to happiness-based theories of morality if one takes the view that happiness consists solely of satisfying one's own desire for sensory pleasure. I do not take such a view. In fact, I think anyone who seeks happiness solely through such shallow routes will inevitably end up unhappy. On the contrary, it is entirely consonant with universal utilitarianism to posit that true happiness consists in fulfilling these important and meaningful desires to make a difference.
I am afraid that your concept of 'genuine achievement' will yield strange results.
For example, if a person must have 'genuine achievement' to be happy, and she has a goal of raising a healthy and happy child, then your account would yield the result that she is made happier by learning that her child had been killed. Until she gets the news, she has the false belief that her child is healthy and happy. This belief does not rest on 'genuine achievement,' which, on your account, means it cannot contribute to her happiness. When she learns of her child's death, she is finally properly connected to her genuine achievements.
This is not to say that she is happy. However, the claim that false beliefs cannot generate happiness suggests that she is happier (less unhappy) with the true belief that her child is dead than in ignorance of her child's death.
If this is the definition you are using, then it is, at the very least, a very strange concept of happiness – one on which the smiling and cheerful person is less happy than the person in tears.
Oh, and I do hold that 'happiness' is to be understood as an emotional state – not, as you say, one necessarily linked to pleasure, but not one linked to true belief either.
A belief that one's actions have benefited others may be a key to one's happiness. It does not have to depend on physical pleasure.
Yet, the belief alone generates happiness -- and will generate happiness even if the belief is false.
On the other hand, a desire cannot be fulfilled if the proposition that is the object of the desire is false. Such a person is happy, but might well say, 'I was not after happiness. What I wanted was to help others. Yes, thinking that I helped others makes me happy, but the happiness is the icing on the cake – not the goal. Helping others is the goal.'
Good points.
I'd note though, that while many people would reject the stark choice of an Experience Machine, one could argue that modern entertainment consists of the gradual construction of such a machine. People might ease into VR where they wouldn't leap into it.
It seems to me that the choice of whether or not to get in the Experience Machine is not so easy -- more or less so for different people.
In the movie "The Matrix," Neo has to make an important choice to do just that. Later in the movie, another character changes his mind and opts to get back into the machine.
For me, personally, I know this would not be an easy decision. I'd have to have a lot more information about the machine, and think about how it would affect other people.
Kip wrote:
It seems to me that the choice of whether or not to get in the Experience Machine is not so easy -- more or less so for different people.
Desire utilitarianism states that it will not be easy for some people. Some will easily choose to enter the machine. Some would find no value in it.
A person will act to make true a proposition that is the object of a desire.
If a person desires to be happy, or desires comfort, or is satisfied by playing the role of somebody who is admired by others, then an experience machine can fulfill that desire.
If a person desires to help others, to see their child graduate from college, or to rescue children from disease, the experience machine will have no value.
The person who says that all humans value nothing but happiness cannot explain why some people will choose to avoid the machine.
Desire utilitarianism does not predict that everybody will avoid the machine, only that some people will avoid it, and some will not, and some will have trouble deciding.
Furthermore, it says that the difference between these groups can be found in the number and strength of the desires whose objects the experience machine can make or keep true.
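That prediction can be sketched in a few lines. This is a hypothetical toy model, not a formal part of desire utilitarianism: each desire gets an invented strength number and a flag for whether the machine can fulfill it, and the agent's choice falls out of the totals.

```python
# Hypothetical sketch of the prediction above: whether an agent would
# enter the machine depends on the total strength of desires the
# machine can fulfill versus those it cannot. Numbers are invented.

def machine_choice(desires):
    # desires: list of (strength, machine_can_fulfill) pairs
    inside = sum(s for s, m in desires if m)
    outside = sum(s for s, m in desires if not m)
    if inside > outside:
        return "enter"
    if outside > inside:
        return "avoid"
    return "torn"

hedonist = [(8, True), (1, False)]     # mostly desires happiness and comfort
altruist = [(2, True), (9, False)]     # mostly desires to actually help others
conflicted = [(5, True), (5, False)]   # evenly split

print(machine_choice(hedonist))    # 'enter'
print(machine_choice(altruist))    # 'avoid'
print(machine_choice(conflicted))  # 'torn'
```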
Thanks, and I agree.
I wonder, though, would someone with good desires get in the machine, avoid the machine, or have trouble deciding?