I looked back on some of my posts for the last couple of weeks and came up with a couple of questions that I would like to answer. “Is any of this useful? Is it important?”
For example: What does it matter whether morality is altruism (so that the study of altruism can be claimed to be the same thing as the study of morality)? What does it matter that happiness theory fails the experience machine test and cannot account for the commensurability of values?
Do these issues have real significance?
They certainly do for some people.
For example, one of my objections was that the study of altruism does not even begin to handle moral concepts such as excuse, negligence, ‘ought’ implies ‘can’, or even ‘ought’. Yet, every day, countless people live or die by these moral concepts, about which the concept of ‘altruism’ simply has nothing to say.
In fact, it would not be far from the truth to say that my objection to claims that link morality with altruism is that it is altruism – particularly genetic altruism – that is not important. If we have genetic altruism, then we will behave altruistically, as our genes dictate.
If we lack genetic altruism, then . . . well, then what? Then we need some way to promote non-genetic or learned altruism. This, I argue, is what morality is all about. It has nothing at all to do with genetic altruism (which we either do or do not have), and everything to do with learned altruism that we can promote through social forces.
The study of genetic altruism – and, in particular, the tendency on the part of some to confuse genetic altruism with morality – diverts attention from a very important moral question. That question is, “How can we get people to behave better than they would otherwise behave?”
In order to question happiness theory, I used a thought experiment focused on a hypothetical scientific invention – a Matrix, or a permanent Holodeck (from Star Trek), or an Experience Machine – that generates false beliefs that a person has acquired a desired end. However, is it at all important that a theory cannot account for choices that we might make in a science-fiction universe? How does that affect the choices we make in the real universe?
The basic answer is this: If you want to help people get what they want, and if you wish to stay out of their way, it would be useful for you to know what they really want. What a person really wants, when he has a desire that ‘P’, is for a state of affairs to exist where ‘P’ is true.
The real-world counterpart of the experience machine is the lie. Experience machines are the ultimate lie generators. Hypothetically, they manufacture perfectly convincing lies that the agent is in a state that she desires. However, the lie has no value. It is only the genuine state of affairs – a state of affairs in which ‘P’ is true (for anybody with a desire that ‘P’) that has value. Anybody who thinks that they are generating value with a lie is simply mistaken.
Imagine the case of a person who is happy thinking that her (dead) child is now with God, who will look after the child. This is a lie. Offering a person this lie is simply like telling her, “I have a computer program here that will feed you impressions that your child is still alive. Enter the experience machine, and you will never know that your child has died.”
Some people might accept such an offer. However, what type of people are they?
They are people whose real concern is with their own happiness or pleasure. It cannot be “a desire that my child is still alive” that causes a person to enter the machine – because the machine cannot make that proposition true. It can only be “a desire that I experience pleasure and avoid pain” that motivates such a choice – an essentially selfish desire.
The same is true of the parent who seeks to believe that their child is in heaven. It cannot be “a desire that my child is conscious and happy” that motivates accepting such a claim. It is only “a desire that I experience pleasure and avoid pain” that motivates a person to believe the story of an afterlife.
Indeed, the heaven story caters to selfish desires (personal comfort) and inhibits altruistic desires (saving lives) by trivializing actions that save lives (you are only keeping good people out of heaven and evil people from just punishment) for the sake of maintaining a myth that provides personal comfort.
Yes, much religion is selfish, in the same way that entering the experience machine is selfish, because they both allow a person to obtain personal pleasure by pretending to help others. They simply are not going to be seen as attractive options by people who care less about their own personal pleasure and more about actually helping people.
These accusations are not true of all religion. The term ‘religion’ encompasses a wide variety of beliefs, some of which cannot be easily compared to the concept of an experience machine. Yet, the fact that some religions cannot be compared in this way is no defense of those that can be.
So far, I have talked about a person who is given a choice to enter an experience machine. Such a person has to have a desire for personal pleasure or happiness, but no genuine concern for the welfare of others. He may be somebody who likes to see himself as a person who helps others, but who does not actually like to help others. The experience machine can fulfill the first desire by filling the agent’s brain with false beliefs about his own charity. It cannot fulfill the second desire.
The person who is honestly selling seats in an experience machine will only attract customers who are selfish. However, it is also possible for the person selling positions in the experience machine to attract people with a genuine concern for others. All he has to do is lie (or, at least, make claims that are not true). He can attract these customers by making claims like, “Enter the experience machine. We will hook it up. However, rest assured, the benefits you will create are real.” The agent then enters the experience machine, where she is fed all sorts of programmed computer images of people in trouble. Only, there are no people.
This result is even more tragic. The agent believes that she is doing something important – helping others. Only, she is doing nothing. The agent is told that she has a daughter who she is raising to become a self-sufficient adult who is a productive member of society, only the child does not exist. While the agent thinks she is doing great deeds, her body atrophies in a bath of warm glop that is keeping her alive.
We can add a further change to compound the tragedy here. Let us hook up the experience machine so that, every time the agent thinks she has done good, and walks away with a smile on her face and her heart full of pride, the machine inflicts suffering and, in some cases, death on others. Every time the machine feeds her the sensation of having given a village full of children their first meal in days, or their own source of drinking water, it actually tortures and poisons that number of children.
In this example, we have an agent who wants to do good – who finds meaning and purpose in being an agent of positive change. She proudly believes that she does good things. However, the machine tricks her, giving her beliefs that are false, while it turns her into an agent producing great harm, suffering, and death. By the time she dies of old age, the universe would have been a much better place, if only she had not existed.
This describes the situation for many who have entered into a religion that follows the model of the experience machine. The experience machine feeds them false beliefs that they are doing good, while they actually do harm – in opposing homosexual marriage, early-term abortions, embryonic stem-cell research, the education of women, and planning for a distant future the religion says will not exist, in denying women the right to vote, in encouraging members to become suicide bombers, or in helping the Bush Administration establish a system where the President can round up, arrest, and indefinitely hold people virtually at will.
Some of these people certainly are self-centered individuals who lack a desire to do good, but only have a desire to see themselves as people who do good. Yet, there are almost certainly countless others who desire to do good, but who the experience machine itself has made the unwitting agents of great evil.
However, if you will pardon one more observation . . . those good people who get seduced into hooking themselves into an experience machine that harms others . . . if they truly wanted to make sure that they were providing real (real-world) benefits and not doing harm . . . they would have an interest in double-checking the claims of those who say that good can be done from inside the experience machine. That is, if they really cared.
Are the ideas that I have been defending in these recent posts on Dawkins, Harris, and evaluating moral theories important? Are they useful?
I think that there is some merit in reminding people that we can make the world a better place by promoting desires that tend to fulfill other desires, and inhibiting desires that tend to thwart other desires, where we can. The study of genetic altruism might be interesting to some, but it is not all that useful.
These ideas yield some important conclusions that are useful in determining whether a life has value, meaning, and importance. Only a self-centered person can intentionally choose a life of self-deception and false beliefs. It has to be somebody who prefers the illusion of being helpful to others and who cares little about the fact of the matter.
A person with a genuine interest in helping others could not possibly choose a life inside an experience machine, because the person inside the experience machine generates no real benefit for others. He only generates pleasure and happiness for himself.
Worse, many people who live in the experience machine of religion are not only failing to benefit others; they do genuine harm. The experience machine makes them think that they are doing good deeds. Yet, in fact, many of the “good deeds” that they perform end up being the cause of great quantities of harm, suffering, and death that the machine does not let them see.
In fact, a ‘happiness’ theory that says there is no reason to refuse to enter an experience machine also says that there is no reason to remove somebody from one. After all, they are happy. However, desire utilitarianism says that there are reasons to remove people from an experience machine. A person who has a desire to help others, for example, can only pretend to help others (and might be harming others) from within the machine. The only way he can actually help others is here in the real world.
These, I would argue, are important and useful findings.