Wednesday, May 02, 2007

Important? Useful?

I looked back on some of my posts for the last couple of weeks and came up with a couple of questions that I would like to answer. “Is any of this useful? Is it important?”

For example: What does it matter whether morality is altruism (so that the study of altruism can be claimed to be the same thing as the study of morality)? What does it matter that happiness theory fails the experience machine test and cannot account for the commensurability of values?

Do these issues have real significance?

They certainly do for some people.

Genetic Altruism

For example, one of my objections was that the study of altruism does not even begin to handle moral concepts such as excuse, negligence, ‘ought’ implies ‘can’, or even ‘ought’. Yet, every day, countless people live or die by these moral concepts that the concept of ‘altruism’ simply does not have anything to say about.

In fact, it would not be far from the truth to say that my objection to claims that link morality with altruism is that it is altruism – particularly genetic altruism – that is not important. If we have genetic altruism, then we will behave altruistically, as our genes dictate.

If we lack genetic altruism, then . . . well, then what? Then we need some way to promote non-genetic or learned altruism. This, I argue, is what morality is all about. It has nothing at all to do with genetic altruism (which we either do or do not have), and everything to do with learned altruism that we can promote through social forces.

The study of genetic altruism – and, in particular, the tendency on the part of some to confuse genetic altruism with morality – diverts attention from a very important moral question: “How can we get people to behave better than they would otherwise behave?”

Happiness Theory

In order to question happiness theory, I used a thought experiment focused on a hypothetical scientific invention – a Matrix, a permanent Holodeck (from Star Trek), or an Experience Machine – that generates false beliefs that a person has acquired a desired end. However, is it at all important that a theory cannot account for choices that we might make in a science-fiction universe? How does that affect the choices we make in the real universe?

The basic answer is this: If you want to help people get what they want, and if you wish to stay out of their way, it would be useful for you to know what they really want. What a person really wants, when he has a desire that ‘P’, is for a state of affairs to exist where ‘P’ is true.

The real-world counterpart of the experience machine is the lie. Experience machines are the ultimate lie generators. Hypothetically, they manufacture perfectly convincing lies that the agent is in a state that she desires. However, the lie has no value. It is only the genuine state of affairs – a state of affairs in which ‘P’ is true – that has value (for anybody with a desire that ‘P’). Anybody who thinks that they are generating value with a lie is simply mistaken.

Imagine the case of a person who is happy thinking that her (dead) child is now with God who will look after her. This is a lie. Offering a person this lie is simply like telling a person, “I have a computer program here that will feed you impressions that your child is still alive. Enter the experience machine, and you will never know that your child has died.”

Some people might accept such an offer. However, what type of people are they?

They are people whose real concern is with their own happiness or pleasure. It cannot be “a desire that my child is still alive” that causes such a person to enter the machine – because the machine cannot make that proposition true. It can only be “a desire that I experience pleasure and avoid pain” that motivates such a choice – an essentially selfish desire.

The same is true of the parent who seeks to believe that their child is in heaven. It cannot be “a desire that my child is conscious and happy” that motivates accepting such a claim. It is only “a desire that I experience pleasure and avoid pain” that motivates a person to believe the story of an afterlife.

Indeed, the heaven story caters to selfish desires (personal comfort) and inhibits altruistic desires (saving lives) by trivializing actions that save lives (you are only keeping good people out of heaven and evil people from just punishment) for the sake of maintaining a myth that provides personal comfort.

Yes, much religion is selfish, in the same way that entering the experience machine is selfish, because they both allow a person to obtain personal pleasure by pretending to help others. They simply are not going to be seen as attractive options by people who care less about their own personal pleasure and more about actually helping people.

These accusations are not true of all religion. The term ‘religion’ encompasses a wide variety of beliefs, some of which cannot be easily compared to the concept of an experience machine. Yet, the fact that some religions cannot be compared in this way is no defense of those that can be.

Unwitting Participation

So far, I have talked about a person who is given a choice to enter an experience machine. Such a person has to have a desire for personal pleasure or happiness, but no genuine concern for the welfare of others. He may be somebody who likes to see himself as a person who helps others, but who does not actually like to help others. The experience machine can fulfill the first desire by filling the agent’s brain with false beliefs about his own charity. It cannot fulfill the second desire.

The person who is honestly selling seats in an experience machine will only attract customers that are selfish. However, it is also possible for the person selling positions in the experience machine to attract people with a genuine concern for others. All they have to do is lie (or, at least, make claims that are not true). They can attract these customers by making claims like, “Enter the experience machine. We will hook it up. However, rest assured, the benefits you will create are real.” The agent then enters the experience machine, where she is then fed all sorts of programmed computer images of people in trouble. Only, there are no people.

This result is even more tragic. The agent believes that she is doing something important – helping others. Only, she is doing nothing. The agent is told that she has a daughter who she is raising to become a self-sufficient adult who is a productive member of society, only the child does not exist. While the agent thinks she is doing great deeds, her body atrophies in a bath of warm glop that is keeping her alive.

Harming Others

We can add an additional change to compound the tragedy here. Let us hook up the experience machine so that, every time the agent thinks she has done good, and she walks away with a smile on her face and her heart full of pride, the machine inflicts suffering and, in some cases, death on others. Every time the machine feeds her the sensation of having given a village full of children their first meal in days, or their own source of drinking water, it actually tortures and poisons that many children.

In this example, we have an agent who wants to do good – who finds meaning and purpose in being an agent of positive change. She proudly believes that she does good things. However, the machine tricks her, giving her beliefs that are false, while it turns her into an agent producing great harm, suffering, and death. By the time she dies of old age, the universe would have been a much better place, if only she had not existed.

This describes the situation for many who have entered into a religion that follows the model of the experience machine. The experience machine feeds them the false belief that they are doing good while, in fact, they are opposing homosexual marriage, early-term abortions, embryonic stem-cell research, the education of women, and planning for a distant future the religion says will not exist; denying women the right to vote; encouraging members to become suicide bombers; or helping the Bush Administration establish a system where the President can round up, arrest, and indefinitely hold people virtually at will.

Some of these people certainly are self-centered individuals who lack a desire to do good, but only have a desire to see themselves as people who do good. Yet, there are almost certainly countless others who desire to do good, but who the experience machine itself has made the unwitting agents of great evil.

However, if you will pardon one more observation . . . those good people who get seduced into hooking themselves into an experience machine that harms others . . . if they truly wanted to make sure that they were providing real (real-world) benefits and not doing harm . . . they would have an interest in double-checking the claims of those who say that they can do good from inside the experience machine. That is, if they really cared.


Are the ideas that I have been defending in these recent posts on Dawkins, Harris, and evaluating moral theories important? Are they useful?

I think that there is some merit in reminding people that we can make the world a better place by promoting desires that tend to fulfill other desires, and inhibiting desires that tend to thwart other desires, where we can. The study of genetic altruism might be interesting to some, but it is not all that useful.

These ideas yield some important conclusions that are useful in determining whether a life has value, meaning, and importance. Only a self-centered person can intentionally choose a life of self-deception and false beliefs. It has to be somebody who prefers the illusion of being helpful to others and who cares little about the fact of the matter.

A person with a genuine interest in helping others could not possibly choose a life inside an experience machine, because the person inside the experience machine generates no real benefit for others. He only generates pleasure and happiness for himself.

Worse, many people who live in the experience machine of religion are not only failing to benefit others; they do genuine harm. The experience machine makes them think that they are doing good deeds. Yet, in fact, many of the “good deeds” that they perform end up being the cause of great quantities of harm, suffering, and death that the machine does not let them see.

In fact, the 'happiness' theory that says there is no reason to refuse to enter an experience machine also says that there is no reason to remove somebody. After all, they are happy. However, desire utilitarianism says that there are reasons to remove people from an experience machine. A person who has a desire to help others, for example, can only pretend to help others (and might be harming others) from within the machine. The only way he can actually help others is here in the real world.

These, I would argue, are important and useful findings.


STD said...

I just hope we can turn the tide around. Do you have hope for humanity? I'm not sure if I do at the moment.

Alonzo Fyfe said...


I do not know what to expect.

I look at history and I see that huge shocks are possible. World wars, the Great Depression, the Civil War, the Crusades, inquisitions, the Black Death, the Revolutionary War, the Dark Ages . . .

There has scarcely been a generation that was born and died in peace and security.

So far, I have enjoyed such a life. Though I lived through the threats of the Cold War and terrorist attacks, none of a magnitude comparable to these earlier disasters has materialized in my lifetime.

Clearly, the human race has the capacity to endure great shocks.

In the meantime, the best that we can do is to use the relative peace that we enjoy today to try to build a secure foundation for the future.

Thus, I write this blog.

Anonymous said...

Call me the dumb atheist that reads your blog if you will, but I'd like to see you work on shortening up your posts.

After your first paragraph I tune out immediately and I feel like I am missing a lot by doing so. Don't get me wrong I enjoy reading in general and a 1000 page novel in a week is a normal occurrence for me. Could you help us ADHD individuals by working toward brevity in at least a couple of your posts once in a while?

Alonzo Fyfe said...


I will not call you a dumb atheist, simply because you do not have time to read my blog. You have only so many hours in a day, and I have no right to demand even one second of that time. I consider any second spent on my blog to be a gift, for which I am grateful.

I am aware of the costs of long blogs. For a while, I attempted to shorten them, assigning myself a limit of 750 words (about half of my average post).

I discovered that this gave me little space to do more than to describe a situation and to either cheer or curse the result. I had no space to give my reasons for my view.

Where I did give reasons, I could often only complete my argument in the space allowed by leaving out premises, or refusing to address objections. This inevitably brought comments to the effect of, "Your argument does not make any sense unless you add this premise," or "You did not consider this objection."

In other words, shorter posts are perfect for a 'news' blog (a blog that reports the news and then gives a short subjective commentary), but they are not sufficient for developing an argument for or against a particular conclusion.

So, I went back to the longer posts.

I am aware of the costs. One of these days, I may try again. However, at this point, I must say that I simply do not know how to present a thorough discussion of a topic in less space.

If anybody wants to try to distill my points into smaller pieces, I would appreciate it.

Anonymous said...

I am puzzled by your concern with separating morality from altruism, and inability to see the significance of genetic components in altruism. First altruism, as the concern to help others and not cause harm to others, seems to me a fairly practical application of the concept that good desires are desires that fulfill other desires and bad desires are those that thwart other desires. If there is no value to altruism, what is our motivation to call “fulfilling other desires” good? Does altruism alone constitute a completely comprehensive moral theory? Perhaps not. But does having desires to help people when you can, avoid hurting them when you can, as well as following principles such as truth and fairness, which result in the most help and least hurt for most people not meet most of your criteria for morality?
The significance of genetic altruism comes in at least two areas. First it refutes the fallacy that we’re genetically only selfish, so altruism must come from God. Second, it gives us reasons to believe altruistic behavior can be rewarding for its own sake. We may have some genetic wiring to compel us to act in certain ways, but perhaps more important, we may have internal reward systems that make us feel good when we help others.
If we’re interested in encouraging good behavior, it may be useful to know if we need to reprogram fundamentally selfish creatures, or if we need to reinforce natural tendencies that already exist. Your moral theory may be only concerned with relationships of propositions P and states S, but the real world includes humans H and behaviors B that really do matter as well.

Anonymous said...

You say: "If anybody wants to try to distill my points into smaller pieces, I would appreciate it."

I've been thinking I should do this and submit one column per week of yours (edited to be more accessible to the public) to the local newspaper until they decide to hire you. Maybe I should set up a ghost blog of "popularized" versions of your posts.

Sounds like you have no objections. Now I just need the time and will to do it.

Alonzo Fyfe said...

atheist observer

There is a difference between seeing the significance of something and seeing its moral significance.

Gravity is an extremely significant force. Yet, it is not a moral commandment that, "thou shalt accelerate towards another body at a rate proportional to the mass of that body and inversely proportional to the square of the distance between them."

Genetic altruism has the same moral significance as gravity. It may have a great deal of value - but it is not moral value. To have moral value, we must be dealing with components of choice that can be culturally influenced.

If a person falls (quite by accident) and happens to land on an individual in the process of robbing a pair of tourists, he gets no moral credit for stopping the attack. Indeed, his action isn't even (in the moral sense) 'altruistic'. It was just an accident.

Similarly, 'genetic altruism' isn't even altruism in the moral sense. It is only 'altruism' in the biological sense. A tree may fall in the jungle providing vital sunlight to the plants below, but this is not a case of altruism. An ant may go out, find some dead insect, and drag it back to the colony, but this is not moral altruism. A woman may pick up and suckle her young. But, if she does so for no reason other than a biological urge to do so, this is not altruism either. It only mimics altruism.

As for your final statement, "Your moral theory may be only concerned with relationships of propositions P and states S, but the real world includes humans H and behaviors B that really do matter as well."

Please note, humans and behaviors are parts of a great many of these propositions P and states S. Whether they play those parts, and which parts they play, determine when and how they matter.