Tuesday, January 09, 2007

Obligations towards Children: Happiness and Desire Fulfillment

For the last couple of days, my arguments have been about a parent or guardian’s moral responsibility towards children. I have argued that a parent, making decisions on a child’s behalf, should be governed by the principle of deciding for children what those children would decide for themselves if they were competent to do so.

In “Consent and Dignity: The Case of Ashley” I argued that Ashley’s parents were being good and responsible parents by making decisions for Ashley that will better fulfill Ashley’s own future desires.

In “Obese Children” I argued that the parents of obese children are abdicating their moral responsibilities to their children by giving those children desires and habits more likely to contribute to future misery.

There is another way that parents can fail their children – by giving those children desires that are impossible to fulfill.

To explain this fully, I wish to use an argument that both explains what desire fulfillment is, and argues against one of the most popular theories of value among atheists – the idea that value resides ultimately in happiness. It is a classic argument against happiness theory that asks about the value of life inside of an “experience machine.”

The Experience Machine

Congratulations on the birth of your new child. Of course, as good parents, you want your child to be happy. I have here a machine that will guarantee your child as much happiness as she can possibly have. We put your child inside of this machine and hook her up, then we run this computer program that will give your child the experience of living an ideal life.

While your child is lying in this chamber, she will be caused to believe that she is a princess growing up in a royal household. Do not worry about the possibility that she will not want to be a princess growing up in a kingdom; our machine will give her these desires as well. Our program has the subjects of this kingdom living calm and blissful lives in perfect awe and admiration of their most precious princess. When she grows up, our program will introduce her to a handsome prince, equally admired by all, who will win your daughter’s affection.

If you are worried that your daughter will be bored, and that this will lead to unhappiness, rest assured that we have taken care of that. The program will give your daughter challenges to overcome. She will even fail to overcome some of them – the smaller and less important ones. However, she will always succeed in overcoming the most important challenges. Of course, she will not know that she will succeed. We have discovered that we must introduce at least the fear of failure. However, these slight sorrows have been introduced only because they are necessary to bring about even greater happiness.

We have engineered our program so that once your daughter thinks she has reached the age of twenty-five, she will not age any further. She will, in fact, not know death. Of course, we can’t work miracles. Your daughter will eventually die. However, from your daughter’s point of view, she will know none of it. She will cease to have experiences without knowing that she has ceased to have experiences. In the meantime, you will have provided your daughter with as much happiness as her life could hold.

Refuting Happiness Theories of Value

Many readers, I suspect, viewing the life of a person lying in a chamber being fed a program of imaginary success, would still find something missing from such a life. Actually, if I imagine myself lying in a tube while some computer program tickled the relevant parts of my brain to produce ‘happiness’, I would rather be dead. I would already be as good as dead, for all such a life would be worth. Putting a child into such a situation, and requiring that she spend her whole life there, is the moral equivalent of killing that child.

This type of claim hardly counts as an argument. However, we would have an argument against the happiness theory of value and in favor of some alternative if we could find a theory that explains these and other sentiments.

The reason such a life has little value is that humans do not value only happiness – or, at least, they value things other than happiness that an experience machine cannot provide.

Desire utilitarianism states that value exists as a relationship between states of affairs and desires, that desires are propositional attitudes, and that an agent with a desire that ‘P’, for some proposition ‘P’, seeks to create or preserve states of affairs in which ‘P’ is true.

The problem with the experience machine – the reason it does not produce value – is that the propositions that are the objects of our desires are not made or kept true by such a machine. We are made to believe that they are made or kept true, but our beliefs are mistaken. Our desires are being thwarted.

An experience machine cannot fulfill my desire to “make the world a better place than it would have otherwise been if I had not lived,” because the experience machine cannot make this proposition true. It can cause me to believe that I have made this proposition true (the purpose behind my writing this blog), but it cannot make the proposition true in fact. As such, it can give me happiness, but cannot create a state that has value to me.

This theory not only explains and predicts choices where people refuse to enter into such an experience machine, it would also explain and predict choices where people opt for such a machine. For example, a person who only desires happiness will have no reason to refuse entering the machine. In this case, the machine will make or keep true the propositions that are the objects of his desire – specifically, the proposition “I am happy.” It will tickle the parts of his brain in exactly the right way to produce this state called ‘happiness’ and, if that is what the agent wants, that is what he will receive.
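The semantics just described can be put in a short sketch (a hypothetical illustration of mine, not the author’s formulation; the function and the world dictionaries are invented for the example). A desire that ‘P’ counts as fulfilled only when ‘P’ is true in the actual world; the experience machine makes most propositions true only in the agent’s experience, while “I am happy” is one proposition it can make true in fact.

```python
def fulfilled(desire, world):
    """A desire that 'P' is fulfilled iff the proposition 'P' is true in the given world."""
    return world.get(desire, False)

# What is actually the case for the person in the machine:
actual_world = {"I am happy": True}

# What the machine causes the person to believe is the case:
machine_experience = {"I am happy": True,
                      "I made the world better": True}

for p in ["I am happy", "I made the world better"]:
    print(p,
          "| fulfilled in fact:", fulfilled(p, actual_world),
          "| believed fulfilled:", fulfilled(p, machine_experience))
```

On this sketch, the agent who desires only “I am happy” loses nothing by entering the machine, while the agent who desires “I made the world better” has that desire thwarted in fact even though it is believed fulfilled – which is exactly the asymmetry the theory predicts.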

Desire fulfillment theory defeats happiness theory in its ability to explain both those who enter the machine and those who refuse. Happiness theory cannot explain those who refuse.

The happiness theory of value – perhaps the most popular theory among atheists who try to argue that morality is possible without God – is just plain wrong.

Religion as an Experience Machine

If the above argument is sound, then a parent’s duty to their children is not to provide them with happiness, but to help them to fulfill their desires. The experience machine is ruled out (in almost all cases) because the desires of the children (and the adults they become) are not fulfilled. Even if the individual comes to believe that his desires are or will be fulfilled, the life still has been robbed of most of its value – most of its meaning – because that which the child (and later adult) thought she had accomplished never happened, or never will happen.

Religion, in this context, is a somewhat clumsy and crude version of the experience machine.

Many of the arguments in defense of religion these days – that it provides a person with comfort, that it helps them to avoid the suffering of loss, and that it provides the faithful with (an illusion of) meaning – are all claims consistent with making religion comparable to experience machines. It provides people with a set of desires that cannot be fulfilled and, like the experience machine, fills them with false beliefs that those desires are being fulfilled, in order to induce a psychological state of happiness.

This happiness is qualitatively no different from the happiness of a person, lying in a chamber, being fed stimuli that her brain turns into beliefs that she is a popular and beautiful princess about to marry a charming prince who will make her the envy of the entire kingdom.

In fact, her desire to be an admired princess cannot be fulfilled because there is no kingdom for her to be a princess of. Her desire to marry a charming prince is unfulfilled because the prince does not exist. No person’s desire to serve God can ever be fulfilled because there is no God to serve. Nobody can purchase a ticket for their friends and relatives to enter heaven because there is no heaven for them or their relatives to enter.

This is the message that I attempted to convey in an earlier posting called, “The Meaning of Life.”

The meaning that a religious person finds in serving God is no different from the meaning that our ‘princess’ finds in becoming the fiancée of the perfect (though imaginary) prince and the object of admiration for the fictitious citizens of a fictitious kingdom. The life of a religious person has meaning in the same way that the life of the woman lying in a chamber having her brain tickled by a computer program has meaning.

There may be an exception to this. If a person, because of their religion, acquires a desire to help real-world people deal with real-world problems, this desire to help real-world people deal with real-world problems can be fulfilled. People put into a religious “experience machine” are not zombies doing nothing. They are still agents who are acting, they still have the capacity to have desires relevant to the real world, and there is still the possibility that some of those desires are fulfilled.

However, while some in the religious experience machine may desire to help others (and actually do so), they could still suffer from two problems. They could have bad ideas about what counts as “helping” – where the experience machine causes them to believe that something is helpful to others when it is actually harmful – or it could feed them desires to do harm to others “in the name of God”. The fact that people in a religious experience machine interact with others (in ways that the girl in the fictitious experience machine mentioned above does not) does not automatically produce good consequences.


Anonymous said...

I think we disagree. The moral problem being presented here is one of changing a person's desires. If not that, then of fulfilling those desires in a false world.

We should not change the desires of a person that does not want them changed. However, where a person is a blank slate, there is no problem. If there were, then it would be immoral to create a person (say, an AI) with certain unnatural desires. And it isn't. I reject arguments from nature, fate and the like as nonsense. Here the desire of the child is made to be happiness, and it appears that other relevant desires are removed. I see no moral problem.

As for the other desires - helping people, and so on. The central implication here is that a 'false' world is devoid of value. This I don't accept. If we discovered that this world of ours was a simulation, would our lives lose meaning? Mine wouldn't. Did anything change? It didn't. We exist at the level of relationships, concepts, and patterns, and these would be the same no matter what the substratum - simulated, physical, who cares?

You might assign greater value to the world that is responsible for the simulation of the one you exist in, and may have very valid desires that would be thwarted based on that. I think that this would be a rational error on your part.

For the little girl, changing her desires when she has none is not an immoral act. But let's say she desires to do good, have a happy family, or learn the behaviour of her universe as through physics. If the people there are indistinguishable from the people here, it's incorrect to say that doing good to them, or having a family with them has less value there than here. As for gaining knowledge, my initial decision would be that it is immoral to keep a being in the dark about a 'greater' world once they have solved the intellectual problems of theirs. But the 'desire for understanding' aspect is not as relevant to this discussion, and unfortunately few people place great value upon it to begin with.

I'd still like you to address the problem of the unrepentant guiltless righteous murderer, and how one could call such a person "evil", where "evil" is something greater than 'thwarted a desire to live', or 'will be punished by society', or 'I highly disapprove of such acts'. I don't think that it can be greater, and so the murderer's morality is no more or less valid than ours, just in a minority. My previous arguments might have better detail.

Aerik said...

The central implication here is that a 'false' world is devoid of value. This I don't accept. If we discovered that this world of ours was a simulation, would our lives lose meaning?

Here you commit a fallacy of equivocation: meaning ≠ value. Not the same thing. They cannot be equivocated, especially here, first because ascribing value to a thing entails placing it on a finite, often discontinuous scale - most of the time it is perfectly reasonable to compare values analogically in quantitative terms. "Well if I had to put it on a scale of 1 to 10..." This is not so with meaning.

For the little girl, changing her desires when she has none is not an immoral act.

This is not true. One's desires can be self-chosen based off whatever information is available to them. Even so, your particular premise - that said girl is a blank slate - is in itself false, so your entire argument is invalid. The elasticity of the human brain (the sole cause and manifest of the human mind) is complicated and far-reaching, but by no means is the human brain a blank slate. The way we learn language as infants is entirely based off of figuring out whether expressions are head-first or head-last, and everything else falls into an arbitrary (even if systematic) place if and only if this one determination has occurred. Even so, Japanese is the only language known to be head-last, which shows a profound un-blank-slate-edness, wouldn't you think? And there are many more examples of brain plasticity having limits and 'pre-programmed' settings that make any argument concerning "blank slate" people completely irrelevant.

And here we have a conundrum. If a person's brain were a complete blank, a 'blank slate,' they would in fact have no way to grow. At all. How do you give freedom to nothingness, and how does it make choices or even absorb information? Hooey. What defines personage, sapience, is a certain level of awareness of one's surroundings and one's self in a cogent manner at some level. A blank slate is in fact not a person. Hell, you can't even say a blank slate has a brain, really.

So you must consider, M, that when you refer to a child or somebody with a child's mind as a "blank slate" you are in fact dehumanizing them.

Alonzo Fyfe said...


The problem being presented here is one of putting a person in a state where his or her desires cannot be fulfilled - regardless of the origin of those desires.

Even if the person is provided with the happiness of (falsely) believing that those desires are being fulfilled.

I have explained how some people would, in fact, choose the experience machine - if they desired only happiness. So, the fact that you would find value in the experience machine raises no objection. The explanation also handles cases of those (like me) who would consider such a life to be a waste.

Indeed, if I were to discover that I was in such a machine, I would then treat my fellow humans the same way that I would treat the characters in a computer game. In fact, life would be nothing but a computer game. I may pretend that it is important that certain characters live or die - but, in the end, it does not matter. I get bored of computer games pretty quickly. I tend to think that I should be spending my time with real people rather than with fictional people.

However, different people have different desires, and make different choices based on those desires.

Whether changing desires is a moral or immoral act depends on what they are changed to. Changing a person's desires to make them crave the torture and suffering of others is an immoral act. Changing the desires of others to make them want to help others is a moral act.

Indeed, the whole point of the moral education of children is to promote desires that tend to fulfill other desires, and inhibit desires that tend to thwart the desires of others.

Even though a child's mind is not a blank slate, it is not completely write-protected either. It is malleable within limits.

Note: I wrote an answer to your "unrepentant guiltless righteous murderer" issue in the same post where you provided it, a couple of hours after writing this article - in "Answering M On Subjectivism"

Anonymous said...

Aerik, it's not equivocation because I used the terms meaning and value interchangeably, and in fact this interchangeability has no true bearing on the argument. Replace one with the other, and perhaps ask what is meant before you accuse.

You miss the point of my blank slate argument. It is true that the newborn's brain has the potential built in - but then so does a newly fertilized egg. If you call that a human, then we won't be able to agree until we have that argument, and I don't want to have it. At what point does one become human? When we feel bodily pain? When we form abstractions? Who cares? The premise here, right or wrong, was that this baby is a blank slate in terms of desires and coherent thoughts. We could say that the doctors modify the desires in the test-tube, if you'd like. I even considered the alternative where the mind was not treated as a blank slate!

I was expecting a response along the lines of our responsibility to children, and how we should help them fulfill their inevitable desire of not wanting change, but I suppose I've made that a non-point by making the hypothetical scenario explicit.

Indeed, if I were to discover that I was in such a machine, I would then treat my fellow humans the same way that I would treat the characters in a computer game.

The experience machine of your example was sophisticated enough to be convincing. Here we can consider solipsism and Turing's test. If agents conjured by this machine can convince one of their humanity, and their desires, then they are no different from the people around me. Knowing that they are simulated doesn't change this.

My main point here was that you might be confusing two things: mistakenly believing that a desire is fulfilled, and fulfilling a 'simulated' desire. A person might desire to help a beggar, and thus they give the beggar change. But this does not help the beggar; it makes things worse. We mistakenly believe that we did good, and knowing better we would not have done so. This is how you should approach religion. The other case is of the simulated beggar who you help by feeding, warming, etc. You argue that because the substratum of that beggar - who is nearly indistinguishable from a human in a lifetime of interaction - is silicon and logic gates, that doing good to them has no value. It does. There's no difference between him and us because humans exist at the level of patterns and concepts and relationships.