Thursday, May 10, 2007

Brain Science and Morality

Egads. I have too many themes going on at the same time.

Today’s post, however, is of central importance to two of those themes.

It is relevant to the genetic morality theme covered recently in:

Richard Dawkins: Morality and the Selfish Gene

Evaluating Moral Theories

The Genetic Morality Delusion

It is also relevant to the Morality and Mental States theme covered recently in:

Sam Harris: Moral Irresponsibility

Morality and Mental States

My point in this post is to deny a common assumption about a relationship between morality and brain science.

So, let’s assume you are going to conduct an experiment. You are going to run a series of tests on a series of patients. The test involves hooking each patient up to various brain-scanning machines. Then, you are going to give the patient a complex math problem to solve. Let’s say you are going to ask them to multiply two three-digit numbers together (e.g., 847 * 446). You give them the assignment, they go through the process, they put their number on a note card, and the experiment is over.

After you have collected all of this data, there is still one question that you will not be able to answer by looking ONLY at the brain scans. Did the subjects get the right answer?

Clearly, it is not the case that your machines will come back with no data whenever a subject gets the wrong answer. Your machines are going to give you data regardless of whether the answer is right or wrong.

You might even be able to discover a pattern that allows you to distinguish right answers from wrong answers just by looking at the scans. Perhaps a particular blip on one of the instruments turns out to be reliably correlated with a right answer, such that all those who show the blip get the right answer, and all those who do not show it make a mistake.

Yet, even this requires that you have the ability to acquire, through some method other than brain scans, an idea of what the right answer is, so that you can make the correlation to start with.

In other words, all of the brain science in the world will not give you a theory of right answers. It will only give you a theory of how people come up with their answers, independent of whether those answers are right or wrong.
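To make the point concrete, here is a minimal sketch of the analysis this experiment would require (written in Python, with invented subject data and a made-up “blip” measurement; none of these names or numbers come from any real study). Notice where the standard of correctness enters: not from the scan readings, but from doing the multiplication independently.

# Minimal sketch of the thought experiment above. All names and values
# are invented for illustration.

# Hypothetical recordings: each subject's written answer plus one scan
# feature (the amplitude of the "blip").
subjects = [
    {"written_answer": 377762, "blip_amplitude": 0.91},
    {"written_answer": 376962, "blip_amplitude": 0.12},
    {"written_answer": 377762, "blip_amplitude": 0.88},
]

# Step 1: the right answer is established outside the scanner, simply by
# doing the arithmetic (847 * 446 = 377,762).
correct_answer = 847 * 446

# Step 2: label each trial right or wrong using that independent standard.
for s in subjects:
    s["correct"] = (s["written_answer"] == correct_answer)

# Step 3: only now can a scan feature be correlated with correctness.
# Without step 1, there is nothing to correlate the blip against.
blips_right = [s["blip_amplitude"] for s in subjects if s["correct"]]
blips_wrong = [s["blip_amplitude"] for s in subjects if not s["correct"]]

print("mean blip (right answers):", sum(blips_right) / len(blips_right))
print("mean blip (wrong answers):", sum(blips_wrong) / len(blips_wrong))

Step 1 is the part that no amount of scan data can supply, which is exactly the point of the paragraphs above.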

For some reason, on the issue of morality, a lot of people think that you can hook a person up to a bunch of brain-monitoring machines, give them a moral question, and that the measurements will tell you how morality is done. Those brain scans will tell you how the subject got the answer that she did. However, one thing that the brain scans will never be able to answer is the question of whether the answer she came up with, using whatever method she used, is the right answer. To do that, you need an independent theory of right moral answers.

The first objection that this model would encounter is the claim that there are no correct moral answers. Morality is nothing more than the judgment that the person comes up with when she is asked the question.

If there are no moral right answers, then whenever an agent judges something as right or wrong, her judgment is in error. She is looking at X and saying, “X is right”. However, the doctor is saying that “X is right” is false. This patient, in seeing the rightness of X, is seeing something that is not really there. She is simply delusional – as are all people who perceive things as right or wrong.

One could say that these delusions are an innate part of how we are wired – and that evolution has made us this way. In fact, evolution has almost certainly made us disposed to perceive things that are not there. Scientists can identify and replicate a number of optical illusions as well. Yet, one inescapable fact about optical illusions is that they are illusions – reality is different from how we perceive it. If we act on these illusions, then we are acting on a mistake.

If we have a faculty for perceiving things as moral or immoral, and there is no such thing as something being moral or immoral in fact, then these perceptions are nothing other than moral illusions. They are mirages manufactured in the brain that give us a false impression of the real world.

We may be polite and say that a moral illusion is “true for” the subject who perceives it. That is to say, the subject genuinely perceives the object of evaluation as being moral or immoral. Yet, this is still true in exactly the same sense that the appearance of a mirage is “true for” the person who sees it. It is still a mirage. It is still a distorted and false view of morality.

As it turns out, these distorted and false views of morality are precisely the ones people use to punish (fine, imprison, and execute) other people. These moral illusions are not simply harmless tricks of the mind. People appeal to them all the time as justification for laws that determine who lives and who dies, who lives free and who is imprisoned. If they are, indeed, illusions, then basing decisions about whether to harm others on these illusions only compounds the mistake.

That is, unless there is something to be said about an object of evaluation being moral or immoral in fact. Then (and only then) do we have the power to look at our moral perceptions and discover which of them perceive moral value correctly, and which are mistaken. We can then say that it is legitimate to take accurate moral perceptions seriously, while false moral perceptions can be dismissed as mistakes (moral illusions).

However, all of this requires a theory of right moral answers that you cannot get out of brain scans.

Now, nothing I have written in this post proves that there are moral facts. What I have argued is that either there are moral facts or there are not. If there are moral facts, then you cannot get those facts by looking at brain scans. Those brain scans tell you what happens in the brain when people think about morality, but they do not reveal morality itself. If there are no moral facts, then these brain scans are the study of an illusion – of a disposition to perceive something as having a property it does not have.

In other words, these scientists studying brain scans can say that they are studying beliefs about morality, though what they study tells us nothing about the truth of those beliefs. Or they can say that they are studying the phenomena of moral illusion, if there are no moral facts. However, they cannot truthfully claim that they are studying morality itself.

Imagine coming across an article in a psychology journal in which the researcher is reporting on the findings of his most recent work. He hooked a bunch of monitoring equipment up to people, then told them to think about stars, planets, asteroids, and comets. He shows them pictures so that he can measure the effects that the picture has on the brain, and asks them to say something about the object depicted in each image. All the while he says that he is an astronomer and that he is engaged in the study of astronomy – of stars, planets, asteroids, and comets.

The sensible reaction would be, “Hey, doc. I’m not saying that the work you are doing isn’t important. What I am saying is that what you are studying is not astronomy.” Whenever I read an article from some scientist reporting on what happens in the brain while his subjects think about moral matters, my answer is quite similar. “Hey, doc. I’m not saying that the work you are doing isn’t important. What I am saying is that what you are studying is not morality.”

Failure to recognize and respect this distinction generates a great deal of wasted effort and confusion.

1 comment:

Josh said...

Great Post! I've been giving a lot of thought to Secular Morality, especially by looking at things like Game Theory. This fits in with what you're saying. I find it very odd that people would think that you can find moral truth by looking at brain scans.

Although, it would be interesting to see if there was some correlation between right and wrong thoughts and the parts of the brain they occur in. I doubt if there is, but it would be a neat study.