Wednesday, January 02, 2008

Studying Morality through Brain Scans

I have a question from the studio audience about the relevance of scientific brain research to morality. Specifically, when scientists put people into MRI machines and ask them questions, or ask them to make choices, in order to determine how the brain functions in these circumstances, what does this tell us about morality?

Actually, it says absolutely nothing.

Imagine putting an astronomer into a CAT scanner and asking him all sorts of questions about black holes or quasars. You get an image that tells you what parts of the brain were active at the time and the sequence of neural firings.

However, when you do this, you learn absolutely nothing about black holes.

It would be quite bizarre for somebody conducting these types of experiments – doing brain studies of astronomers while they think about astronomical concepts – to claim that he, too, is an astronomer. Measuring the brain states of astronomers thinking about astronomical concepts is not, itself, astronomy.

In particular, while the brain researcher can get a clue as to how the astronomer thinks about black holes, he cannot infer from his brain scans which of those propositions are actually correct. He can get an image of Stephen Hawking’s brain while Hawking thinks about evaporating black holes. However, he cannot use his brain scans as proof that black holes do, in fact, evaporate.

Measuring brain states of people while they think about moral concepts is not, itself, ethics (or moral philosophy). The scientist is measuring what is going on in the brain. In doing so, he can get an image of what happens when an agent concludes that homosexuals should be killed. However, he cannot use his brain scans as proof that homosexuals, in fact, should be killed.

So, how do you ‘test’ desire utilitarianism? How do you prove that desire utilitarianism is true and some other theory is not?

Desire utilitarianism makes a claim about reasons for action that exist. It says that true value statements are statements about relationships between objects of evaluation and reasons for action that exist; that desires are the only reasons for action that exist; that desires are propositional attitudes – mental states that can be expressed in the form ‘agent desires that P’ for some proposition P; and that agents act so as to realize states of affairs in which the propositions that are the objects of their desires are true.

Furthermore, because desires are the only reasons for action that exist, the only reasons for action that exist for promoting or inhibiting certain malleable desires exist in virtue of their relationships to other desires. No other reason for action exists for promoting or inhibiting malleable desires.

These propositions are used to explain and predict intentional behavior. If some other theory comes along that does a better job of giving us explanations and predictions of intentional behavior, and that theory postulates reasons for action other than desires, then desire utilitarianism should be rejected in favor of that other theory.

The fact is, whether a person is good or evil, you can get a brain scan of that person’s mental activity. The idea that you can study ethics by studying brain scans implies that you can look at a brain scan and, from that data alone, make moral judgments.

Take a bunch of brain scans of homosexuals looking at homosexual pornography. Of course there is something going on in their brain when they do this. However, the idea of using scientific research to study ethics suggests that you can look at this data and, knowing nothing else, determine whether homosexuality is moral or immoral.

I would like to know what part of the brain scan one is supposed to look at in order to determine this. What will we see in the brain scan of a homosexual looking at homosexual pornography if homosexuality is moral that we will not see if it is immoral?


My hypothesis is that we will see no difference at all. You can collect all of the brain-scan data you can imagine, and you will still not have any evidence relevant to the moral permissibility or impermissibility of homosexuality.

Take a bunch of brain scans of KKK members while they contemplate the appropriate behavior that whites should have towards blacks. After collecting this stack of brain scans, tell me: which part of the brain scan proves the wrongness of racism? What would it take – how would the brain scans of racists be different – if it were the case that racism is permissible?

What about slavery? What difference would we (theoretically) find in the brain scan of slave owners to prove that slavery is wrong? If slavery were not wrong – if those who said that slavery is permissible were correct – how would brain scans of people contemplating slavery be different so as to prove that slavery is morally permissible?

These may sound like nonsense questions. Indeed, they are. They are nonsense precisely because you cannot study brain scans to get at moral facts any more than you can study brain scans to get at astronomical facts.

Let me use one more example to illustrate a crucial part of this argument.

Imagine a study where research subjects are given a complex mathematical equation to work out in their head. Because of its complexity, people come up with different answers. We take brain scans of all of these people. We even discover that those who get the right answer have different brain processes than those who get the wrong answer, so the researchers can sort the brain scans into two piles – those with feature X, and those without feature X. Let us assume that having feature X corresponds to getting the right answer.

But how do we know what the right answer is?

In order to use this experiment, we have to know what the right answer is before we conduct the experiment. Somebody has to take this complex mathematical equation and actually solve it, using the rules and principles of mathematics, before we can say, “These people got the answer right; those people did not.”
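The dependence can be made concrete with a toy sketch (purely illustrative: the equation, the subjects, and the `feature_x` flag are all made up for the thought experiment). Notice that labeling any scan as "right" or "wrong" requires first computing the answer by ordinary arithmetic, entirely independently of the scans.

```python
# Toy illustration: sorting brain scans into "right" and "wrong" piles
# presupposes an answer computed *outside* the scans themselves.

# The complex equation the subjects worked out in their heads.
# Solving it is mathematics, not neuroscience.
correct_answer = (17 * 23 + 5) ** 2 % 97  # = 64

# Hypothetical scan data: each record pairs a subject's reported
# answer with whether the scan showed "feature X".
scans = [
    {"subject": "A", "answer": 64, "feature_x": True},
    {"subject": "B", "answer": 42, "feature_x": False},
    {"subject": "C", "answer": 64, "feature_x": True},
]

# Sorting the scans into piles is only possible because we already
# know correct_answer from doing the math.
right_pile = [s for s in scans if s["answer"] == correct_answer]
wrong_pile = [s for s in scans if s["answer"] != correct_answer]

print(len(right_pile), len(wrong_pile))  # prints: 2 1
```

The scans can tell us which brains exhibit feature X; only the arithmetic in the first assignment tells us which pile deserves the label "right".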

Again, the same applies to morality. We can take brain scans of people who reach the conclusion that homosexuality is immoral. We can take brain scans of people who believe that homosexuality is not immoral. However, before we can determine which of these two groups has the right answer, we first have to do something (comparable to working out the math problem) that tells us what the right answer is.

What are the rules for getting ‘right answers’ to moral questions?

Is there even such a thing as a right answer to moral questions?

Even somebody who says that there are no right answers – that all there is for us to study are the brain scans themselves – is still trapped by the need to prove (separately from the brain scans) that there are no right answers to moral questions. We cannot get that conclusion by looking at the fact that different people undergo slightly different mental processes when they contemplate a moral issue. Certainly, we can take brain scans of people reaching different moral conclusions. However, to decide that neither of them can be right, like deciding that one of them is right and the other wrong, requires looking at something other than the brain scans.

This is also not to say that morality lies outside of the realm of science. The fact that an astronomer cannot study black holes by looking at brain scans of astronomers thinking about black holes does not prove that the study of black holes is subjective. Nor does it prove that the study of the brain scans is not a science. It proves that the study of black holes is not the study of brain scans – that black holes are not to be found in brain scans.

Similarly, the fact that an ethicist cannot study right and wrong by looking at brain scans of people contemplating moral questions does not prove that the study of morality is subjective. Nor does it prove that the study of brain scans is not a science. It proves that the study of morality is not the study of brain scans – that morality is not something to be found in brain scans.

The astronomer needs to actually study (the effects of) these real objects – bodies of material so dense that no light can escape them.

The ethicist needs to study actual relationships between states of affairs and reasons for action that exist.


Anonymous said...

There are two kinds of what might be called "theories of morality."

One kind of theory attempts to define what kind of questions should be addressed by "moral reasoning" and then proposes answers to those questions. These are often referred to as "prescriptive" moral theories.

Another kind of theory attempts to describe what "ordinary" people mean when they use moral language, how they perform "moral reasoning", and what kinds of conclusions they arrive at when they do perform moral reasoning. This type of theory is often referred to as a "descriptive" moral theory.

To stretch an analogy, a prescriptive theory of arithmetic would say that "1+1=2" is arithmetically right, "1+1=10" is arithmetically wrong, and "John went to the bank" is not an arithmetic statement at all. A descriptive theory of arithmetic would describe how a calculator (or mathematician) decides what answer to give when asked an arithmetical question, such as "What is 1+1?", without giving an opinion on what the answer ought to be.

The "evolution is the origin of morality" people, as well as the brain scan people, are providing theories of the second type. If people have a genetic predisposition to believe that, say, adulterers should be punished, then such a theory claims only that people will, in fact, usually argue in favor of punishing adulterers. It does not claim that adultery is in fact wrong, only that people will say that it is wrong.

We can't learn much about mathematics by putting mathematicians in brain scanning machines, but we might be able to learn about how mathematics is done by doing so. If you're making a theory of mathematicians, then brain scans make sense. People studying how people make moral decisions often describe their work as studying morality, when it's really a study of how morality is done; the two topics have been so conflated that we often use the same word for them.

Studying morality is like studying mathematics in a world where calculators are commonplace, but we have no idea how they work and, what's worse, many of them disagree with each other. In such a world, people studying mathematics might attempt to build their own calculators that produce correct answers, or they might attempt to reverse-engineer the calculators that already exist.

I think I had a point somewhere in there, but I forgot what it was...

Uber Miguel said...


Seems like your analogy is skipping a step. While brain scans can (but certainly do not yet) tell us what makes up the components and processing of an equation, they do not give us any insight at all into what the answer will be, could be, or should be – not without also understanding the process the brain will be using to produce that answer.

In other words, brain scans can help us identify that 1, +, and = are being used, where they are derived, and where the roots of them are stored and processed, but they cannot predict what the outcome will be any sooner than a mind-reading brain scan could actually finish the equation – and that requires knowing precisely what the rules of the equation are.

We're still not even at obtaining a descriptive theory of arithmetic or morality at this point – just a descriptive theory of the inner working parts used in arithmetic or morality. That is, brain scans help us determine the substance and interaction of our desires in a moral process, but to know how the calculation is playing out, you'd have to first know and understand the process.

In the case of an astronomer, without also knowing what the astronomer knows about particle physics (i.e., what particles, black holes, and the interacting processes mean to that particular astronomer), looking at the brain will not allow you to predict the outcome of any astronomical calculation occurring in the astronomer's brain. If you're going to study the brain scans of an astronomer, you'll first have to know the astronomy being used; if you're going to study the brain scan of a moral process, you'll first have to know the descriptive moral theory being used. Brain scans do not provide us with even descriptive moral theories.

Also, to say that desire utilitarianism is just a prescriptive theory is to rule out part of the theory - it has both prescriptive and descriptive aspects. The descriptive aspect deals with isolating and defining the components (1, +, and =) as well as how the components interact. The prescriptive aspect helps us determine what kind of outcomes could be predicted (although not really to the point of being able to entirely predict outcomes on a practical level).

The lines blur a little bit when we understand that DU calculation components are much too ambiguous for such specific calculations to be applied in a general sense. Generally, DU calculations are not "1+1=2" but "a, where p tends to z; and b, where q tends to y; and where c > d, e, f, g in terms of p and q; therefore c" - and perhaps, DU is really a step or two even more generally removed than something like that. In a real-world use of DU, all the variables and their relationships are subject to change depending on the situation. Even between similar situations, outcomes may be different depending on such subtle changes.

This uber-dynamic aspect of moral decisions could very well explain why it has taken us so long to pin morality down: it doesn't quite fit the traditional prescriptive/descriptive distinctions of almost all other theoretical fields. Much of the mess likely comes from the fact that morality is one of the few fields that deals with which relative solutions work best by objectively weighing the probable outcomes concerning subjective brain states. How many other fields of theory so heavily involve aspects of subjectivity, objectivity, relativity, and probability?

Baconeater said...

We can possibly use brain scans to see what parts of the brain are affected by ethical or moral type issues.
For example, if subjects watch a film where an animated pet is shot for no reason, I'm quite sure that parts of the brain that we use when faced with moral situations will light up like crazy.
Once discovered, researchers can go quite a few steps further.

Alonzo Fyfe said...


It would be absurd to argue that brain scans can tell us nothing about brains - including what happens in a brain when presented with a morally charged situation.

However, the brain scan cannot tell us what, in fact, ought to be done.

You can conduct a brain scan of a person reasoning through a problem. However, the brain scan will not tell you whether the reasoning is valid or invalid. A valid inference might have a different signature from an invalid inference. However, the brain scan cannot tell you that a particular inference is valid or invalid – whether the person's reasoning is sound or unsound.

To measure validity and soundness, you have to study logic. And people have been studying logic for millennia, and coming up with solid answers, without doing brain scans. Ultimately, the brain scans are no help – validity cannot be proved by appeal to a brain scan.

Anonymous said...

While it would clearly be a straw man to claim one could develop a morality based on brain scans alone, it is not at all beyond the realm of possibility that the results of these investigations could have some moral implications.
For instance, if the origin of a particular desire were traced to a primitive part of the brain highly resistant to change, we might consider it differently than one coming from a more malleable brain area. The scan would tell us nothing about the moral correctness of the desire, but it might play a role in framing one’s response.

Baconeater said...

Alonzo, some would argue that the closest thing to universal morality is the social contract when it comes to humans, and to go further, much of the social contract is innate in us.
Of course, there is no ultimate right or wrong, but if 96 out of 100 people's brains react the same way when an animated pet gets decapitated, or 94 out of 100 people's brains react the same way when a crazy man throws a baby overboard from a cruise ship, perhaps some assumptions can be made.

Anonymous said...

I suspect that most people don't use a particularly coherent moral theory when they make judgments... moral reasoning seems to work something like this: "This feels wrong to me, therefore there must be a reason why it is wrong. Ah, it must be X." Eventually, even if all the X's the person came up with can be shown to be factually incorrect, people don't change their minds; they resort to "I can't explain why it's wrong, but I know that it is!"

In other words, rationalization, not reasoning.

Consider this: people who grew up with opposite-sex siblings are more likely to describe brother-sister incest as morally wrong even when controlling for just about everything else related to family structure and cultural values.

However the brain's "morality module" works, it doesn't always make rational decisions!

Alonzo Fyfe said...

doug s.

I agree that people allow emotion to interfere with moral judgment.

They also allow emotion to interfere with factual judgment.

If 94% of the people agree that there is a God, the question "Is there a God?" still cannot be answered by looking at whether 94% of the people believe it.

If 99% of the people appeal to their likes and dislikes in determining whether a proposition is true or false, the inference, "I want X to be true; therefore, X is true" is still not a valid inference.

There is a lot of noise in the moral reasoning of a vast majority of the people. But what parts are noise? And what parts are signal? And how do you tell the difference?

Brain scans will not answer these questions.

DM said...

Morality, be it prescriptive or descriptive, is about how people interact socially including what is important to them in both positive and negative senses. If we want a morality that is relevant to humans we need to know what is important to people and what is desired by them. One could interview people, but they may not be fully truthful in their responses or may not give quantitative answers etc. Brain scans are a more objective way to find out what is important to people including what they desire and what they hate.

And of course that is only part of the puzzle; one also needs to fit all these things into a framework and build a morality to organize it, but that is someone else's job.

Alonzo Fyfe said...


Everything you said is true. The thesis that morality is not found in brain states does not imply that brain states are not relevant to morality. Morality is not found in atomic structure, but atomic structure is relevant to morality – particularly if, for example, a particular atomic structure identifies an object as a poison.

There is still a tremendous gap between what a person does desire and what he should desire. The brain scan may give you an answer to the former question, but it cannot answer the latter without some outside help.

DM said...

Ah, but why should someone desire something or do something? Is ethics purely a duty with no underlying reason, or is it rather a contingent part of human existence? I think the latter: we ought to do such and such to better our and others lives. For example, we should tell the truth not because this is a duty, but because it leads to better outcomes individually and as a society.

George Smith, in his book "Atheism", has an interesting analysis of this. I put my interpretation of it on my blog:

Thus, if ethics exists to better human and animal life, then it makes sense that a central focus should be to study what makes human or animal life better. And brain scans may be one way to do that.