Friday, April 04, 2008

E2.0: Adam Kolber: Brain Studies and the Law

This is the 29th in a new series of weekend posts taken from the presentations at the Salk Institute's "Beyond Belief: Enlightenment 2.0." I have placed an index of essays in this series in an introductory post, Enlightenment 2.0: Introduction.

There are tough moral questions. Anybody looking for a moral theory into which they can simply plug a set of variables and instantly get out a proposition about what is right and wrong, without doing any significant work, is in for a tremendous disappointment. The only moral system that can accomplish this is one where the individual simply makes up the answer. Unfortunately, that 'morality' is a make-believe morality with no real-world application.

Adam Kolber, a visiting fellow at Princeton University's Center for Human Values, came to the Beyond Belief conference to discuss some of the implications that advances in biotechnology have for the field of law. Since his topic was not the implications for what the law is, but the implications for what the law ought to be, he was actually talking about the moral implications of these advances in our knowledge. Some of these qualify as tough moral questions.

Extrapolating from Kolber’s presentation, imagine the following scenario.

A woman comes into the emergency room after having been severely brutalized in an attack. Research in brain science has revealed a treatment that can cause a person to forget the events of the past few hours. Research shows that those whose memories have been erased have an easier time dealing with the trauma because they only know about it in the third person. They are not constantly reliving the memories of the event. However, if this woman gets this treatment, then she will not be able to identify the person who attacked her, or recall anything about the crime that might help investigators.

So, what is a person’s obligation to endure life-altering trauma for the sake of helping to catch a perpetrator?

Also, if it is possible for people to use this technology to avoid some of the more harmful aspects of a crime, then to what degree should we hold perpetrators responsible for harm that the victim could have avoided? For example, there may be a drug that dampens the emotional impact of recent memories such that, if a woman is raped and receives the drug shortly afterward, she tends not to be so heavily traumatized by the rape. So, can this be used to argue that rape is now a less serious crime, because (at least for those who take the drug) it does less harm? Is the rapist, or the victim, responsible for the harm done if the victim refuses this treatment?

From a desire utilitarian perspective, I can examine these types of questions and state, with all possible certainty, that I do not know how to answer them. Desire utilitarianism admits that there are a lot of factors to be weighed, and it is not easy to weigh them. For the most part, it would be useful if different communities adopted different standards so that we can see the effects, and then make future decisions based on that data.

One of the contributions that I can make to this discussion as an ethicist is to say that certain concerns that some might bring to the table simply do not matter. They postulate values that do not exist, or they mischaracterize what are nothing more than desires so as to give them more weight in moral calculations than they deserve.

For example, Kolber mentioned some work that the President's Council on Bioethics recently did on the question of enhancement – for example, using chemicals to improve strength or memory. Many of the members who contributed to the Council's findings clearly held that there is something intrinsically valuable in that which is 'natural' – that 'enhancements' are immoral because they are unnatural.

Well, there is no such thing as intrinsic value. What we are dealing with here is not something that has intrinsic merit. Instead, a certain group of people either (1) acquired a desire for that which has intrinsic merit and a false belief that what is natural has intrinsic merit, or (2) simply acquired a desire to preserve that which is natural and wish to give it more weight, so they misrepresent their desire as a perception of intrinsic merit – as something more important than a mere desire.

Sorry, no. There is no such entity as intrinsic merit. 'Natural' has value only to the extent that natural states of affairs tend to fulfill good desires. A desire for that which is natural is good only to the degree that it tends to fulfill other desires. In order to make a moral case for that which is natural, we have to defend the desire for that which is natural as a desire that tends to fulfill other desires. It does not identify anything of intrinsic merit.

This makes the question a little easier to answer because we have removed some of the clutter. Or, to the degree that we allow people to bring this clutter into the discussion, to that degree we risk reaching conclusions (based on false premises) that simply are not true. We would, in other words, be writing moral fictions into the law – which means that people would suffer the thwarting of desires for no good reason.

Of course, religious reasons for various moral conclusions can also be thrown out. The desire to please God or to serve God never provides a legitimate reason for action. These are fictions that lead us away from the correct real-world moral option. They are fictions that, if used, will result in the thwarting of desires that could otherwise have been fulfilled, or will bring about states of affairs that people have real-world reasons to avoid.

Another issue raised by advances in the science of the mind is mind-reading. We are already able to determine which parts of the brain are active during different experiences. As we gain more and more precise information about these phenomena, we will be able to determine more and more of what a person is actually thinking – the actual brain processes behind particular behavior.

Kolber mentions that a substantial proportion of the payouts in tort cases have to do with pain. The plaintiffs in these cases report being in a certain amount of pain, and claim that being forced to endure this pain warrants a particular level of compensation from the defendant.

In economic terms, it is said that the defendant owes compensation sufficient to put the plaintiff back on the same indifference curve. That is to say, the plaintiff is indifferent between having both the pain and the money, and having neither the pain nor the money. Anything below this level of compensation and the plaintiff is suffering uncompensated harm. Anything above it and the defendant is being forced to provide compensation for harms that are not real.
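To make that standard concrete, here is a minimal formalization – the utility function and variables are my own illustrative notation, not anything Kolber presented. Let $U(w, h)$ be the plaintiff's utility given wealth $w$ and harm $h$, and let $w_0$ be the plaintiff's wealth before any award. The compensation $c$ the defendant owes is the amount satisfying

$$U(w_0 + c, \text{pain}) = U(w_0, \text{no pain}).$$

Any award below $c$ leaves the plaintiff with uncompensated harm; any award above $c$ makes the defendant pay for harm that was not done.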

What if brain scans can help to determine how much pain a person is in? In fact, what if brain scans can help to determine whether the plaintiff is lying about his or her pain? In this type of example, let us assume that research shows that one part of the brain lights up when an agent is inventing a story about being in pain or acting like a person in pain, and another part lights up when the agent is in pain as a matter of fact.

So, can plaintiffs be forced to undergo exams to determine the amount of pain they are in? Would this type of examination involve an invasion of privacy? Would it violate the prohibition on self-incrimination?

Kolber does not offer much of an answer to the issues he brought up. Instead, he focuses on pointing out that the information we are acquiring, and the possibilities we are creating with this knowledge, will have legal implications. It will have implications for fundamental concepts of moral and legal culpability, for the degrees and types of harm, for ways of avoiding harm, and for the degree to which agents will be permitted to alter their own minds/brains.

A lot of the possibilities that one could write about are still within the realm of science fiction. However, some of them will eventually go from fiction to fact. More importantly, researchers may well open up possibilities that science fiction authors never imagined.

This intersection between brain science and morality/law promises to be a very interesting area for future research.

1 comment:

  1. There's a simple rebuttal to someone who makes the mistake that "natural" = "good" - just list some natural things that are very, very bad for a person, such as smallpox virus, botulism toxin, hypothermia...