179 days until the first class.
Last Thursday, February 23, I was able to attend one of the Philosophy 5100 class sessions. Two topics came up in that session that interest me. The professor asked what we thought of the relationship between neuroscience and moral philosophy, and we discussed the distinction between "harming and not-helping".
Well, I have decided to write a lengthy email addressing those topics - one that presents some of the core ideas of desirism.
I have included a copy of that email below. Or, at least a first draft. As is usually the case, once I think about it some, I may make some changes, but this is how things stand so far:
At the last class I was able to attend, you asked two questions. I would like to give a somewhat detailed answer to both of those questions, as my answers are related.
One question concerned the relevance of neuroscience to moral philosophy.
I think that neuroscientists are looking in the wrong place. Instead of looking at the causes of our moral judgments, they should be looking at the effects. Specifically, they should focus on the effects of reward and punishment, and of praise and condemnation.
In an earlier email, I addressed the relevance of the studies we have examined in this course. There, I argued that if I were to take some random people off the street and ask their opinion on whether human activity increases greenhouse gas emissions, and on the effect of those emissions on the global climate, I would likely learn quite a bit about how the brain works. However, I would be making a significant mistake if I were to call myself a climatologist and claim that my research helped to answer the question, "Does human activity contribute to global warming?"
Similarly, brain scans concerning the formation of moral beliefs are not relevant to whether those beliefs are true or false.
A critic could complain that I am being rash in assuming that moral claims can be true or false in the same way that climate claims can be true or false. However, I would counter that my opponents are being rash in assuming that they are different. Their investigations beg this very question; they assume that the study of the formation of moral beliefs is the same as the study of morality itself.
I do not want to claim that neuroscience is irrelevant. In fact, I believe that it can be relevant. However, the neuroscience that we have been discussing is looking in the wrong place. The useful information that neuroscience can provide does not come from the study of the causes of moral judgments, but from the study of their effects (or the effects of their expressions).
I hope to get away with simply asserting that reward and punishment are core components of morality. This relationship becomes stronger if I can make the additional assertion that praise functions as a type of reward and condemnation functions as a type of punishment. The question, "What is morally right/wrong?" at least overlaps with the question, "What do we have reason to reward or punish; to praise or condemn?"
Neuroscience can provide us with useful information about the effects of reward and punishment (including praise and condemnation) on the brain and, from there, on behavior. This, in turn, can be used to determine what we have reason to reward or punish.
Philosophers and others have focused a lot of attention on one of the effects of reward and punishment. This concerns the use of rewards to provide incentives, and the use of punishments to provide deterrence. In a situation where Agent1 has a "desire that P" and Agent2 has a "desire that Q", Agent1 can offer Agent2 a bargain of the form, "If you help me realize P, then I will help you realize Q." Alternatively, Agent1 can say, "If you do not help me realize P - or if you interfere in my attempts to realize P - then I will prevent you from realizing Q."
I think it is important to note that these types of bargains often have nothing to do with morality. They include such things as, "I will give you $20 if you will let me have that book," and, "If you do not give me $20, then I will not let you have this book." If we are going to characterize morality in terms of this use of rewards and punishments, then we will have to find what distinguishes ordinary bargains from morality.
I think that neuroscience can be relevant to the study of another set of potential effects of reward and punishment. This involves their use in molding the desires and aversions of agents - in creating reasons to perform certain actions, and to avoid others, for their own sake, and not for the sake of obtaining a reward or avoiding a punishment.
Rewards and punishments activate the mesolimbic pathway. Briefly, the mesolimbic pathway begins at the ventral tegmental area, transmits to the nucleus accumbens, and from there to the prefrontal cortex. The prefrontal cortex, in turn, seems to be responsible for conforming behavior to social norms; damage to this area tends to interfere with that ability.
The idea here is that this system takes rewards and punishments (and praise and condemnation) and uses them to extrapolate a set of social rules of behavior and provides the motivation to conform one's behavior to those standards. This function can be compared to the way the auditory system takes sounds and, from them, extrapolates the meanings of terms and rules of grammar – and conforms writing and speech (more or less) to those meanings and rules.
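To make this mechanism concrete, here is a toy sketch in Python of the kind of process I have in mind. It is an illustrative simplification, not a model of the mesolimbic pathway itself; the act types, learning rate, and update rule are all assumptions chosen for clarity. The point it captures is that repeated punishment and condemnation can build an internalized aversion, so that the agent eventually refrains from the act for its own sake, with no expected penalty entering the decision.

    # Toy sketch: social feedback shaping an internalized aversion.
    # All names and numbers here are illustrative assumptions, not
    # claims about the actual neural mechanism.

    LEARNING_RATE = 0.1

    # The agent's learned aversion strength for each act type (0 = none).
    aversions = {"lying": 0.0, "promise-breaking": 0.0}

    def social_feedback(act, signal):
        """Strengthen or weaken the aversion to an act type.
        signal < 0 represents punishment or condemnation of the act;
        signal > 0 represents reward or praise for performing it."""
        aversions[act] -= LEARNING_RATE * signal

    def will_perform(act, desire_strength):
        """The agent acts only if its desire outweighs the learned
        aversion - no expected punishment enters the decision."""
        return desire_strength > aversions[act]

    # Repeated condemnation of lying gradually builds the aversion...
    for _ in range(20):
        social_feedback("lying", -1.0)

    # ...until the agent refrains even when no one is watching.
    print(will_perform("lying", desire_strength=1.5))  # False: aversion is now 2.0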
If this is accurate, it invites us to ask the question, “What social norms do we have reason to cause people to have?"
This, in turn, invites the question, "What do we have reason to reward or punish – or praise or condemn?"
For example, I think that it would be easy to argue that people generally have reasons to promote aversions to act-types such as lying, breaking promises, taking property without consent, vandalism, assault, rape, and murder. People generally have many and strong reasons to prefer to be surrounded by people who have these attitudes, meaning that people generally have many and strong reasons to use rewards and punishment (including praise and condemnation) to create and shape those attitudes.
At this point, I want to look at the points raised in the class discussion of Kamm's "Moral Intuitions, Cognitive Psychology, and the Harming-versus-Not-Aiding Distinction."
Before examining the distinction between harming and not-aiding, I would like to look at a more mundane pair of attitudes - the aversion to one's own pain as distinct from the aversion to the pains that others experience.
Imagine that some sort of omnipotent being were to use its powers to cause a person to feel everybody's pain as if it were their own. That is not to say that it would have the same qualia, but that relieving it would generate the same sense of urgency. One would be as concerned about a person with third-degree burns over 70% of their body as one would be about third-degree burns over 70% of one's own body. At the same time, one would feel the same urgency to end the pain of the dissident being tortured in a distant prison, of each and every person passing a kidney stone, and of every hunger pang of every starving child.
Ultimately, it would be unendurable. We are clearly dealing with an unrealistic “science fiction” story – and a situation that will find no place in real human societies.
The suggestion that we adopt the same attitude towards killings – whether natural or human-made – that we have towards our own acts of killing is as unrealistic as the suggestion that we have the same attitude towards all pains as we have towards our own pain. The same can be said about adopting the same attitude towards all broken promises as one adopts towards breaking a promise, or towards the punishment of an innocent person as one adopts towards punishing an innocent person. In all cases, we can expect the sense of importance attached to not being the author of such an event to significantly exceed the importance attached to preventing such an event.
To tie this discussion in with the previous discussion, I would like to ask about the use of reward and punishment to promote an aversion to the existence of lying, as opposed to an aversion to lying.
The latter aversion can be brought about by punishing (which includes condemning) the person who lies and praising those who are honest. In contrast, the former aversion would require punishing everybody for every lie that is told. We can follow the first prescription easily enough. However, the second would put every one of us under a cloud of permanent condemnation and punishment. Human societies are too large and complex to expect that we can bring about a state of affairs in which, in spite of our best efforts, there are no lies to condemn people for.
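A bit of back-of-the-envelope arithmetic (with made-up numbers) shows why the second prescription collapses. Under the "aversion to lying" regime, the condemnation each person faces tracks only their own lies; under the "aversion to the existence of lying" regime, it tracks every lie told anywhere, and so grows with the size of the society:

    # Toy arithmetic for the scaling point above; the population size
    # and lie rate are made-up numbers, and only the comparison matters.

    population = 1_000_000         # hypothetical society
    lies_per_person_per_day = 0.5  # hypothetical average

    # Regime 1: condemn only the person who lies.
    # Expected condemnation events each person faces per day:
    own_liability = lies_per_person_per_day                      # 0.5

    # Regime 2: condemn everybody for every lie told anywhere.
    # Expected condemnation events each person faces per day:
    universal_liability = population * lies_per_person_per_day   # 500,000.0

    print(own_liability, universal_liability)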
One could object that the condemnation and punishment would be applied only to killings (pains, lyings, punishings of the innocent, breaking of promises) that one can prevent. However, it does not work this way for one’s own pain. I do not just have an aversion to those of my own pains that I can prevent (though I have sometimes wished this were the case).
The aversion to all pains provides motives to research ways to prevent, in the future, those pains that cannot be prevented today. In fact, it is the aversion to pain itself that motivates the hunt for ways to prevent it. If I had an aversion only to those pains that I could prevent, this would likely motivate me to engineer an environment in which I lacked the ability to prevent many pains – thus ridding myself of my aversion to them.
We would need a comparable aversion to the pains of others, including those we cannot prevent, to motivate us to look for ways to prevent those pains with the same urgency that we hunt for ways to avoid our own. Similarly, we would need an aversion to all killings, lyings, punishings of the innocent, and breaking of promises that is comparable to our own aversion to killing, lying, punishing the innocent, and breaking promises to motivate us to hunt for ways to prevent these other sources of harm.
To make a long story short, I think that the neuroscience would show that our capacity to use rewards (including praise) and punishments (including condemnation) to create aversions to killing, lying, breaking promises, and causing pain is significantly different from our capacity to use these tools to promote aversions to killings, lyings, breakings of promises, and pains everywhere.
If neuroscientists were to focus on the effects of moral judgments (particularly the use of reward, punishment, praise, and condemnation), they may find some of the reasons why we have gotten into the habit of using some moral judgments rather than others.
On the Neuroscience of Reward and Punishment and the Harming vs. Not-Aiding Distinction
Posted by Alonzo Fyfe at 8:50 AM, Thursday, March 02, 2017