364 days from today, I will be sitting in my first class. I hope. There is still some uncertainty about that. If somebody wants to just give me $2 million, we can put all of this uncertainty to rest.
This afternoon at 3:30 pm local time I hope to be attending a lecture at the University of Colorado: Prof. Neil Sinhababu (National University of Singapore) on The Epistemic Argument for Hedonism. I have downloaded a copy of his paper on the subject and have gone through it once - and hope to go through it once more before the lecture - so that I can be prepared.
As a side note: one of the interesting things I found in the paper is an argument against the use of intuitions as a foundation for moral knowledge - an argument relevant to a comment I received a few days back. I want to find some time to address that issue in the near future.
Time . . .
I have been writing recently on the issue of time - and how I sometimes waste time on computer games. It is a disposition that I argued is morally objectionable; there are more important things to be spending time on.
This led, in turn, to a discussion of free will and its relationship to moral responsibility. I asked in my last post what it is we are supposed to find in "free will" that is worth having. In the universe as it exists, I can do what I want because I want to do it. That is to say, my desires (combined with my beliefs) are the proximate causes of my intentional actions. What is it, then, that this "free will" is supposed to offer? The option to do something that I have no interest in doing? Why is that important? It is still the case that, if I had wanted to do something else, I could have. I did not do something else because I did not want to do something else.
Quite by coincidence, in my current project of going through all of the episodes of Philosophy Bites, I came to an episode interviewing Daniel Dennett on Free Will Worth Having.
Dennett argued that philosophers are looking at the free will issue all wrong. They are looking at the issue of free will at the level of atoms, when, in fact, it is simply not relevant at that level. Dennett argued, by analogy, that one cannot explain why giraffes have long necks by explaining it at the level of atoms. This product of evolution requires a level of explanation at which organisms compete in nature and the acquisition of favorable genetic traits promotes reproduction. Similarly, he argues that free will is not a fit subject to talk about at the level of atoms. It is a subject fit to be discussed at the level of intentional agents - beings driven to act by their beliefs and desires (intentional states).
I think that I need to be paying more attention to Daniel Dennett. If an opportunity comes up in graduate school to do this, I will latch onto it.
Dennett appears in this podcast episode to be arguing that we have free will because we have the capacity to be unpredictable. Furthermore, we evolved the capacity to be unpredictable as a defense against being exploited. I am not inclined to follow Dennett down that road. I think that free will can be captured quite nicely using the compatibilist definition that it consists in the power to have done otherwise if one had wanted to.
This ties in with the link between free will and morality in that morality then adds the claim, "And you should have wanted to." This, in turn, leads to the proposition that people generally have many and strong reasons to use their powers of reward and punishment to cause people generally to want to do, or to refrain from, certain types of actions.
This, now, leads to another interesting coincidence - a podcast episode on Philosophy Bites interviewing Fiery Cushman on Moral Luck.
This episode is important because Cushman is a research scientist who has been studying the psychology of reward and punishment.
I have generally found it difficult to find reliable information on the psychology of reward and punishment. I am grateful to have found this lead. (I like data. I do not think that philosophy can be done only from an armchair - though I also think that research scientists need to spend more time in the armchair thinking about what they are doing. I am more than happy to provide the armchair work for those who are doing the research work - if only they are willing to listen to the guy in the armchair.)
First, let me link the subject of free will to moral responsibility. The claim seems to be that, without free will, we have no reason to reward or punish people - no reason to claim that they are responsible for their actions. After all, there are documented cases linking criminal behavior with brain lesions such that treating the lesion eliminated the criminal impulse. If there is a cause, then there is no moral responsibility.
However, this common claim does not correctly identify the phenomenon. If a behavior has a cause that is independent of the intentional states we can manipulate using reward and punishment, then we hold that reward and punishment are not applicable. This makes sense - it simply says that the tools are not relevant where they have no effect. But this is fully consistent with the fact that there are other cases where reward and punishment are effective tools.
Here is where we can bring in Cushman's research on moral luck. Cushman was interested in the fact that we punish people at different levels based on consequences of their (wrongful) behavior that are out of the agent's control.
Cushman uses an example in which two people share some drinks, then each gets into his car to drive home.
One agent drives off the road. The cops come and they arrest him for drunk driving. He is fined a few hundred dollars.
The other agent also drives off the road - only he hits a couple of pedestrians on the sidewalk, killing them. He is convicted of vehicular manslaughter - and punished much more severely. Yet the difference in punishment does not reflect a difference in moral character. In terms of character, our two agents may be identical. (Honestly, I sometimes shudder at the thought of how easily a life can take a wrong turn based on moral luck - how somebody morally no different from me could end up with a much worse life simply because they were not as lucky as I have been in that my mistakes have not resulted in others getting hurt.)
In Cushman's research, he found a disconnect between our moral judgments and our intuitions about punishment. People hold that the two agents deserve different levels of punishment, but they do not judge the person punished more severely to be a worse person. Moral luck does not seem to influence our judgments of people.
Cushman also reported on studies showing that people learn more quickly when rewards and punishments are based on consequences rather than intentions. In one study, they had people throw darts at a dartboard, announcing their intended target before each throw. Before starting the experiment, the researchers picked some "good" numbers and some "bad" numbers. Some subjects were rewarded or punished based on their announced intentions, and others based on their results. The subjects rewarded and punished based on results learned to distinguish the good numbers from the bad numbers more quickly.
This research illustrates the principle that there are still reasons to reward or punish in a determined universe - reward and punishment alter mental states and, consequently, influence behavior. Unfortunately, this research focuses on belief acquisition rather than on the use of reward and punishment to alter desires. When it comes to belief acquisition, I can think of a much more efficient method for altering a person's beliefs about good numbers and bad numbers than rewarding or punishing a person throwing darts: just tell that person what the good numbers and bad numbers are.
Research on the influence of reward and punishment on desires still seems scarce. The only area where I have found this subject discussed in any detail is in discussing addictions - the acquisition of desires that thwart future desires. The researchers in that field say that addictions hijack a system that has a common use - but they say almost nothing about that common use. I would not mind seeing some improvement in this area.
The moral of our story is that, yes, brain lesions and the like can provide a defense against moral culpability. This is only because (and when) they place the relevant mental states out of the range of normal reward and punishment. No matter how much you yell at, condemn, or punish an individual, it will not shrink a brain tumor. However, as long as rewards and punishments have effects, people will have reason to use them, even in a determined universe - particularly in a determined universe. The fact that desires are caused is no defense against reward or punishment, as long as reward and punishment are counted among those causes. The fact that desires are caused is no threat to moral responsibility. And the fact that my own interest in computer games was caused is no defense against the fact that people generally have many and strong reasons to condemn those who waste their precious time playing such games.
Monday, August 29, 2016
Free Will and Moral Responsibility
Posted by Alonzo Fyfe at 8:14 AM