Morality requires determinism.
It is often thought that morality requires free will. To say that an agent ought to have done something else implies that he could have done something else, which means that if it is not the case that he could have done something else, then it is not the case that he ought to have done something else.
Determinism - which holds that our behavior is determined by prior causes in the same way that the motion of an object through space is determined by prior causes - says that a person could not have acted differently. The laws of physics determine the motion of every electron, proton, and neutron in our bodies, meaning that they determine the motion of the body itself. There is no free will. It is never the case that an agent could have overthrown these laws of physics and done something else.
Therefore, it is never the case that an agent ought to have done something else. Any claim that they ought to have is false - built on this false assumption that humans have a supernatural capacity to overthrow the laws of physics.
That is the conventional view of the relationship between determinism and morality.
Desirism and Determinism
However, desirism, as a moral theory, actually requires determinism. With desirism, if free will did exist - if humans actually had the capacity to overthrow the laws of physics and act in violation of those laws - this would introduce a complication that the theory could not handle. Fortunately, free will does not exist.
Desirism requires the assumption that our actions are caused by our beliefs, desires, habits, and the like. It requires that desires determine ends or goals and that agents act in a determined way to try to realize those goals. It requires that some desires are malleable - meaning that interaction with the external world can create, strengthen, weaken, or exterminate those desires. It requires that the types of experiences capable of molding desires include praise, condemnation, reward, and punishment. It requires that those who use these tools have the ability to predict, at least roughly, what their effects will be on the desires - and, through them, on the actions - of other agents.
These facts make it possible for one person to influence the actions of another by influencing the causes of those actions, and to influence the causes of action by using tools that themselves have an effect on those causes. Those tools - praise, condemnation, reward, and punishment - act on the reward center of the brain to alter the desires of other agents. If we throw free will into the soup, the system becomes a hopeless muddle. It is best to leave it out - unless somebody can provide real evidence that free will is real.
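The causal chain described above can be illustrated with a toy sketch, under the simplifying assumption that a "desire" is just a numeric weight and that praise and condemnation adjust those weights directly. The names and numbers are hypothetical; this is an illustration of the mechanism, not part of the theory's formal apparatus.

```python
# Toy model: a "desire" is a numeric weight, and praise or condemnation
# acts on those weights the way reward acts on the brain's reward center.

def choose(desires):
    """A determined choice: the act backed by the strongest desire wins."""
    return max(desires, key=desires.get)

def condemn(desires, act, strength=2.0):
    """Condemnation weakens the desire that caused the act."""
    desires[act] -= strength

def praise(desires, act, strength=2.0):
    """Praise strengthens the desire that caused the act."""
    desires[act] += strength

# A hypothetical agent whose desire to take the money currently dominates.
agent = {"take the money": 3.0, "leave the money": 2.0}

first_act = choose(agent)      # determined by the current desires
condemn(agent, first_act)      # the community responds to a bad act
second_act = choose(agent)     # the same determined procedure, new desires

print(first_act, "->", second_act)  # take the money -> leave the money
```

Nothing here requires free will: both choices are fully determined, yet condemnation changes the causes of action and thereby changes the action.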
Praise, Condemnation, Free Will, and Malleable Desires
The question remains, "How can you condemn somebody by saying he should have done something else when he could not have done something else?"
One counter-question relevant to this question is, "How is it the case that praise and condemnation are a legitimate response to an act of free will?" Perhaps a legitimate response to a good act is to spin clockwise, and to a bad act is to spin counter-clockwise. That makes no sense - there is nothing in a good act that actually implies that agents should spin clockwise, or in a bad act that implies that an agent should respond by spinning counter-clockwise. However, it is also the case that there is no implication from the fact that an act was a product of free will that this justifies condemnation (if it was a bad act) or praise (if it was a good act).
Desirism has a way of linking good and bad actions with praise and condemnation. Praise and condemnation serve a purpose. The reason for using these tools is their impact on molding desires. This allows agents to promote desires they have reason to promote and inhibit desires they have reason to inhibit. A bad act is caused by malleable desires that people have reason to inhibit or eliminate, or by the lack of malleable desires that people have reason to create or strengthen. Praise and condemnation are the tools to be used to inhibit or eliminate bad desires, or to create and promote good desires. This explains why bad actions are to be met with condemnation and good actions are to be met with praise.
On the issue that it is wrong to condemn somebody unless they could have done otherwise, desirism uses the compatibilist account of "could have done otherwise". This account reduces "could have done otherwise" to "would have done otherwise if he had wanted to," and "could have wanted to." "Could have wanted to," in turn, refers to the fact that the relevant wants are malleable desires - desires that can be molded through acts such as praise and condemnation. "Ought" implies "can" comes from the fact that it only makes sense to apply these tools where desires are malleable - where, in a sense, the desires of agents can be changed.
Choice in a Determined World
A final concern on the issue of free will is the question of how to deal with the illusion of choice. When an agent is deciding whether to take the money out of a co-worker's desk drawer, he seems to actually have a choice. Determinism says there is no actual choice - choice is, at best, an illusion. He either will or will not take the money and, whichever he does, he does necessarily.
Computers have shown us how choice is possible in a determined world.
A chess-playing computer goes through a set of possible moves, measuring the outcome of each option, and then deciding on the option with the highest value. It evaluates the different moves available to it as if each move were a genuine possibility. It evaluates KP2-KP4 and determines the value of the resulting states as if it actually has the option of choosing KP2-KP4, and it evaluates QP2-QP4 and determines the value of its resulting states as if this were a real option. Then, it picks the option that realizes the highest value.
We can ask a question about the machine, "Why is it evaluating these options as if they are real options when all outcomes are determined?"
The answer is that it is because the outcome is determined by a system that examines each option and determines the value of resulting states. When the computer examines KP2-KP4, this is a possible move in any sense that matters. The sense that matters is the sense that, if the computer determines that this move would produce the result with the highest value, then the computer will make this move.
In other words, the computer "could have done otherwise if it had wanted to".
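This style of deliberation can be sketched in a few lines, assuming a hypothetical value function in place of real chess evaluation (the move names follow the post's own notation; the numeric values are invented for illustration).

```python
# A minimal sketch of deliberation in a determined system, in the spirit
# of the chess program described above.

def deliberate(options, value):
    """Examine every option as a live possibility; the option whose
    predicted outcome has the highest value determines the act."""
    return max(options, key=value)

options = ["KP2-KP4", "QP2-QP4"]

# With these (hypothetical) values, the machine necessarily plays KP2-KP4.
act = deliberate(options, value=lambda m: {"KP2-KP4": 0.7, "QP2-QP4": 0.4}[m])

# Yet it "could have done otherwise if it had wanted to": change what it
# values, and the very same determined procedure yields the other move.
other = deliberate(options, value=lambda m: {"KP2-KP4": 0.2, "QP2-QP4": 0.9}[m])

print(act, other)  # KP2-KP4 QP2-QP4
```

The counterfactual is about the values, not about an escape from causation: each run is fully determined, but the determination runs through the evaluation of every option.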
This does not prove that computers are moral agents (yet), but it does demonstrate how choice is possible in a determined system. Technically, it even provides a clue as to what else is needed to provide computers with artificial morality. They need a system by which they can alter their desires (the values they assign to different end-states) based on their interactions with the environment, and other systems need a way to mold those values by influencing the interactions between such machines and their environment.
Conclusion
Morality is not only possible in the absence of free will, morality requires determinism. It requires that actions be caused by beliefs, desires, and other mental states. It requires that some desires are malleable in the sense that interactions with the environment will influence their strength and even their existence. It requires that agents can mold the desires of other agents by molding the relevant interactions with the environment. It requires that the desires of agents give them reason to influence the relevant interactions between other agents and the environment - to provide praise and condemnation, rewards and punishment.
Free will, if it existed, would only mess up the equation.
Wednesday, August 22, 2012
Morality, Free Will, and Determinism
Posted by Alonzo Fyfe at 8:18 AM
2 comments:
Great post - it makes it easy to see the connections between determinism and desirism, and how both stand against the idea of free will. I'm new to this way of thinking, but your blog is one of the best I've seen.
You say "This allows agents to promote desires they have reason to promote and inhibit desires they have reason to inhibit." Do you mean by promoting "calculating the outcome"? If so, is it helping the morality problem? Isn't it just a matter of calculating errors? If I decide to steal, I might have made the calculation that it is risk free. If I don't get caught, was it still immoral?