Friday, June 15, 2018

Revisiting Reasons: Part 2

I have received a response from my (potential) thesis adviser regarding my initial presentation of ideas.

I addressed her concerns in a return email which is partially reproduced below:


In my last email I sought to establish that what The Stanford Encyclopedia of Philosophy calls “the Humean Theory of Reasons” is inconsistent with other things that a Humean says about reasons. If this is essential to the Humean Theory of Reasons, then that theory is self-contradictory. But, we can remove the contradiction, preserving almost everything else that Hume says about reasons, by adopting the proposed alternative:

If there is a reason for someone to do something, then there must be some desire that would be served by her doing it, which is the source of that reason; and if she has a reason to do something, then she must have a desire that would be served by doing it, which is the source of that reason.

Recall, my argument was that “if there is a reason, then Sally has a desire” is false - and false under assumptions that a Humean would accept. When we looked at Jim's aversion to pain, we found an instance where the antecedent is true (there is a reason), but the consequent is false (Sally has a desire). The best alternative is: "There is a reason" implies "there is a desire," and "Sally has a reason" implies "Sally has a desire."

Now, as to your responses:

You wrote:

The view that "there is a reason" implies "somebody has a desire" is a version of utilitarianism.

I see two ways to respond to this:

Response 1: If it is true that this is a version of utilitarianism, then so be it. It is better to have a version of utilitarianism than the false conditional.

Response 2: This isn't actually a version of utilitarianism. It appears to be - and, for a long time, I thought it was. However, it is not. When we combine this with the Humean Theory of Motivation, we don't get utilitarianism; we get substantially what Hume defends in An Enquiry Concerning the Principles of Morals.

You wrote:

On the so-called "Humean Theory," Jim's desire to avoid pain is, by itself, no reason for Sally to do anything unless Sally cares about Jim or something to that effect.

Yes, this is true.

On the account given so far, where Jim and Sally each have only an aversion to their own pain, Sally has no reason to avoid any action that causes Jim pain. Sally only has an aversion to her own pain. So, if Sally can avoid the slightest scratch on her finger with an action that would cause Jim to suffer excruciating pain, Sally has a reason to (and no reason not to) perform that action.

But, Jim - if he can find a way - has a reason to cause Sally to “care about Jim or something to that effect.” Jim knows that if he can pull this off, he can cause Sally to avoid actions that would cause him excruciating pain, and his own aversion to excruciating pain means that he has a reason to try to pull this off.

Let us assume that Jim discovers a drug that will cause Sally to acquire an aversion to causing pain. Jim has a reason (his own aversion to pain) to cause Sally to take the drug. For example, he has a reason to sneak it into her tea. Once Sally has taken the drug, if she is faced with a situation where she must choose between a scratch on her finger or Jim’s excruciating pain, she now has a reason to prefer the scratch on her finger and save Jim from excruciating pain.

Similarly, Sally has a reason to sneak the drug into Jim’s tea. Indeed, they both have a reason to add it to the community water supply. This means that each might experience slight pains whenever the alternative is to cause excruciating pain to others, but each would also be surrounded by people who would choose a slight pain for themselves rather than cause excruciating pain to others - pain from which the agent herself might be the one saved.

The Reward System

Now, let's throw away the drugs.

Instead of drugs, let us assume that the beings in this example have something like a reward system. The way this system works is that one can create an aversion to causing pain in others by rewarding/praising those who choose options that avoid causing pain to others, and punishing/condemning those who choose options that cause pain to others. In the same way that Jim had a reason to slip the drug into Sally's tea, Jim has a reason to use rewards and punishments in the ways described. So does Sally. In fact, in a community of beings of this type, we can say that everybody has a reason to use rewards and punishments to promote universally (in everybody else) an aversion to causing pain to others.

Please note that rewards and punishments can also be used to provide incentives and deterrence. However, this is not the effect that I am concerned with here. I am interested in the ways in which rewards (including praise) and punishments (including condemnation) change behavior through their influence on character.

One key difference between using threats of punishment to control behavior and altering character lies in their effects when the agent can get away with performing the action. Jim's threat to punish Sally (e.g., "If you cause me pain, I will cause you even more pain in return.") only provides Sally with a reason not to cause Jim pain insofar as Jim might catch her and insofar as Jim has the power to make good on his threat. However, if Jim can create in Sally an aversion to causing pain to others, then Sally has a reason not to cause pain even when she could avoid punishment.

Of course, what I say here about Jim with respect to Sally is also true of Sally with respect to Jim. Indeed, it is true of everybody in this community with respect to everybody else. One of the things we can say about this aversion to causing pain is that people generally in this community have strong reasons to (and weak reasons not to) promote universally an aversion to causing pain by rewarding/praising those who choose options that do not cause pain, and punishing/condemning those who choose options that do cause pain.

When it comes to these malleable character traits, one of the questions we can ask is, "What reasons are there for promoting this particular trait - for praising those who have it and condemning those who do not?" Referring to the alternative account of "reasons there are" that I defended above, "there is a reason to promote this trait" implies "there is a desire that would be served by promoting this trait."

We could, if we so choose, divide the desires that would be served by promoting this trait into four categories. (1) The agent's own desires that would be directly served by promoting this trait. (2) The agent's own desires that would be indirectly served by promoting this trait. (3) The desires of others that would be directly served by promoting this trait. (4) The desires of others that would be indirectly served by promoting this trait.

This, in Hume's language, would be (1) pleasing to self, (2) useful to self, (3) pleasing to others, and (4) useful to others.

This, in turn, is consistent with, "There is a reason to promote this character trait" implies "There is a desire that would be served by promoting this character trait."

This is still consistent with the claim that an agent has a reason to promote a character trait only if he has a desire that would be served. But, if he does not have such a desire, other people have reasons to give him one.

Not Utilitarianism

In closing, I would like to note that this is not a utilitarian system. The idea that maximizing utility is a reason for action - that it is a goal worth pursuing - never enters the picture. If utility is maximized as a result, it is an unintended side effect, never valued for its own sake.

It may be that people generally have reasons to promote, universally, a desire to maximize utility, and to employ their tools of rewards and punishments accordingly. But even if they promote this desire, it will be one desire among many. It will find itself in constant conflict with hunger and aversion to pain, concern for one's family, an aversion to lying and to breaking promises, a desire for sex, and a love of reading and writing about issues in philosophy.

Consider, for example, what this has to say about Robert Nozick's utility monster. The utility monster is a creature that gets huge amounts of utility from actions that reduce others to misery - utility that greatly exceeds the disutility of the suffering it causes. That creature may have a reason to reduce others to misery. However, people generally have no reason to create such a monster through their use of praise and condemnation, and many strong reasons to prevent its creation. Once created, people generally have no reason to serve its interests.

Or consider Derek Parfit's repugnant conclusion. This is the idea that a possible world with a huge number of people whose lives are barely worth living can be "better" in terms of overall utility than a world with a smaller number of people enjoying a higher quality of life. On the model here, people generally have more and stronger reasons not to create additional people whose existence would make lives worse than to create them. Indeed, they have reason to discourage others from adding to the overall population - to promote an aversion to doing so.


I agree that, at first glance, it appears that "If there is a reason, then there is a desire" is going to lead to some form of utilitarianism. However, when we add Hume's theory of motivation, we don't get utilitarianism. We get (a version of) Hume's own moral theory as described in An Enquiry Concerning the Principles of Morals. On this account, we look at the "reasons there are" for promoting certain character traits - reasons that serve not only the desires the agent has, but also the desires that other agents have.
