Thursday, June 02, 2011

Moral "Should" Statements

A couple of weeks ago, I began what one member of the studio audience has called a reboot of desirism by talking about 'should' statements.

(1) The only sensible answer to a "should" question (e.g., Why should I do X?) is to present the agent with some reason for action that exists, or some fact that ties the action or its consequence with some reason for action that exists.

That was the last time I talked about the word "should" in its prescriptive sense.

Instead, I went into a series of posts imagining that you are an intentional agent motivated to realize that which you desire - surrounded by other intentional agents motivated to realize what they desire.

Under these assumptions, I asked what you could do as an agent with a desire that P to get another agent with a desire that Q to realize P - or at least refrain from realizing not-P.

I discussed four options:

(1) Bargaining: If you help me to realize P, I will help you to realize Q.

(2) Threatening: Unless you act to realize P, I will act to realize not-Q.

(3) Belief modification: Give the agent with a desire that Q those beliefs that will motivate him to try to realize Q with actions that would realize P.

(4) Desire modification: Instead of taking his "desire that Q" as a given, modify those desires so that the agent has desires that he will tend to realize through actions that realize P.

For example, I argued that in a simple bargain, if you fulfill your side of the bargain before your counterpart fulfills his, your counterpart will lose all motivation to complete his part. Realizing P will cease to be instrumental to realizing Q. P will only be realized if your counterpart has some additional motivation for realizing P after you have done your part to realize Q.

I discussed the options of reputation and an aversion to breaking promises - of which the second would be the more reliable motivation. Your "desire that P" implies a motivating reason to seek out bargains with others whom you have reliably determined to have an aversion to breaking promises.

And what is true of you is true of those other agents.

These are facts about the world - implications of an agent with a desire that P bargaining with an agent with a desire that Q.

Yet, in this discussion and others like it, I did not draw any conclusions expressed in the form of what you "should" do. I did not prescribe any action - I simply described the actions that were compatible with your desire that P.

Now, I want to bring back the claim that "should" has to do with "reasons for action that exist," and desires are the only reasons that exist. Should statements ARE descriptions of actions compatible with given desires that exist.

When I say, "You should do X", a sensible question to ask is, "Why should I do X?"

The sensible answer to this question is for me to describe the relationship that exists between the action that I am recommending and the reasons for action that exist. Reasons that do not exist are not relevant to what you should really do. And desires are the only reasons for action that actually exist. So, my answer to your "should" question is to relate the action to various reasons for action (desires) that exist.

At this point, we can divide these reasons for action that exist (desires) into two groups. There are reasons for action that you have, and reasons for action that exist but that you do not have. This second group consists of the reasons for action that other intentional agents have. It refers to desires that exist that are not your own. They are real. They exist. However, they are not yours, in the same way that other people's hands and feet are real but are not yours.

You are only going to be directly motivated by the reasons for action (desires) that you have - not by all of the reasons for action (desires) that exist. Your desires motivate your actions. The desires of other people motivate their actions. A claim that a particular reason for action exists does not motivate you to act directly unless that reason for action that exists is one that you have.

However, this is not all that can be said about reasons for action that exist - but that you do not have.

While those reasons may not motivate you directly, they are reasons for other people to act in particular ways that will affect you. They are reasons that exist for other people to bargain with you or threaten you. They are reasons that exist that determine whether they will keep or break bargains, tell you the truth, or act so as to realize not-P.

For the purposes of this series, one important fact is that they are reasons that exist for them to act so as to modify your desires - to give you different reasons for action. That is to say, they have reasons to use reward (such as praise) and punishment (such as condemnation) to trigger your reward-learning system in a way so as to create and strengthen in you certain desires, and to weaken or eliminate others.

In that sea of reasons for action that exist, there are a great many and strong reasons for promoting (or inhibiting) some desires - such as the desire to keep promises, to tell the truth, to refrain from threatening those who do not make threats, and the like. I can make real-world claims about the desires you have or could have and the sea of reasons for action that exist for offering rewards and condemnations.

When I say, "You should not lie" in this sense - the moral sense - I am not saying that you HAVE reasons not to lie. I am saying that there exist a great many and strong reasons for people to cause you to have a reason not to lie. I am saying that they have many and strong reasons to offer rewards (such as praise) to those who are honest, and to offer punishments (such as condemnation) to those who lie.

But I am not just making these factual claims. I am also, at the same time, giving praise to those who are honest, and condemning those who lie. I am not only stating that reasons exist to trigger the reward-learning system so as to promote honesty and discourage lying, I am trying to trigger the reward-learning system so as to promote honesty and discourage lying.

There are theories that say that moral claims aim to point out some fact that, itself, would motivate an agent to behave differently. No such facts exist. Beliefs only interact with the desires that an agent already has - they do not create new desires or modify existing desires. The reward-learning system modifies desires. But the reward-learning system does not respond to facts. It responds to rewards (such as praise) and punishments (such as condemnation).

You can respond sensibly to my claim that you should not lie by providing evidence that people, in fact, do not generally have reasons to praise those who are honest and condemn those who are dishonest. Or, you can respond to a claim that homosexual acts are wrong by pointing out that people do not really have reasons to praise those who refrain from homosexual acts and condemn those who engage in such acts. Thus, the praise or condemnation you offer is not, in fact, praise or condemnation that people actually have genuine reasons to give. It is unjustified praise and condemnation, grounded, ultimately, on the false beliefs or malicious interests (interests or desires that people generally have reason to condemn) of those who provide it.

You may respond that this is not what you mean by the word "should", or that you do not agree with the claim that this captures how the word is actually used. Neither of these counter-claims is actually worth a great deal of effort. Neither proves that the substantive claims of this theory are false. They are merely disagreements over the language used in expressing those substantive claims, not the substantive claims themselves.

Regardless of the words people actually use, the substantive claim that people generally have many and strong reasons to use rewards (such as praise) and punishments (such as condemnation) to promote a desire to be honest and an aversion to lying remains true. The fact that you - and people generally - often have reason to bargain only with those who have an aversion to breaking promises remains true. They are true no matter what language you decide to speak when making these claims.

3 comments:

Anonymous said...

I'm reading chapter 20 of "Desire Utilitarianism" but your page is down: http://www.alonzofyfe.com/cgi-sys/suspendedpage.cgi

Visitors, we are sorry, however, this site is experiencing difficulties at this time. Please return later.

Austin Nedved said...

When I say, "You should not lie" in this sense - the moral sense - I am not saying that you HAVE reasons not to lie. I am saying that there exist a great many and strong reasons for people to cause you to have a reason not to lie. I am saying that they have many and strong reasons to offer rewards (such as praise) to those who are honest, and to offer punishments (such as condemnation) to those who lie.

There seems to be a problem here. If I personally have no reasons not to lie, and doing so would overall benefit me, there can be no possible reason why I should not lie. (Suppose I am unbothered by the negative consequences that others would inflict on me for lying.) This results in an absurd situation in which it is "reasonable" for me to lie, while it is also reasonable for others to try to prevent me from lying. It is absurd to suggest that reason could ever demand that two agents pursue mutually exclusive goals, or that it would ever be "reasonable" for two different people to take actions that "clash" with each other in such a way.

Fortunately, I think there is a solution to this problem, and this solution involves distinguishing between two different sorts of "oughts": "ought" in the nonmoral sense, and "ought" in the ethical sense. We are using the term nonmorally when we say something like "If you want your car to have a long life, you ought to change the oil frequently." "Ought" is being used in the ethical sense when we say something along the lines of "I understand that, while murdering that person might benefit you, you ought not to kill him."

"Should" in the first sense refers to what you "should" do if you want to achieve a goal that has no real bearing on anything ethical. But what does "should" the second, morally relevant sense, refer to? This, unfortunately, is not immediately clear. Think of what Heidegger said about our understanding of the meaning of the word "truth":

But in calling for the actual 'truth' we must already know what truth as such means. Or do we know this only by 'feeling' and in a general way? But is not such vague 'knowing' and our indifference regarding it more desolate than sheer ignorance of the essence of truth?

Something very similar is true with respect to our understanding of the moral meaning of the word "ought." Literally, all that we can say initially is that it is what one "should" do; and, as we can see, this does not help us understand its meaning at all. So how do we go about clarifying its meaning?

The first step we should take here is to try to understand where the meaning of this term can be found. It seems that we contain within ourselves the clearer understanding of the term that we so desperately seek. But while we may contain this deeper understanding within ourselves, this comprehension does not exist within our conscious mind. So it seems that what we need to do is to bring it to consciousness. This must be done through language, through words.

[Your blog will only let me leave comments that are under 4,096 characters, and my comment is much longer than that. See below for the other part(s) of it.]

Austin Nedved said...

You wrote:

You may respond that this is not what you mean by the word "should", or that you do not agree with the claim that this captures how the word is actually used. Neither of these counter-claims is actually worth a great deal of effort. Neither proves that the substantive claims of this theory are false. They are merely disagreements over the language used in expressing those substantive claims, not the substantive claims themselves.

What I am arguing (or what I will argue in just a moment) is that the way you are using the term "ought" here is not consistent with our implicit understanding of what it means. I can define "justice" to mean "a state of affairs in which I win the lottery," but surely that is not consistent with our implicit understanding of what justice is!

Perhaps if we wish to understand what "ought" in the moral sense means, we should look to how we use the term in the nonmoral sense. In the latter sense, we generally take "should" to mean "what you must do if you want to satisfy your desire for state of affairs S." "Ought," therefore, refers to what we must do if we wish to satisfy our desire for a given state of affairs. In light of this, it would not be implausible to suggest that the morally relevant sense of the term has something to do with satisfying desires as well.

The moral sense of "ought" transcends all other oughts. No matter how badly someone might want to take an action that is unethical, she "ought" not to do it. If both senses of "ought" have something to do with desires, then it must be the case that acting against the ethical sense of the concept would result in one's deepest and strongest desire going unsatisfied. It should go without saying that what we desire most is justice.

"Ought" in the morally relevant sense therefore refers to what we must do if we are to satisfy our desire for justice. When we say "action X might benefit you, but you still ought not to do it," we are saying "X might benefit the part of you that does not desire justice, but you still should not take action X, because if you do, your desire for what is just and what is right will go unsatisfied."

Determining what one "ought" to do therefore requires an understanding of what justice is. And if you've been paying attention, you will realize that, like "ought," "justice" is another one of those extremely difficult-to-define concepts. It seems to me that any theory of ethics which places as great an emphasis on satisfying desires as yours does cannot possibly afford to be neglectful of justice.