Wednesday, March 02, 2016

Michael Smith: The Rational Requirement of Universalization

If our intrinsic desires themselves are subject to rational requirements, then there must be rational requirements beyond [means-ends rationality]. (Smith, Michael, "Beyond the Error Theory," in A World Without Values: Essays on John Mackie's Moral Error Theory, Richard Joyce and Simon Kirchin (eds.), 2010.)

I hold that there are no rational requirements for intrinsic desires -- that there is no way to evaluate desires -- other than "means-ends rationality". That is to say, the only thing we can say about a desire, in terms of recommending for or against it, has to do with the degree to which it tends to fulfill or thwart other desires.

In a section of the article cited above, Michael Smith examines a suggestion, derived from the writings of Richard M. Hare, that this view is mistaken.

It is a principle of universalization:
U: RR (if a subject has an intrinsic desire that p, then either p itself is suitably universal or the satisfaction of the desire that p is consistent with the satisfaction of desires whose contents are themselves suitably universal).
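To make the structure of U explicit, here is one way of rendering it formally (a sketch in my own notation; "RR(...)" can be read "it is rationally required that ...", and the predicate symbols are glosses I am supplying, not Smith's):

\[
\mathrm{RR}\,\forall s\,\forall p\;\big( D_s(p) \rightarrow \mathrm{Univ}(p) \lor \mathrm{Cons}(p) \big)
\]

Here D_s(p) says that subject s intrinsically desires that p; Univ(p) says that p is suitably universal (it mentions no particulars); and Cons(p) says that the satisfaction of the desire that p is consistent with the satisfaction of desires whose contents are suitably universal.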
To be fair, Hare did not present this as a principle governing the rationality of desires; he presented it as a principle of morality. The further claim is that only the desire to maximize happiness and minimize suffering is universal in this way.

In other words:

To be rational, our intrinsic desires must have contents that are themselves suitably universal – they must mention no particulars – or, at any rate, their satisfaction must be consistent with the satisfaction of desires whose contents are themselves suitably universal.

I wish that Smith had provided an example to help explain this principle. However, he did not, and that leaves me a bit lost as to whether my understanding is even in the right neighborhood.

For example, it makes sense to say that if an agent has a desire that p, then this desire is fulfilled by any state of affairs in which p is true. A person who wants a steak should be equally satisfied by any of a set of identical steaks. It would make no sense for him to say, "I like them all except Number 3," when there is nothing to distinguish Number 3 from any other steak.

However, if this is what is meant by this principle of universalization, I cannot see how it has anything to do with the rest of the discussion. I can't see how it provides even a hint of a challenge to Mackie's claim that there is no rationality of ends.

Instead, Hare seems to be asserting that this principle of universalization requires that we abstract out the particular people in a state of affairs. 

In a hypothetical situation involving three people, for example, this principle of rationality requires that we look at the situation from the point of view of all three people with their individual desires. The only desire that survives this test is a desire for maximum happiness and minimum suffering.

Smith alternatively describes the conclusion as saying that this universal desire would be "to maximally satisfy the desires of all three parties". This is not the same thing as the desire to maximize happiness and minimize suffering, but the difference need not concern us here.

Now, one question to answer is: "Who has this desire for maximum desire satisfaction, and how did he get it?"

Smith discusses and immediately dismisses the possibility that, by imagining oneself in the position of all three participants in this event, one automatically acquires their desires -- which clearly does not happen.

Another option is that there is a principle of rationality that is violated if one imagines oneself in the position of somebody else without acquiring the desires of that other person.

Unfortunately, this second option requires that we augment the principle of universalization with a new, second principle of rationality -- the principle that imagining a situation from the point of view of others rationally requires the adoption of their desires.

Regardless of the merits of this second principle of rationality, the mere fact that it is required tells us that the principle of universalization is insufficient. We need something else to get us from this imagined universal perspective to a state where an agent has an actual desire that influences real-world action.

Here, it is important to point out that Hare presented this universalization principle not as a principle of rationality, but as a principle of morality. This form of universalization does not tell us what rationality requires, it tells us what morality requires. The person who fails to act appropriately on this principle is not irrational, but immoral.

Here, I want to agree with Hare's distinction. I, too, deny that there are any principles of rationality applicable to evaluating desires-as-ends. However, it still makes sense for us to evaluate desires-as-ends in terms of the degree to which they tend to fulfill or thwart other desires. Those answers tell us whether, and to what degree, we have reason to promote certain desires-as-ends and to weaken or inhibit others.

You cannot infer from the fact that there is no rationality of desires-as-ends that there is no morality of desires-as-ends.

Taken as a principle of morality rather than as a principle of rationality, I still have objections to raise against Hare's universalization. It fails, on practical grounds, to provide an answer to the question, "Who gets this desire and how does he get it?" My answer is that desires are evaluated according to their usefulness, and that useful desires are promoted while harmful desires are inhibited using the social tools of rewards such as praise and punishments such as condemnation. Because we cannot reason a person into goodness, we must use other tools. 

7 comments:

ScottF said...

I don't want to get into a long discussion again, especially if it is just abandoned, unresolved. But I am very interested in moral universalization principles, so I wanted to comment on this post. I am myself often unsatisfied with other philosophers' discussions of universalization principles, even by those who use them in their theories. E.g., I consider Hare's idea (taken from C. I. Lewis) that we must imagine ourselves living the lives or experiencing the pleasures/pains of all persons affected by our act to be a heuristic, not the fundamental principle grounding the universalization requirement.

You say:

"I hold that there are no rational requirements for intrinsic desires -- that there is no way to evaluate desires - other than "means-ends rationality". That is to say, the only thing we can say about a desire, in terms of recommending for or against it, has to do with the degree that it tends to fulfill or thwart other desires."

I *almost* agree, with the main issue concerning connotations rather than what is explicitly said. I agree that desires are irrational just to the extent that they involve the fulfilling or thwarting of other desires. I would disagree with the presumption that this can *only* happen when the actual empirical presence of the desire causally thwarts this same desire. I think it is also irrational to desire something which would thwart some desire, including itself. I think this leads to a universalization principle. Note, however, that I prefer the term "value" (as a verb) over "desire," since sometimes the latter is inappropriate for things the agent cannot causally influence, and I also use approve/endorse, which is weaker than desire: desire for X means trying to bring it about when absent; approval of X is simply accepting X as OK if it is present.

This is Kantian in one sense: conflicting values can generate a contradiction in the will, even if they don't cause desire frustration. I disagree with Kant(ians) that this contradiction is generated on non-instrumental grounds; indeed, his very examples generally involve the valuation of instrumental failure.

E.g.: if I want to grow crops, it is irrational for me to value the absence of rain, or the sun's growing dark. Of course, I might have other desires which would be satisfied by these things (I don't want to get wet, etc.), but set that aside: I'm talking ceteris paribus here. The fact that my desire for a dry spell has absolutely no causal influence upon the weather is irrelevant; I am irrational for valuing the absence of adequate means to my valued ends.

Now, if I approve of or endorse my wanting to grow crops--that is, have a second-order pro-attitude towards my being an agent with this desire--then I approve of myself, again ceteris paribus, just insofar as I satisfy the description "an agent who wants to grow crops." Perhaps (indeed, very likely) I only value myself doing this given certain background conditions C, including some so obvious that I don't consciously think of them, like "...when food is needed, is profitable to sell or good to eat, when I have the wherewithal to grow them, etc." So then I approve of myself insofar as I meet the description "an agent in C who wants to grow crops."

But then I implicitly approve of other agents in C who want to grow crops, because the same predicate applies to them. If I say, "No, I just meant myself," then I must ask: why am I special? If it's because I have traits T, then go back and add to C, "...and the agent has traits T," and do over. If it's because I am me, i.e., I am self-identical--well, so is everybody, so again they satisfy this as part of C.

ScottF said...

Part 2 [be sure to read first comment above; as I noted in our last exchange, I fear you sometimes only saw the last comment when I had to split long posts, as you sometimes asked me questions I had previously answered or didn't respond to questions/points made in earlier segments of my comments.]

Or, perhaps, you just don't make any such second-order approval. You just desire to grow crops, period; you don't approve of yourself having this desire. OK...but then if someone else grows crops and this in any way interferes with your doing the same (or something else), you can try to fight them, but you can't make a second-order judgment that what they are doing is in any way wrong, better or worse than anything you're doing. In other words, you can act like an animal with only first-order desires, not a human who makes second-order, moral judgments *about* some of your and others' first-order desires, whether intrinsic or extrinsic. You can, perhaps, be or become such a creature; but you cannot judge being or becoming it as something desirable, approvable, etc., because again that would be a second-order valuation.

With growing crops, such conflicts would be rare, which is precisely why it's usually morally OK. If the intrinsic desire is "I like to kill people," valuing the same in others leads to contradiction--whether this valuation causes others to so act or not--conflict is inescapable, and the desire is wrong because it is non-universalizable. You cannot make a coherent second-order approval of yourself having and acting on this desire, because you cannot coherently approve of other agents acting on exactly the same type of desire given the circumstances in which you have supposedly approved of yourself doing so.

Alonzo Fyfe said...

I am sorry about letting the last conversation drop unresolved. However, 7 lengthy comments fired off in such rapid succession . . . I simply could not find the time to write a comprehensive response. I started to several times, but I had to take care of other things as well.

Anyway, I will definitely try to get a response to you here. Give me some time . . .

Alonzo Fyfe said...

"E.g.: if I want to grow crops, it is irrational for me to value the absence of rain, or the sun's growing dark."

Is 'irrational' the correct word to use here?

Let's take a related case. A person has suffered severe burns. The treatment for burns is painful. The aversion to pain means that the agent has a reason to avoid the treatment.

It seems to me that we can say that the aversion to pain is in conflict with the desire to obtain the benefits of treatment, but to call the aversion to pain irrational -- even if limited to the aversion to that pain caused by the burn treatment -- is a misuse of language.

Now, it would make sense to say that it would be irrational for the agent to choose to be averse to pain, but that is something different.

Similarly, the farmer's aversion to rain or desire to see the sun go dark would be in conflict with the desire to grow crops, but the concept of rationality does not become applicable until the agent has a choice to make.

Each of us is a swarm of conflicting desires. If "having conflicting desires" implies "being irrational," then we are all hopelessly irrational.

Alonzo Fyfe said...


"If I approve of or endorse my wanting to grow crops . . . then I implicitly approve of other agents in C who want to grow crops."

I do not think that this is true.

First, I can think of examples such as, "If I approve or endorse my wanting to marry Sam, then I implicitly approve of other agents wanting to marry Sam."

Second, as a member of a community, we have reasons NOT to want everybody in a community performing the same job. I can be a farmer - and be proud of the fact that I am a farmer. However, I do not want EVERYBODY to be farmers. We need doctors, teachers, builders (engineers), smiths who make the tools, judges, and the like.

(NOTE: This is the standard on which I distinguish the morally required from non-obligatory permissions. Moral requirements concern desires that there is reason to promote universally - across a whole community - such as a desire to help those in dire need and a desire to repay debts. Non-obligatory permissions have to do with areas where we have no reason to promote a uniformity of desires - where we sometimes have reason to promote diverse interests such as with respect to occupation and mating partners.)

ScottF said...

This time it seems that I was the one to drop the ball, apparently because I didn't check the little toggle box to get a notice of your follow-up comments and so mistakenly presumed there were none until I looked again just now.

Your example about a painful treatment for burns ignores my point that the irrationality I describe was "ceteris paribus." Yes, we often have conflicting desires; but they are not usually a hopeless conflicting mass. If the treatment were *not* painful, and conflicted with no other desires, then (presuming the burn is itself painful, or interferes with some other desires) it is indeed irrational to value the absence of, or any interference with, the delivery of the treatment. Of course, almost any action conflicts at least a tiny bit with some of my desires: I usually desire to save time and resources, which most actions or events expend (and even desiring that others spend time and resources to help me with my end E interferes with a possibly conflicting desire that they instead spend those to help me with my end F, even if E and F are not otherwise in conflict). But again, there's nothing "hopeless" about this in a great many cases; we can often judge that my desire for E is stronger than for F, or that I want to satisfy both and therefore value some division of resources between each, etc.

You assert that "the concept of rationality does not become applicable until the agent has a choice to make." I disagree; or perhaps I take the power to choose to include "the choice to value X vs. Y"--though this is clearly not your assumption. I see no reason to limit the concept of rationality in this way. If you do, by a fiat definition of the word, then I can redescribe my point as involving not irrationality, but valuational inconsistency or something like that, and then our dispute becomes merely verbal (perhaps). But then your arguments against those who use the concept of rationality differently become less interesting, I think.

Now to your counter-examples to universalization. The second (approving of myself in C, and hence anyone in C, becoming a farmer) points to the fact that if my disposition to farm in C is rational, C includes some implicit background conditions like "most people don't want to be farmers, or are not likely to be farmers for other reasons, etc." Then universally approving of anyone's farming in C guarantees that not everyone will farm. A disposition to farm in C where C is just "I like farming" is indeed irrational for just the reasons you state; it would be irrational to insist upon farming when everyone else is already trying to do so, and you are thereby contributing to a non-diverse and therefore impoverished economy, society, etc., and implicitly approving of others doing the same. In the unlikely case that everyone really preferred farming as a career, the rational thing to do is accept some procedure other than personal choice to determine each person's career (lottery, etc.). This involves some frustration of desires; but not as much as letting everyone do as they please would (this is yet another example where conflict of desires is possible, but not hard to solve). [Korsgaard pushed this kind of response; Herman & Wood disagreed, but I disagree w/ them.]

ScottF said...

First example: "If I approve or endorse my wanting to marry Sam, then I implicitly approve of other agents wanting to marry Sam." Again, we need to look at possibly implicit conditions of C. Do you want to marry Sam even if Sam doesn't reciprocate? You can want that; but acting to make it happen except in the most mild ways is not very moral. But if reciprocation is an implicit condition, and Sam's reciprocation is unique, you can easily universalize your disposition (and if it's not--Sam prefers another--then your acceptance of your failure is rational/moral, while continuing to pester Sam to choose you is probably not). If Sam's reciprocation is not unique, and you want to marry her anyway, then you must either be disposed to accept being part of a group marriage, or you are not very rational--since your continued pursuit of uniquely marrying Sam in light of Sam's polyamory is cruisin' for a bruisin', as they say.

There are other possibilities; but the point should be clear that you need to flesh out the precise disposition (input conditions C and output valuational behavior B) in question to show that it is not universalizable; I don't think you'll be able to give a detailed example which is not universalizable but also intuitively moral.