Wednesday, September 26, 2018

Korsgaard 02: Prudence

As I read through Christine M. Korsgaard's "The Normativity of Instrumental Reason," I found many things that I wanted to comment on, so I am going to devote a few posts to this subject.

I would like to start with this quote:

Moral requirements, [empiricists] think, must therefore be given a foundation in one of two ways. Either we must show that they are based on the supposedly uncontroversial hypothetical imperatives—say, by showing that moral conduct is in our interest and so is required by the principle of prudence—or we must give them some sort of ontological foundation, by positing the existence of certain normative facts or entities to which moral requirements somehow refer.

I do not know if I qualify as an empiricist. This seems to apply to me, but I reject both options. I do ground moral requirements on hypothetical imperatives - but on the hypothetical imperatives that others have to praise and condemn, not on the agent's own hypothetical imperatives. Thus, they are not grounded on "showing that moral conduct is in our interest and so is required by the principle of prudence." Indeed, I hold that moral conduct is not always prudent - but that it "should be" prudent, in the sense that people generally have reasons to promote those interests that would make it prudent.

But, even on the subject of prudence itself, there is an important ambiguity.

Korsgaard follows the above quote with the following accusation:

Part of the problem is that empiricist philosophers and their social scientific followers have obscured the difference between the instrumental principle and the principle of prudence by making the handy but unwarranted assumption that a person's overall good is what he “really” wants.

I would like to assure the reader that I have not done this. I do not recognize a difference between what an agent wants and what an agent “really wants”. I hold to the Humean claim that an agent has a reason to do X only if the agent has a desire that would be served by doing X . . . full stop.

I do recognize a difference between what an agent believes that she wants and what she wants. That is to say, the belief "I want X" can be false. This can happen because we have no direct access to our own desires and, consequently, must theorize about them. Even though we have a great deal of evidence about our own desires and an incentive to get the facts straight, we sometimes make mistakes. Another source of mistaken beliefs is that we use the term "want" to refer to things we desire as means to other ends, and our beliefs about the relationship between means and ends can be mistaken.

Still, the only sensible interpretation of “I really want X” is “‘I want X’ is true.”

This is not to deny that we have a concept of "overall good". In the same way that you can give the location of an object by describing its relationship to any other object, you can describe an action in terms of its relationship to any of several sets of desires. Consequently, we can describe the relationship between an action and an agent's current desires, the agent's current and future desires, the desires the agent prudentially has reasons to cultivate, the desires that people generally have reasons to cultivate universally, and the like. I fear that the term "prudential" is ambiguous among these various relationships - any given use must be interpreted in light of its context.

I would still limit "has a reason to do X" to "has a desire that would be served by doing X".

And I would argue that "moral requirements" have little to do with what an agent "has a reason to do". They have to do with what an agent "should have a reason to do" - which, in the moral sense, has to do with the reasons that people generally have reasons (in the above sense) to promote universally. As for "overall good," I am inclined to agree with Chris Heathwood's claim that it has to do with living a life in which one's desires were fulfilled at the time one had them.

Korsgaard is going to raise problems for this thesis. She is going to present cases in which a person "has a desire that would be served by doing X" but in which she will want to deny that the agent "has a reason to do X". I will need to look at those cases in detail. Though . . . spoiler alert . . . my answer to them will be to employ the distinction between the reasons an agent has (the practical 'ought') and the reasons an agent should have (the moral 'ought') - and I will make no attempt to reduce one to the other, either directly or through an intermediary concept of "really want" or "overall good". In fact, the two are often in conflict.
