Sunday, December 23, 2018

The Central Problem of Morality - Solved

In this post, I am going to solve The Central Problem of Morality.

That is actually what it is called, according to the Stanford Encyclopedia of Philosophy (SEP), as described in its entry, "Reasons for Action: Internal vs. External."

The problem goes like this:

First, we accept the Revised Humean Theory of Reasons:

(1) The Revised Humean Theory of Reasons: If there is a reason for Agent1 to do something, then Agent1 must have some desire that would be served by doing it, which is the source of Agent1's reason.

We combine this with moral rationalism:

(2) Moral Rationalism: An action is morally wrong for Agent1 only if there is a reason for not doing it.

Here, we must be careful to distinguish between "wrong all things considered" and "prima facie wrong". Proposition (2) describes a prima facie wrong in that an agent can have a reason not to do something (a reason to keep a promise) that can, on occasion, be outweighed by more and stronger moral considerations (preventing nuclear war). This is just something to keep in mind as we proceed.

If we combine these, we get:

(1) + (2) An action is morally wrong for Agent1 only if Agent1 has some desire that would be served by not doing it, which is the source of Agent1's reason not to do it.

Let me explain using the example that the SEP provides:

The Revised Humean Theory of Reasons states that if there is a reason for Hitler not to order genocide, then Hitler must have some desire that would be served by not doing it, which is the source of Hitler's reason. Moral rationalism says that ordering genocide is morally wrong for Hitler only if there is a reason for not ordering genocide. From this, we get the conclusion that ordering genocide is morally wrong for Hitler only if Hitler has a desire that would be served by not ordering genocide.

Now, we take this, and we add a third principle: Moral Absolutism:

(3) Moral Absolutism: Some actions are morally wrong for any agent no matter what motivations and desires they have.

Or, in the case of our example, ordering genocide is morally wrong for Hitler no matter what motivations and desires Hitler has.

The Central Problem of Morality, then, yields a contradiction - there seems to be no way that (1), (2), and (3) can all be true at the same time. (1) and (2) combined say that the moral wrongness of genocide is linked to Hitler's desires; (3) states that it is independent of Hitler's desires. Since wrongness cannot be both dependent on and independent of an agent's desires, we have to give up one of these claims.
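
To make the tension explicit, here is the argument laid out in a rough shorthand of my own (the letters below are my notation, not the SEP's). Read W(a, x) as "doing x is morally wrong for agent a", R(a, x) as "there is a reason for a to do x", and D(a, x) as "a has a desire that would be served by a doing x":

(1) R(a, x) → D(a, x)
(2) W(a, x) → R(a, not-x)
(1) + (2) W(a, x) → D(a, not-x)
(3) For some actions x, W(a, x) holds for every agent a, whatever a's desires are.

Take an agent who has no desire that would be served by refraining from such an action. By (1) + (2), the action is not wrong for that agent; by (3), it is. That is the contradiction.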

As I argued in "The Revised Humean Theory of Reasons Further Revised (RHTRFR)," I would give up on (1), which happens to correspond with a rather specific interpretation of (2).

Recall that the RHTRFR distinguishes between "there is a reason for Agent1 to do something" and "Agent1 has a reason to do something." It equates "there is a reason" with "there is a desire that would be served" and "Agent1 has a reason" with "Agent1 has a desire that would be served." More specifically:

(1') The Revised Humean Theory of Reasons Further Revised (RHTRFR): If there is a reason for someone to do something, then there must be some desire that would be served by doing it, which is the source of that reason. And if Agent1 has a reason to do something, then Agent1 must have some desire that would be served by doing it, which is the source of that reason.

Now, this requires a reinterpretation of (2). More specifically, we should split up (2) in the same way we split up (1). We can keep (2) as is:

(2) Moral Rationalism: An action is morally wrong for Agent1 only if there is a reason for not doing it.

Then we add (2'), just for reasons of clarity. We are not going to actually use (2') in solving The Central Problem of Morality, but it will be useful to have it roaming around in one's mind for proper context.

(2') Practical Rationalism: An action is practically wrong for Agent1 only if Agent1 has a reason for not doing it.

Again, we are talking about a prima facie wrong, not an all-things-considered wrong.

So, now that we have this distinction, we can combine (1') with (2) and get:

(1') + (2) An action is morally wrong for Agent1 only if there are desires that would be thwarted by Agent1 doing it, which are the source of reasons not to do it.

Note that, in this retelling, the desires need not be Hitler's desires. Indeed, they can be the desires of the Jews and others that would be thwarted through genocide, which makes moral sense.
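
In the same shorthand as above (again, my own notation, not the SEP's), let D(b, a, x) read "agent b has a desire that would be served by a doing x" and H(a, x) read "a has a reason to do x". Then:

(1') R(a, x) → there is some agent b such that D(b, a, x); and H(a, x) → D(a, a, x)
(2) W(a, x) → R(a, not-x)
(1') + (2) W(a, x) → there is some agent b such that D(b, a, not-x)

The right-hand side of (1') + (2) says nothing about Agent1's own desires, so it does not conflict with (3): ordering genocide can be wrong for Hitler no matter what Hitler wants, so long as somebody - the intended victims, most obviously - has desires that would be served by his not ordering it.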

At this point, one may ask questions about how we balance the reasons of the Jews against genocide with Hitler's reasons for genocide. These are important questions that ought not to be ignored. However, I do not have the space here to address them. Consequently, I will save that discussion for a future post. Very quickly: morality is not concerned with the desires that Agent1 has but, instead, with the desires that Agent1 should have - and the desires that Agent1 should have are the desires that people generally have reasons to cause everybody to have. People generally have reasons to cause everybody to have an aversion to committing genocide. It is in virtue of this fact that Hitler's desires to commit genocide are identified as evil desires.

Setting that problem aside for a moment, we can see that we at least have an answer that is compatible with (3):

(3) Moral Absolutism: Some actions are morally wrong for any agent no matter what motivations and desires they have.

The wrongness of Hitler ordering genocide does not depend on Hitler's having a desire that would be thwarted by ordering genocide. It is grounded in the fact that people generally have reasons to cause Hitler to have an aversion to committing genocide. More specifically, it is grounded in the fact that people generally have reasons to condemn and to punish people like Hitler. And that is what makes his actions wrong.

The solution can be found, as I have argued previously, in simply recognizing the proper distinction that relates "There is a reason for Agent1 to do something" with "There is a desire that would be served by Agent1 doing something" and relates "Agent1 has a reason to do something" with "Agent1 has a desire that would be served by doing something."
