A member of the studio audience pointed me to another article that Sean Carroll wrote against Sam Harris' claim that there are moral facts - that there is a fact of the matter regarding what is right and wrong, virtuous and vicious.
Carroll wrote: You Can't Derive 'Ought' from 'Is'
What would it mean to have a science of morality? I think it would have to look something like this: Human beings seek to maximize something we choose to call "well-being" (although it might be called "utility", or "happiness", or "flourishing", or something else).
Here, I want to repeat an objection to Carroll's argument that I mentioned last time. Carroll needs to distinguish between having objections to Harris' theory of morality and having objections to the possibility of a scientific morality in general. Proving that Harris is wrong no more proves that we cannot have a science of morality than proving that the ancient Greeks were wrong about the fundamental particles of matter proves that we cannot have a science of chemistry.
In fact, Carroll is wrong to think that a science of morality has to be one of these options. I argue that morality is not concerned with the maximization of any one thing. Indeed, I view those theories as absurd. You cannot come up with any decent account of how hundreds of millions of years of evolution have designed the human brain to have only one interest - be it Aristotelian eudemonia, Benthamite pleasure, Millian happiness, Singerian preference satisfaction, or Harrisian well-being.
If somebody wants to explain to me how that happened, I would be interested in hearing their story.
Instead, we evolved a number of different interests. We have an interest in sex and, here, we tend to be disposed to find particular physical features as identifying a preferred mating partner. We have an interest in food - and a stronger preference for some types of foods over others. We also have an interest in drinking. We have an interest in being in a comfortable environment. We have an interest in the well-being of our children and of our associates. These interests can be explained in terms of our evolutionary history. We can account for how evolution favored those with some interests and selected against others - like those with an interest in jumping from great heights, perhaps.
Furthermore, evolution has made our desires malleable. Our environment teaches us to like certain things and dislike others. In this, desires are much like beliefs. The belief that there is a tree over there is not genetic. It is the effect of photons striking the tree and bouncing off, then striking the eye, and being processed in the brain in such a way as to generate the belief, "There is a tree over there."
A brain that is malleable enough to form different beliefs depending on how it interacts with the environment is also capable of forming different desires depending on how it interacts with the environment.
Which means that all of us have some power to modify the desires that other people have by controlling the types of interactions they have with their environment. If we respond to certain expressions of desire through praise and condemnation we have the power to cause others to like certain things they might not otherwise have liked, and to dislike certain things they might not have otherwise disliked.
So, any theory that begins by saying that we are out to maximize something has already run into problems. These theories can be discarded - or, at least, they have a lot of work to do to prove that they are worth taking seriously.
Which means that Carroll is mistaken in saying that a science of morality has to be a maximization theory, and mistaken in thinking that by defeating maximization theories he has defeated the possibility of a science of morality.
Here, an astute long-term reader of this blog might raise the question, "Isn't this the same thing you are doing with 'desire fulfillment'? Are you not treating desire fulfillment the same way that Jeremy Bentham treated pleasure, and Sam Harris treats the well-being of conscious creatures?"
Desire fulfillment has no value.
Well, it could have value if the right set of conditions is met, but it need not have any value at all. If desire fulfillment has value, then it has value in virtue of the same types of relationships that give value to rocks, paintings, movies, and everything else. It must stand in a particular relationship to reasons for action that exist (desires).
I admit that, for many people, this is a difficult concept to grasp. We are accustomed to thinking about theories in which the author proposes some entity and says that this is the good. This is the thing to which all value adheres. Therefore, it is easy and comfortable to put any new theory one encounters into that model. However, in this case, it is a mistake. Desirism does not talk about maximizing some entity called 'desire fulfillment'. It talks about making or keeping true those propositions that are the objects of our desires.
Okay, let's take a little closer look at what this means.
Let us assume that we have an agent A who has a choice to make between two possible future states. A has one desire - a desire that P. For our example, P = world W is left in a pristine and undisturbed state. In future state S1, A exists and W is left in a pristine and undisturbed state. In future state S2, A does not exist and W is left in a pristine and undisturbed state.
A has no particular reason to choose either world over the other. In fact, he has no basis on which to make a choice. In both possible future states P is true, so his desire is fulfilled. So, both possible future states are equally valuable to A.
We - appealing to our own desires and even to the desires we want to promote in our community - will have a disposition to favor S1 over S2. We may 'feel' as if S1 is the better option. However, that is based solely on the fact that S1 better fulfills our desires. It has nothing to do with how the two states of affairs relate to A's desires. A, who has only this one desire, has no reason to choose S1 over S2. To A, both possible worlds have equal value.
We can imagine cases in which A's presence has instrumental value. We can imagine that A's presence is required to keep other people from disturbing W. However, in these types of cases, we are no longer talking about cases in which A is choosing between S1 and S2 where P is true in both cases. We are talking about cases in which the agent is choosing between S1 (I am here and am keeping W pristine and untouched) and S2 (I am not here and others have invaded W).
It is perfectly consistent with the theory to hold that, if these were the options, S1 would be more valuable than S2.
Now, if I were treating desire fulfillment the way Bentham treated pleasure, or Mill treated happiness, or Harris treats the well-being of conscious creatures, I would have to say that S1 has more value than S2. S1 contains desire fulfillment, while S2 does not. Recall that desire fulfillment is a state in which an agent has a desire that P, there is a state of affairs S, and P is true in S. S1 is the only future state in which there is an agent who has a desire that P. So, S1 is the only state that contains desire fulfillment.
However, desire fulfillment is not what has value. For an agent with a desire that P, states of affairs in which P is true have value. For an agent with one desire - a desire that P - he has reason to bring about S if and only if P is true in S. P is true in both S1 and S2, so the agent has no basis for making any type of choice between them.
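The relation described above can be put in a short illustrative sketch. This is my own formalization for this post, not anything from the desirism literature; all the names (Agent, State, and so on) are invented for illustration. The point it captures is that value is a relation between a state of affairs and a desire that P, so an agent whose only desire is that P has no basis for choosing between two states in which P is equally true:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class State:
    """A possible future state, described by the propositions true in it."""
    name: str
    true_propositions: frozenset

@dataclass(frozen=True)
class Agent:
    """An agent characterized by the propositions P it has a desire that P toward."""
    name: str
    desires: frozenset

def fulfilled(agent, state):
    """The agent's desires that P such that P is true in the state."""
    return agent.desires & state.true_propositions

def has_reason_to_prefer(agent, s1, s2):
    """An agent has reason to prefer s1 over s2 only if s1 makes true
    more of the propositions the agent desires than s2 does."""
    return len(fulfilled(agent, s1)) > len(fulfilled(agent, s2))

P = "W is left in a pristine and undisturbed state"

# A's only desire is a desire that P.
A = Agent("A", frozenset({P}))

# In S1, A exists and P is true; in S2, A does not exist and P is still true.
S1 = State("S1", frozenset({P, "A exists"}))
S2 = State("S2", frozenset({P}))

# P is true in both states, so A has no basis for choosing between them.
print(has_reason_to_prefer(A, S1, S2))  # False
print(has_reason_to_prefer(A, S2, S1))  # False
```

Note that nothing in the sketch counts "desire fulfillment" itself as a value-bearer; only the truth of the desired propositions matters.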
In order to arrive at the conclusion that S1 has more value than S2 we must introduce a second desire. Let us introduce another agent, B. B has a desire that Q where Q = "desire fulfillment exists". In this case, B has reason to choose S1 over S2, because Q is true in S1, but not in S2.
B has reason to try to persuade A to choose S1. Yet, given the assumptions we have made in this example, B is going to have a hard time doing this. He cannot bribe A. The only thing A cares about is that the world is left in a pristine state and that is going to happen regardless of whether A chooses S1 or S2.
B does have the option of threatening A. However, the only threat that holds any promise of working is for B to say to A, "If you choose S2, then I will go and stomp all over W. I will ensure that W is not left in a pristine state."
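B's situation can be sketched in the same illustrative spirit - again, my own invented formalization, not a standard one. The proposition Q = "desire fulfillment exists" is true in S1 (where A, who desires that P, exists in a state where P is true) and false in S2, so B has reason to prefer S1 while A remains indifferent:

```python
P = "W is left in a pristine and undisturbed state"
Q = "desire fulfillment exists"

# In S1, an agent with a desire that P exists and P is true, so Q is true.
# In S2, no desiring agent exists, so Q is false there.
S1_props = frozenset({P, "A exists", Q})
S2_props = frozenset({P})

def prefers(desires, props_1, props_2):
    """An agent has reason to prefer the first state iff it makes true
    more of the propositions the agent desires than the second does."""
    return len(desires & props_1) > len(desires & props_2)

# B desires that Q; A desires only that P.
print(prefers(frozenset({Q}), S1_props, S2_props))  # True: Q is true only in S1
print(prefers(frozenset({P}), S1_props, S2_props))  # False: A is still indifferent
```

This makes B's predicament concrete: B's reason to favor S1 gives A no reason at all, which is why B's only leverage over A is a threat against P itself.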
So, I emphatically deny that desirism states that desire fulfillment has any kind of value. The only way that anything - any state of affairs S - can have value is to the extent that P is true in S and there exists a desire that P. Even here, that value only motivates the person who has the desire.
Desire fulfillment is not an exception to this principle. It is one of the things that has value only to the degree that P is true in a state of desire fulfillment, and there exists a desire that P.
(Note: S can also have instrumental value if S is able to help bring about T, and P is true in T, and there exists a desire that P.)
So, while other theorists may say that we are concerned with eudemonia, or pleasure, or happiness, or preference satisfaction, or the well-being of conscious creatures, or even desire fulfillment, I deny all of these possibilities.
What we are really interested in is making or keeping true the propositions P that are the objects of our "desires that P". That is what we are interested in. And because we are not governed by any single desire, it is not the case that there is any single "thing" that is the measure of all value that we can then hope to maximize.