Choosing is about making plans, according to Allan Gibbard.
G.E. Moore's Mistake
As has been the case with other non-cognitivists, Gibbard starts with Moore's mistake. He did not see it as a mistake, of course; that is my characterization. However, few mistakes in philosophy have sent so many people chasing off in the wrong direction after things that are not there.
The mistake is to think that Moore's "open question argument" supports the thesis that morality must be outside of science.
Let me quickly interrupt this posting to repeat my standard best argument for thinking that morality must be a part of the natural world. It has the ability to change the direction and velocity of carbon atoms. When human beings act, carbon atoms move in a particular direction at a particular time. If a "reason" has anything to do with that action, then that "reason" must have the ability to cause carbon atoms to move in a different direction or at a different velocity. It must have the ability to cause a person to tell the truth, repay a debt, punish a villain, or aid a person in need. If it has no ability to change what is happening in the world, then why even think that there is such a thing? If, on the other hand, it can influence behavior in the real world - even if we are only talking about the power to produce a particular thought such as, "I have a reason to do X" - then it must be something in the material world.
Some may accuse me of scientism. I cannot answer the charge, because I have never understood what "scientism" is. I will grant that, if all else has been tried and there is still reason to believe that there are causes of motion that lie outside of scientific study, then I would agree that we must accept their existence and go beyond scientific study. But if we do not need to go there, we have good reason not to.
With those caveats in mind, let us look at what Gibbard says of Moore's open question argument. The open question can be used against any theory that tries to reduce value to a natural property, and it always generates an open question. If somebody says, "goodness is pleasure", Moore would respond that goodness cannot be pleasure because "X is pleasurable, but is it good?" would be an open question. Its answer is not obvious in the way the answer to "X is good, but is it good?" would be obvious.
Gibbard follows other non-cognitivists in thinking that this fact gives us reason to think that value properties cannot be reduced to physical properties (in spite of the fact that they have the power to cause physical substances to move). As Gibbard puts it: "Moore thought that moral facts somehow lie outside the world that empirical science can study. We can broaden this to a claim about the space of reasons as a whole, which, we can say, lies outside of the space of causes."
As another aside, desirism does not have this problem. Desirism says, "good" = "is such as to fulfill the desires in question." When somebody such as Moore observes that "X is such as to fulfill the desires in question, but is it good?" is an open question, desirism answers: of course it is an open question. We do not know whether the "desires in question" in the first clause are the same as the "desires in question" packed into "good". It is quite possible for something to be "such as to fulfill the desires of the agent" without being "such as to fulfill the desires that people generally have reason to promote universally." Until we can be certain that the "desires in question" are the same on both sides, the question remains open.
In a sense, expressivism does not have this problem either. If moral claims are the expression of an attitude, the attitude is the cause of the expression and, in addition, may be said to be the cause of the action - the agent doing that which is judged to be good or ought to be done. But then this raises the question of whether (or why) one should start with Moore at all. If reasons are tied to attitudes, and attitudes are natural entities, then reasons are not non-natural entities. Moore seems to be a poor place to start regardless of the road one is travelling down.
Disagreement
Gibbard's focus is on how to handle disagreement. He uses the example of Jack, going up the hill to fetch a pail of water, falling down, and breaking his crown. He looks at the question of whether Jack ought to have gone up the hill. He imagines two people coming up with different answers, and asks how they can be in disagreement. Even with both of our investigators having access to all of the relevant facts, it is still possible for one to come to the conclusion that going up the hill was worth the risk, and the other to deny it. This is the sense in which the "ought" is outside of or beyond scientific investigation. This is the sense, according to Gibbard, in which I can think that Jack still should have gone up the hill, and you might think he should not have.
He denies that A.J. Ayer or C.L. Stevenson has a way of accounting for disagreement. If I say Jack ought to go up the hill, then I am expressing an attitude in favor of Jack going up the hill. If you disagree with me, then you must be saying that I am not in favor of Jack going up the hill. Yet, that does not seem to capture your claim that Jack ought not go up the hill.
I wish to offer a different account of this disagreement.
You and I both run a "Jack simulation" in our brains. We input Jack's beliefs and his ends (desires) and we run the simulation to determine if simulation-Jack goes up the hill. We can then test our simulation for accuracy by measuring its results against the real-life results of Jack's behavior. If our Jack simulations come up with different answers, then we disagree. But there is a right answer, made right by Jack's actions. If both of our Jack simulations are accurate, then we will both predict that Jack will go up the hill, or that he will not, and Jack will do what we predict he will do.
Predicting what Jack will do is not the same as making a claim about what Jack should do. Towards this end, we can run the simulation and see if the actions will realize Jack's goals or ends. Since the laws of nature are the same within both of our simulations, we should still get the same results. If the results are different, one of our simulations needs adjusting. If Jack's ends are not realized, then Jack ought not to go up the hill. If we replace Jack's beliefs with true beliefs, Jack himself will agree that he ought not to go up the hill - not if he is just going to fall and break his crown. The important point here is that there is still one answer, and whoever gets the wrong answer needs to adjust their Jack simulation. There is, then, an answer to what Jack should do, given his ends, and that is whatever would realize his goals in a universe where he had true and accurate beliefs about the relevant facts.
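To make the structure of this proposal explicit, here is a minimal sketch in Python. The toy world and everything in it (the Agent class, simulate, realizes_ends, the single "hill_is_safe" fact) are my own hypothetical illustration of the idea, not anything drawn from Gibbard:

    # A toy "Jack simulation". All names here are hypothetical illustrations.
    from dataclasses import dataclass

    @dataclass
    class Agent:
        beliefs: dict  # e.g. {"hill_is_safe": True}
        ends: set      # e.g. {"fetch_water"}

    def simulate(agent: Agent) -> str:
        # Predict what the agent does, given its beliefs and its ends.
        if "fetch_water" in agent.ends and agent.beliefs.get("hill_is_safe"):
            return "go_up_hill"
        return "stay_home"

    def realizes_ends(action: str, facts: dict) -> bool:
        # Test against the world, not the agent's beliefs: does the action
        # actually fetch the water without a broken crown?
        return action == "go_up_hill" and facts["hill_is_safe"]

    jack = Agent(beliefs={"hill_is_safe": True}, ends={"fetch_water"})
    facts = {"hill_is_safe": False}  # in fact, Jack will fall

    will_do = simulate(jack)                # "go_up_hill" - the prediction
    print(realizes_ends(will_do, facts))    # False - his ends go unrealized

    # What Jack SHOULD do: true beliefs substituted in, ends left alone.
    informed = Agent(beliefs=facts, ends=jack.ends)
    print(simulate(informed))               # "stay_home"

Notice that only the beliefs were swapped out; Jack's ends were left untouched.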
This is not an informed desire account. An informed desire account assumes that if Jack had more accurate beliefs, he would have different (better informed) desires. Here, we are denying this. Giving Jack more accurate beliefs does not change his desires at all. It simply allows him to make plans that more successfully realize those desires. What Jack "should do" in this situation is what a person with accurate beliefs but the same ends or goals would do - what will, in fact, realize those ends.
We could also run the Jack simulation by substituting different ends. However, in this case, we are no longer asking what Jack should do. We are asking what somebody else with different ends should do if that person were in Jack's place. A common example would be to ask what we would do if we were in Jack's place. In this case, we remove Jack's ends and insert our own ends in their place. If you and I both do this, then we should not be surprised to discover that Jack's situation plus my ends yields different results from Jack's situation plus your ends. This does not count as disagreement.
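Continuing the same sketch (this snippet assumes the definitions from the block above), substituting my ends for Jack's changes the output without either simulation being mistaken. Suppose, hypothetically, that my overriding end is staying safe rather than fetching water:

    # Jack's situation plus MY ends: a different question, not a disagreement.
    me_in_jacks_place = Agent(beliefs=jack.beliefs, ends={"stay_safe"})
    print(simulate(me_in_jacks_place))  # "stay_home" - different inputs, different output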
We can also run the Jack simulation using ends other than Jack's, yours, or my actual ends. We can evaluate the ends themselves and determine whether they have any merit. There is a limited sense in which our ends are under our voluntary control. We cannot decide, at a given moment, to have or lack a particular end. However, we can still decide whether we ought to cultivate (or not) certain traits. This is the same type of control that we have over our weight. I cannot decide, on a moment's notice, to weigh 10 pounds less than I do now. However, this inability to instantly choose to weigh less does not prevent me from making sense of the claim, "I ought to lose a few pounds", or from creating a plan to realize that end. Similarly, we can wonder whether we "ought to desire that P" and see if we can make a plan to cultivate a desire that P.
In one type of case, this is easy to do. For example, a person can choose whether or not to have a desire to smoke. Somebody wishing to acquire a desire to smoke can typically do so by smoking and allowing the nicotine to impact his brain in such a way as to create a strong desire to smoke. He can choose to avoid having such a desire by not smoking. Other desires are not so easily acquired or extinguished, but there are still things an agent can do to acquire, strengthen, weaken, or extinguish many of them. A person who answers the question, "Ought I to desire to smoke?" with "No" can take steps to help ensure that she does not acquire a desire to smoke.
We can ask similar questions about, for example, a desire to deal honestly with others, an aversion to taking property without consent, a desire to help those in desperate need, and an aversion to breaking promises. We can count these among the desires that agents ought to have. Here, we should consider not only the ends that Jack has reason to cultivate in himself, but the ends that others have their own reasons to cultivate in Jack (and everybody else, perhaps) such as the aversion to breaking promises.
Now, we have the option of running the Jack simulation where - in the relevant cases - we replace Jack's ends with the ends that Jack has reason to cultivate in himself and that people generally have reason to cultivate universally. What Jack would do in that case, given his beliefs, gives us another answer to the question of what Jack should do. We can also ask what Jack, with good desires, lacking bad desires, and having true and relevant beliefs, would have done.
When we ask what the Jack with good desires and lacking bad desires would do, we may disagree because we may disagree on what the good and bad desires are. This is still a case of genuine disagreement. We are asking if this is a desire that people generally have many and strong reasons to promote universally, and we may find ourselves having a difference of opinion on that matter.
What I have done here is create a whole set of "should" or "ought" statements. I see no problem with that. "Ought" is an ambiguous term. We get these different answers given the different inputs. Any time two people plug the same belief and desire inputs into the Jack simulation, they should get the same response. If they do not, one of them is mistaken. If, instead, they each plug in different belief and desire inputs, they should not be surprised if they get different answers. That is not a sign of disagreement. Disagreement comes from getting different results using the same beliefs and desires, or in determining which beliefs and desires meet a particular selection criterion.
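Continuing the toy sketch one last time, the whole family of "ought" statements can be pictured as a single function of its inputs (the ought wrapper is, again, my own hypothetical illustration, assuming the definitions above):

    # One "ought" per choice of inputs: what an agent with these beliefs
    # and these ends would do in the simulation.
    def ought(beliefs: dict, ends: set) -> str:
        return simulate(Agent(beliefs=beliefs, ends=ends))

    # Same inputs must yield the same answer; if yours differs from mine,
    # one of our simulations needs adjusting.
    assert ought({"hill_is_safe": False}, {"fetch_water"}) == "stay_home"

    # Different inputs - Jack's actual ends versus the ends people generally
    # have reason to promote - may yield different answers, and that, by
    # itself, is not disagreement.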
In this way, genuine dispute is possible.