Thursday, January 17, 2019

Revising Gibbard's Plans

This post is a reworking of an earlier post in which I commented on the metaethical theory of Allan Gibbard. The rewriting is significant enough that I am opting to post it separately.

Allan Gibbard seeks to explain “ought” in terms of the realization or expression of a plan, and then uses this to explain how disputes can be consistent with a non-cognitivist (or quasi-realist) account of value.

He has us consider Jack, who has an interest in drawing water from a well at the top of a hill. However, there is a chance of falling and breaking his crown. Ought he to try for the water? In Gibbard’s discussion, one observer (Agent1) says that Jack should - the need for the water makes it worth the risk. Another observer (Agent2) says that Jack should not - better safe than sorry. They disagree about something. Gibbard tells us that they disagree about plans or, more precisely, about how fetching the water fits into a plan. This, he tells us, is the type of disagreement we find in the case of moral disagreement.

I see plans as the paradigm of means-ends reasoning. If you want to make an apple pie, follow a recipe - a plan. A plan provides a course of action that, when followed, realizes some end or goal. The value of the plan depends importantly on the value of the end or goal. However, plans do not give ends their value - that is a separate question. The apple pie recipe tells me how to make an apple pie, but it does not give me a reason to make one. A plan to win the state fair blue ribbon for best apple pie may give me a reason to make an apple pie, but not a reason to seek the ribbon. In all cases, there seems to be something just out of reach of the plan’s “ought” - the reason for the plan itself.

Let us admit that we are creatures that plan. Jack creates a simulation. In this simulation he places a character, Sim Jack, who has Jack’s beliefs and Jack’s goals or ends. Sim Jack’s world also runs by the same laws of nature as the real world. We may imagine Jack running these simulations with different action-options, trying to determine which one will realize more of his ends, and using the results to create a plan. Note that this takes a lot of time and effort, so Jack will typically rely on shortcuts that are less reliable but take less time and energy.
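To fix ideas, here is a minimal Python sketch of what such a simulation might look like. All of it is my own illustration rather than anything in Gibbard: the scoring rule, the toy probabilities, and names like run_simulation and best_action are assumptions made for the example.

```python
from dataclasses import dataclass

@dataclass
class SimJack:
    beliefs: dict  # Sim Jack's model of the world, e.g. {"risk_of_fall": 0.2}
    ends: dict     # weighted ends, e.g. {"water": 1.0, "safety": 0.8}

def run_simulation(sim: SimJack, world: dict, action: str) -> float:
    """Run one action-option through a toy world model and score how much
    of Sim Jack's ends it realizes. The world argument, not the beliefs,
    determines what actually happens in the simulation."""
    if action == "fetch_water":
        p_fall = world["risk_of_fall"]
        return ((1 - p_fall) * sim.ends.get("water", 0.0)
                - p_fall * sim.ends.get("safety", 0.0))
    if action == "stay_home":
        return 0.0  # no water gained, but no risk of a broken crown
    raise ValueError(f"unknown action: {action}")

def best_action(sim: SimJack, world: dict, options: list) -> str:
    """Jack's deliberation: simulate each action-option and pick the one
    that realizes more of his ends."""
    return max(options, key=lambda a: run_simulation(sim, world, a))

# Jack runs his own simulation under his own beliefs about the world.
jack = SimJack(beliefs={"risk_of_fall": 0.2}, ends={"water": 1.0, "safety": 0.8})
print(best_action(jack, world=jack.beliefs, options=["fetch_water", "stay_home"]))
# -> fetch_water with these numbers; raise risk_of_fall past ~0.56 and it flips
```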

Agent1 and Agent2 also have the ability to construct their own versions of Sim Jack. For Agent1, for example, Sim Jack is Sim Agent1 given Jack’s beliefs and Jack’s ends. Agent2 can do the same. Once constructed, each can run Sim Jack through their iPhone simulations, testing different actions. Our Agents can also test the simulations themselves against historical data, checking whether a simulation correctly predicts historical results. In doing this, note that one or both simulations can be mistaken - the agents can be wrong. In other words, there are points here of genuine disagreement.
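Continuing the sketch above, the testing step might look like this; the record format and the accuracy measure are my own assumptions:

```python
def validate(sim: SimJack, history: list) -> float:
    """history: a list of (world, options, action_actually_taken) records.
    Returns the fraction of cases in which the simulation predicts the
    action that was in fact taken. If Agent1's and Agent2's Sim Jacks
    score differently here, at least one of them is factually mistaken -
    a point of genuine disagreement."""
    hits = sum(1 for world, options, taken in history
               if best_action(sim, world, options) == taken)
    return hits / len(history)
```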

Gibbard may object to the very idea of these simulations predicting Jack’s behavior. He seems to suggest that Jack has a capacity to make choices that go outside the laws of nature. He states that science has always seemed to fall short of capturing what is “exceptional” about humans, and this seems to include something that goes into our “plans”. He also wrote, “Moore thought that moral facts somehow lie outside the world that empirical science can study. We can broaden this to a claim about the space of reasons as a whole, which, we can say, lies outside the space of causes.” We may be forced to go in that direction, but I do not think it should be our first option. We should at least see what we can do without taking such an extreme step.

Assume Agent1 runs Sim Jack using an action that Jack did not think of and discovers it will realize more of Jack’s own ends. Is there an English sentence he can use to report this to Jack? I would recommend something like, “Jack, you should try this.” In saying this, Agent1 is reporting a fact about the relationship between “this” action and Jack’s ends that is true (or not) regardless of what Agent1 believes or endorses. “Should” is being used to report relations between actions and ends, though not necessarily the ends of the speaker.
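On this reading, the truth condition of the report can be stated without mentioning the speaker at all. As a sketch, reusing the toy code above (the name should is mine):

```python
def should(sim: SimJack, world: dict, options: list, action: str) -> bool:
    """True iff the action does at least as well, by JACK's ends, as every
    alternative tested. Nothing on the right-hand side mentions the
    speaker's ends or endorsements, so the claim is true (or not)
    regardless of what Agent1 believes or endorses."""
    scores = [run_simulation(sim, world, a) for a in options]
    return run_simulation(sim, world, action) >= max(scores)
```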

Once our Agents have built and tested their simulations, they can then see what happens when they adjust certain variables. The variables I am interested in are: (1) the world, (2) Jack’s beliefs, and (3) Jack’s ends.

Gibbard considers why we would contemplate plans from another person’s perspective and appears to settle on the answer that it is a kind of useful play - like reading fiction - helpful in improving our ability to make our own plans. But these simulations are not just useful games. We use them to predict how others will act. We have reason to care about whether others will interfere with or help realize our own ends. We can also use the simulations to determine how we may influence whether their actions realize or thwart our ends.

Examples that involve altering the world variables include applying physical restraints (locks, imprisonment) and offering incentives or threatening punishments. Agents may also be interested in the case where the world actually corresponds to Jack’s beliefs. Jack assumes this is the case when he runs his own simulations, but our Agents may recognize that some of Jack’s beliefs are mistaken. They can still run their own versions of Sim Jack under the assumption that Jack is right and see the results.

Examples that involve changing the belief variables include asking, “Which action would best realize Jack’s ends if his beliefs were true - if they accurately described the world?” Here, instead of changing the world variables to agree with Jack’s beliefs, our hypothetical Agents change the belief variables to match the world. This may be mistaken for the “informed desire” approach. However, I side with Hume in holding that a change in beliefs does not imply a change in ends. Our ends come from evolution and biology (aversion to pain, desire for sex, hunger, thirst, concern for one’s offspring, comfort), activation of the mesolimbic pathway, drugs, and physical change - e.g., having a railroad tamping rod driven through one’s prefrontal cortex in a railway construction accident. Agents may also have an interest in running Sim Jack through iterations where they change the belief variables to beliefs that are reasonable given Jack’s evidence. An epistemologist can help us with this task.
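In the toy code, the adjustments discussed in this and the previous paragraph differ only in which variable is corrected to match the other; the function names are mine:

```python
def as_if_jack_were_right(sim: SimJack, options: list) -> str:
    """The previous paragraph's case: set the world variables to match
    Jack's beliefs and see the results."""
    return best_action(sim, world=sim.beliefs, options=options)

def if_beliefs_were_true(sim: SimJack, actual_world: dict, options: list) -> str:
    """This paragraph's case: correct the belief variables to match the
    world. Following Hume, the ends are copied over unchanged - fixing
    the beliefs does not, by itself, change what Jack wants."""
    corrected = SimJack(beliefs=dict(actual_world), ends=sim.ends)
    return best_action(corrected, world=actual_world, options=options)
```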

Ends may be immune to reason. However, they are not immune to praise and condemnation (and other forms of reward and punishment). Consequently, our Agents may want to run Sim Jack through iterations that adjust Jack’s ends to those that are within people’s power to bring about using praise and condemnation. By comparing the results to the ends of other people, we learn something about the ends that people have reason to promote using praise and condemnation. These simulations would also need to include relevant real-world facts; for example, that ends persist and will influence a large number of actions. A particular end (e.g., to keep promises) might produce some bad consequences in a given instance and still be an end that people generally have reason to promote through praise (of those who keep promises) and condemnation (of those who do not).
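A final sketch along the same lines, with illustrative inputs: because ends persist and influence many actions, a candidate end is scored across many situations by its effects on other people’s ends, not in a single case:

```python
def promotion_value(candidate_ends: dict, situations: list, options: list) -> float:
    """situations: a list of (world, impact_on_others) pairs, where
    impact_on_others maps the action chosen to its effect on other
    people's ends. A candidate end (e.g., keeping promises) can score
    badly in one situation yet total well across many; a high total
    suggests an end people generally have reason to promote through
    praise and condemnation."""
    total = 0.0
    for world, impact_on_others in situations:
        sim = SimJack(beliefs=world, ends=candidate_ends)
        chosen = best_action(sim, world=world, options=options)
        total += impact_on_others(chosen)
    return total
```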

People who run simulations under these terms can have genuine disputes on a number of grounds. For example, they may disagree about the ends that people generally have reasons to promote through praise and condemnation. They may also disagree over what a person with such ends would do in a given circumstance. These are genuine disputes over matters of fact – not the pseudo-disputes that Gibbard generates.

Gibbard might still object to this on the grounds that it implies motivational externalism. Simulations that agents run based on ends other than their own only contingently reveal facts that will motivate those agents. However, that issue will have to be addressed elsewhere.
