From three facts, I will derive a moral ought.
In doing so, I will derive “ought” from “is”. However, I will not appeal to free will, intrinsic values, categorical imperatives, impartial observers, social contracts, or any similar fictitious entity. Nor will I sneak values in the back door cloaked in a value-laden term such as “flourishing” or “well-being”.
Specifically, assume that there is (1) a community of intentional agents where (2) each has an aversion to its own pain, and (3) each has a “reward system”.
An intentional agent is one that acts on beliefs and desires (intentional states).
A reward system is a system that processes rewards and punishments, including praise and condemnation, and uses them to create new desires and aversions, and to modify (redirect, strengthen, weaken) existing desires and aversions.
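To make the idea concrete, here is a minimal sketch of what a reward system of this kind might look like as a toy model. Everything in it - the Agent class, the attitude names, the numbers, the simple additive update - is my own illustrative assumption, not something the argument depends on.

```python
# Toy model of a "reward system": praise and condemnation create and
# adjust desires and aversions. Purely illustrative; the names, numbers,
# and update rule are assumptions made for this sketch.

class Agent:
    def __init__(self):
        # Attitudes toward states of affairs: positive values are desires,
        # negative values are aversions, magnitude is strength.
        self.attitudes = {"own pain": -10.0}  # built-in aversion to own pain

    def reward_system(self, behavior, feedback):
        """Praise (positive feedback) strengthens the disposition toward a
        behavior; condemnation (negative feedback) builds an aversion to it."""
        self.attitudes[behavior] = self.attitudes.get(behavior, 0.0) + feedback


agent = Agent()
# Others condemn the agent whenever it causes pain to them.
for _ in range(5):
    agent.reward_system("causing pain to others", feedback=-2.0)

print(agent.attitudes)
# {'own pain': -10.0, 'causing pain to others': -10.0}
# Condemnation has created a new aversion: an aversion to causing pain.
```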
For purposes of this story, we do not need to know how these beings acquired this aversion to their own pain. Perhaps the creatures first evolved an ability to sense things by touch. Then they evolved a disposition to respond to certain sensations with MAKE IT STOP and MAKE SURE THAT NEVER HAPPENS AGAIN.
Similarly, the story of the origin of the reward system is irrelevant. The "aversion to pain" alone would have only allowed the agents to use means-ends reasoning to avoid future pain. The reward system may have evolved as a more efficient alternative. Without it, the agent would have to constantly remember that bees are a potential source of pain and one can avoid pain by avoiding bees. The reward system instead takes experience with bees and gives the individual an aversion to bees themselves. The individual then simply avoids bees. Avoiding pain is an unintended (but welcome) side effect of an aversion to bees.
However it came about, our intentional agents have an aversion to their own pain and a reward system.
In this initial situation, if an agent had to choose between (a) a scratch on his finger, and (b) the destruction of the world (painless for him, and for him alone), he has a reason to choose not-a (to avoid the scratch on his finger), but no reason to choose not-b (to prevent the destruction of the world, which harms only others). He would choose the destruction of the world over a scratch on his finger.
However, he also has a reason to cause others to have an aversion to causing pain to others. After all, “If I can cause everybody else to have an aversion to causing pain to others, then I can get them to avoid causing pain to me. My own aversion to my own pain gives me a reason to create this aversion in others.”
Our agent not only has a motive, he has a means. He can do this by praising and rewarding those who choose not to cause pain, and punishing and condemning those who cause pain.
These beings might have primitive minds. “Condemning” might consist only in snarling, snapping, slapping, beating one’s chest, baring one’s teeth. “Praising” would involve the sharing of food, grooming, play, and sex. This disposition to respond in these ways to others who cause pain (where those others also have a reward system) might be instinctual, maybe even genetic, given its usefulness in preventing others from causing pain.
Or the creatures might be a bit more intellectually sophisticated, and actually recognize the relationship between snarling and snapping at those who cause pain and the resulting disposition in those others to avoid causing pain, and thus be able to intentionally praise and condemn so as to produce this aversion.
More sophisticated beings with language may invent other ways to express approval and disapproval - to praise and condemn - such as using the terms "good" and "bad", "right" and "wrong", "hero" and "asshole".
But, let's not get ahead of ourselves. At this point we have a collection of individuals with an aversion to their own pain and a reward system. We have a collection of individuals with a reason to cause in others an aversion to causing pain, and a means to do so - by praising those who avoid causing pain and condemning those who do not.
In our hypothetical community, with our initial three conditions in place, we can show that it is also true that (4) everybody has a reason to promote, universally (in all other members of the community), an aversion to causing pain. This is not a new assumption; it is an implication of the assumptions we have already made.
We can soften that condition a little. We can allow that, perhaps, some creatures have no aversion to pain and, thus, do not have a reason to promote in others an aversion to causing pain. Yet, with this weakening, it is still an objective fact of this community that people, generally, have reasons to promote, universally, an aversion to causing pain by means of praise and condemnation, and little reason not to. This is a fact.
So, they adopt the practice of praising those who avoid causing pain to others, and condemning those who do not. As a consequence, they are somewhat effective at promoting a nearly universal aversion to causing pain to others.
We can further allow that complications and complexities mean that different people acquire this new aversion in different strengths. Perhaps it does not work on everybody, and some still lack this aversion to causing pain to others. However, this way of using praise and condemnation is generally effective. We are, after all, dealing with matters of cause and effect - which can be quite complex. These are not matters of logical entailment.
An individual who acquires this aversion to causing pain to others will react differently when confronted with a situation where he must choose between suffering pain himself and causing pain to others. If confronted with a choice between enduring a great deal of pain himself or inflicting a small annoyance on others, he may still choose to inflict the small annoyance on others. However, when the choice is between a scratch on his finger and extreme suffering for others, he now has more and stronger reason to choose to endure the scratch on his finger. His aversion to the scratch is weaker than his newly learned aversion to causing pain to others in these circumstances.
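That weighing can be sketched in the same toy terms. Again, the attitude strengths and the amounts of pain below are invented for the example; the point is only that the choice falls out of the relative strengths of the aversions each option would thwart.

```python
# Toy model of weighing two aversions against each other.
# Strengths and amounts are invented for illustration.

attitudes = {"own pain": -10.0, "causing pain to others": -10.0}

def evaluate(option):
    """Score an option: each aversion counts against the option in
    proportion to how much of the averted thing the option produces."""
    return sum(attitudes[kind] * amount for kind, amount in option.items())

# A scratch on his finger vs. extreme suffering for others.
scratch   = {"own pain": 1,  "causing pain to others": 0}   # score: -10
suffering = {"own pain": 0,  "causing pain to others": 50}  # score: -500
print(max([scratch, suffering], key=evaluate))  # he endures the scratch

# A great deal of pain himself vs. a small annoyance for others.
great_pain = {"own pain": 50, "causing pain to others": 0}  # score: -500
annoyance  = {"own pain": 0,  "causing pain to others": 1}  # score: -10
print(max([great_pain, annoyance], key=evaluate))  # he inflicts the annoyance
```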
If he does choose the suffering of others, people generally have reason to respond with snarling and snapping or other expressions of disapproval. If he chooses the scratch on his finger, they have reasons to respond with smiles and the sharing of food and other expressions of approval.
Now, let's deal with this "is/ought" problem.
If I tell you that you ought to do X, and you ask, "Why?", the only legitimate answer that makes any sense is for me to tell you of a reason to do X. If I cannot tell you of a reason to do X - a reason that actually exists - then you can reject my claim that you ought to do X.
That is to say, "ought to do X" implies "there is a reason for you to do X".
"There is a reason for you to do X" is an "is" statement. Deriving "is" from "is" is not at all problematic. Consequently, deriving "ought" from "is" is not problematic either.
Let's take a look at what David Hume presents as his argument against deriving "ought" from "is":
"For as this ought, or ought not, expresses some new relation or affirmation, 'tis necessary that it should be observed and explained; and at the same time that a reason should be given, for what seems altogether inconceivable, how this new relation can be a deduction from others, which are entirely different from it."
Hume is mistaken; this relation is not entirely different. The "ought" relation is an "is" relation that links an action to reasons for action that exist.
"Ought" is an ambiguous term. I would propose that we have many different meanings of "ought". These different "ought" meanings refer to different relationships between actions and reasons for action that exist.
The three principal meanings of "ought" are the pro tanto ought, the practical ought, and the moral ought. The "ought" I am talking about in the discussion above, "having a reason", is the pro tanto ought. A pro tanto "ought to do X" is a statement that there is at least one reason - perhaps defeasible, and perhaps not a very good reason, but a reason nonetheless - to do X.
So, then, where do we get these "reasons to do X"?
Hume himself tells us where. On Hume's account, if there is a reason to do X, then there is a desire that would be served by doing X. Our agent's aversion to his own pain is his reason to avoid being in pain. This aversion to pain, where others have a reward system, is his reason to praise and condemn others so as to create in them an aversion to causing pain. Once a person has an aversion to causing pain, he has a pro tanto reason to avoid causing pain. A person who has both a reason to avoid his own pain and a reason to avoid causing pain to others, forced to choose between enduring some pain himself and causing some pain to others, will need to look at whether and to what degree each aversion is served by the different options - weighing the service to each aversion against the service to the other, and judging which action best serves those aversions.
We need to carefully distinguish between the concepts, "Agent has a reason to do X" and "There is a reason for Agent to do X". These two statements do not mean the same thing. Yet, moral philosophers often use them interchangeably, generating a great deal of confusion.
The two phrases are different. Consider the phrases "Agent has $10" and "There is $10." Certainly it is possible for "there is $10" to be true while "Agent has $10" is false. This is the case if somebody other than Agent has $10.
Alph and Bet each have an aversion to their own pain. In our initial situation, before any praising or condemning has taken place, Alph has a reason not to cause pain to himself. Alph has no reason to avoid causing pain to Bet. However, there is a reason for Alph not to cause pain to Bet. Bet has that reason. While the reason that Bet alone has will not motivate Alph in any way to avoid causing Bet pain, it will motivate Bet to praise and condemn Alph, thereby creating in Alph a new reason - an aversion to causing pain to others. Once this new reason is created, then Alph will have a reason not to cause pain to Bet.
Now, let us go back to the definition of "ought". To say "You ought to do X" is to say "There is a reason for you to do X." (Recall that I am talking about the pro tanto ought at this point.) "There is a reason for you to do X" does not mean "You have a reason to do X" - so the "ought" might not be in the least bit motivating. That is, it may not motivate you to do X. If the phrase "there is a reason" is referring to reasons other people have, then it is referring to reasons to praise you if you do X, or to condemn you if you do not.
Now, we have an "ought" that refers to what people have reason to praise and condemn.
In other words, while there is an "ought" that refers to what an agent has a reason to do, there is another "ought" that refers to what people generally have reason to praise or condemn an agent for doing (or not doing). Another way to say this is that we have an "ought" that refers, not to what an agent HAS a reason to do, but to what an agent SHOULD HAVE a reason to do.
Returning to our original situation, where agents have an aversion to their own pain and a reward system, an agent has no reason to refrain from causing pain to others. However, the agent ought to have such an aversion. By this, I mean that people generally ought to praise those who refrain from causing pain to others and to condemn those who do not. Which means that people generally have reasons to praise those who refrain from causing pain and to condemn those who do not. Which is true because people generally have desires that would be served by praising those who refrain from causing pain and condemning those who do not. Those desires that would be served are their own aversions to pain.
Now, finally, to say that it is wrong to cause pain to others is to say that a person who had the desires she ought to have (including an aversion to causing pain to others) would not have caused pain to others in those circumstances. It is wrong, and it is a type of wrong that people generally have reason to respond to with condemnation.
That's it then. From (1) a group of intentional agents who (2) each has an aversion to their own pain and (3) a reward system, we get "It is wrong to cause pain to others". There are reasons to praise those who refrain from causing pain, and to condemn (even to punish) those who do not. And there are few (if any) reasons not to do so.
There is no god, no intrinsic values, no categorical imperatives, no social contract, no free will, no impartial observers, no veil of ignorance. There is also no "sneaking values in the back door" by using value-laden terms such as "flourishing" or "well-being". There are simply relationships between states of affairs and desires, where desires provide reasons for action.