A member of the studio audience, Mike Gage, has asked me to address my reasons for objecting to social contract and impartial observer moral theories.
Gage wrote: "You said in one podcast interview that it's problematic because there is no impartial observer or social contract. I argue, though, that it can be grounded in the propositions that would be true if there were such things."
I am going to respond to this objection in two forms - the form in which it appears in Gage's comment, and the form in which it appears in the post that he links to in that comment.
I will take the current form first.
Any argument that has premises that "would be true if there were such things" can only support conclusions that "would be true if there were such things". It cannot support conclusions that are actually true - unless "if there were such things" is changed to "such things are real."
"I am not an only child" would be true in a universe in which I had a brother or a sister. But that does not make it true in a universe in which such things do not exist. In the real world, we still have to look at whether I do or do not, in fact, have any brothers or sisters.
This is my main objection to impartial observer and social contract theories. They start with admittedly false premises. "There is an impartial observer," and "There is a social contract" are false.
Now, on to the second form of the argument. Mike provided me with a link to his post on this matter where he provides a different argument.
Under a contractarian framework, I think we get truth value from reference to a proposition. For example, to say we have reasons to prevent and condemn action x is to say that the following proposition is true: "A perfectly rational being in the original position would have reasons to prevent and condemn action x." What we are really grounding our morality in is rationality itself and we can point to these propositional truths in order to be describing an objectively true fact of the matter.
(See Atheism and Evil Part 2.)
The "Original Position" here is behind John Rawls' veil of ignorance, in which the agent is unaware of the position he will hold in the society whose rules he is evaluating.
This is a different argument, and it invites us to ask the question, "What reasons does this perfectly rational agent have to prevent or condemn action x?"
What is her answer?
"I condemn action X because a perfectly rational agent would condemn action X, and I am a perfectly rational agent, and I condemn action X."
That's not a very satisfying answer.
Look at it this way:
To say that, for a right triangle, the sum of the squares of the two sides equals the square of the hypotenuse is to say that the following proposition is true: "A perfectly rational being in the original position would have reasons to believe that for a right triangle the sum of the squares of the two sides equals the square of the hypotenuse." What we are really grounding our math in is rationality itself, and we can point to these propositional truths in order to be describing an objectively true fact of the matter.
Now let's ask this perfectly rational mathematician why she believes that the sum of the squares of the two sides equals the square of the hypotenuse.
I trust that we would not be satisfied with the answer: "I believe it because a perfectly rational agent would believe it and I am a perfectly rational agent, and I believe it." We would want her to provide her reasons for believing it. Once she does, we can then adopt those reasons as our own, and do away with the perfectly rational mathematician. She was merely a placeholder for whatever reasons she would offer in support of her belief.
So, the perfectly rational agent in Gage's example is also merely a placeholder for whatever reasons she should give for condemning X - reasons that, once we knew, we could adopt as our own.
However, in the case of condemning X, we run into another problem. Is it the case that the reasons she has for condemning X are necessarily reasons that we can adopt as our own?
She may make herself a peanut butter sandwich because she likes peanut butter sandwiches, or refuse a sandwich because she is allergic to peanuts. The fact that she takes a particular action does not justify the conclusion that I should act the same way - not if the reasons she uses are not reasons that I should adopt as my own. The fact that she likes peanut butter sandwiches does not imply that I should adopt a liking for peanut butter sandwiches as my own.
The "veil of ignorance" may be an attempt to deal with this. It makes each decision-maker ignorant of her own desires so that she cannot use them in making a decision. But she is supposed to be aware that such preferences exist and that she might have them.
However, agents act only on the desires they have - not on the desires that they know to exist. I may know of your aversion to pain, but whether that knowledge will motivate me to prevent states of affairs in which you are in pain, or to bring them about, depends on whether I have a current aversion to your being in pain or a desire to see you suffer. Without desires of my own on which to base a decision, I am indifferent to your pain. So, a perfectly rational agent ignorant of her own desires would choose nothing.
However, a more important problem is the unjustified logical leap from what an imaginary agent in an imaginary situation would do to what we should do.
It might be perfectly rational for perfectly rational agents to adopt a particular set of rules behind a veil of ignorance. However, once the veil is lifted, and a flood of new information becomes available, the perfectly rational agent does not simply ignore this information. She uses it to reassess and revise the conclusions she drew while ignorant, and to adopt new conclusions based on new and better information. What is rational in a state of ignorance is often quite irrational in a state of having information.
The fact that it would be rational for me to leave the building in a state in which the fire alarm is going off does not imply that it is rational for me to leave the building at this moment, when the fire alarm is not going off. Even if I were a perfectly rational being with good reason to leave the building when the fire alarm goes off, this would not imply that everybody should leave the building at this moment. These types of inferences simply have no logical validity.
So, not only is the perfectly rational agent a mere placeholder for the reasons she has for believing something; in the case of an action (and condemnation is an action), the reasons she has are not necessarily reasons that we have any reason to adopt as our own. And the actions that an imaginary agent would take in an imaginary world in a state of ignorance do not imply anything about the actions real agents should take in the real world in a state of non-ignorance.
For these reasons, I reject social contract theory.
Now, I want to stress, there are moral facts. The failure of social contract theory does not imply a failure of moral realism. It's just that this particular route to that destination has far too many logical roadblocks. We have to look for another route.
An objective morality requires premises that are true in the real world, and does not try to draw inferences from what is imaginary (perfectly rational agents behind a veil of ignorance) to what is real.
Briefly - the conclusions that I would defend say that we really need to ask our hypothetical perfectly rational person a different question. Without assuming any ignorance, ask her, "What malleable desires do people generally have the most and strongest reason to promote using social forces such as praise and condemnation? And what actions would a person with those desires perform?"

When we ask the hypothetical perfectly rational and fully informed agent this question, the agent is, in fact, a mere placeholder for a set of objective facts. And the reasons she gives for whatever answer she offers are made up entirely of reasons we can then adopt as our own reasons for reaching the same conclusions. This meets the criteria for an objective morality.