363 days until I will be attending my first graduate school class.
I finally managed to get to a presentation at the University of Colorado: Prof. Neil Sinhababu (National University of Singapore), “The Epistemic Argument for Hedonism.”
In addition to the speaker, there were 12 to 14 other attendees – philosophers and graduate students from the Department of Philosophy. They all seemed to know each other. I wonder what they thought of this stranger in their midst. I did not get a chance to impress them with my deep, thought-provoking Socratic questioning. I did not say a word – as is my custom. I had a question ready to ask. However, before I got up the nerve to ask it, one of the professors asked the same question – and most of the others judged it to be a good criticism.
All in all, I did not feel as if I was behind the curve.
I ended the night by sending an email to Dr. Sinhababu with my comments on his presentation – which I will also post here.
In fact, the first half of his talk concerned an issue I was questioned about a few posts back.
I do not like or trust the use of moral intuitions in moral philosophy.
Moral intuitions merely describe the prejudices of a culture. Thousands of years of slavery did not bother the sensibilities of whole cultures. Many morally sensitive people failed to intuit the status of women as persons. Moral intuitions still ignore the suffering of animals.
I have a specific event that I recall that sparked my skepticism of moral sense and moral intuitions. A friend of mine and I were collecting signatures for a ballot initiative in front of a grocery store. We both liked political discussion, and got into a discussion with what may be politely referred to as a “white nationalist”. He was against interracial marriage – though he denied being a racist. When he saw a mixed-race couple leaving the grocery store, he said, "See, that's the kind of thing I am talking about."
I could tell that he could just "see" the wrongness in that relationship - it was as obvious to him as the color of the man's shirt. From this – what was actually an emotional reaction – he drew the lesson that he was perceiving wrongness and that any properly functioning person looking at the same sight would see the same wrongness. He expected us to just see the wrongness as well.
What he saw was a projection of his own prejudices and emotions. He did not see wrongness in the interracial relationship because there was no wrongness to see.
This is true of moral intuitions and moral perception generally – they are nothing more than the projection of one’s own emotional reactions, mistaken for a wrongness in the world that is there to be perceived.
A few posts back I used these arguments against the claim that we have an evolved “moral grammar” – a sense of right and wrong that evolution has put into our brains.
I objected that evolution certainly has the power to give us likes and dislikes – we like fattening food, sex, and care for our young, and dislike the smell of a rotting corpse, pain, and temperatures outside of a comfortable range. This is enough to guide our behavior towards evolutionary fitness. To claim that we have a moral sense requires something much more than this, and this something more is both evolutionarily unnecessary and philosophically unjustifiable.
In short, evolution can perhaps explain some human altruism, but it cannot be used to generate the conclusion that altruism is good. Evolution can also explain some of our dispositions towards tribal prejudices, rape, and gluttony. However, to find the goodness and badness of these things, we have to look outside of evolution.
In response, I received an anonymous comment concerning the value of moral intuitions.
From what I can tell, the majority of philosophers begin with some "common sense" moral intuitions, and try to develop a theory which accounts for and explains our most plausible or obvious moral intuitions whilst not conflicting with other obvious intuitions. In other words, they offer an explanation of "what makes X right/wrong", and argue whether this explanation is correct or incorrect based on conceptual analysis with respect to some "obviously justified moral beliefs".
I agree that this is what a majority of philosophers do - but they are wrong to do so.
This disposition among philosophers has only increased with John Rawls' A Theory of Justice, where he described a system of moral justification he called "reflective equilibrium." This is a type of coherentist rationality where a thinker goes from specific moral judgments to universal principles and from them back to specific judgments measured by the use of those principles. According to this system, we are to tweak the principles and the specific judgments until we have a coherent whole.
So, what is wrong with this?
The general problem is that we cannot limit this coherence to moral principles and specific moral judgments. There is a broader set of facts with which these must also cohere.
Among these facts is the one already mentioned - that a great many people at different times, trusting their moral intuitions, get substantially different results. Other facts concern the role of beliefs and desires in intentional behavior and the non-existence of a god or any type of objective intrinsic prescriptivity. When we examine moral intuitions in the light of this broader coherence, we see that they are, as I said above, nothing more than the prejudices of a given culture or individual.
In “The Epistemic Argument for Hedonism,” Neil Sinhababu devotes the first part of his paper to what he calls an argument from disagreement for moral skepticism. It is a very well laid-out argument.
To be accurate, Sinhababu is not a moral skeptic. He is a moral realist who, like René Descartes, begins by casting doubt on our existing beliefs – in this case, an argument that we cannot trust our moral intuitions.
His argument from disagreement must be distinguished from John Mackie’s in Ethics: Inventing Right and Wrong. Mackie argued that because there is moral disagreement, there are no objective moral properties. This argument is problematic in the same way that, “There is disagreement concerning the age of the Earth so there is no objective right answer” would be problematic.
Sinhababu argues that because there is widespread disagreement, we cannot trust our beliefs. When people disagree on things, then some of them must be wrong. If there is a great deal of disagreement, then there is a great deal of error. If we look across cultures and across time, we find a great deal of moral disagreement. Consequently, there must be a great deal of moral error.
To show the unreliability of our current systems, he uses the same types of points that I made above - the bad moral intuitions people have had.
Human history offers similarly striking examples of disagreement on a variety of topics. These include sexual morality; the treatment of animals; the treatment of other ethnicities, families, and social classes; the consumption of intoxicating substances; whether and how one may take vengeance; slavery; whether public celebrations are acceptable; and gender roles. Moral obligations to commit genocide were accepted not only by some 20th century Germans, but by much of the ancient world, including the culture that gave us the Old Testament. One can only view the human past and much of the present with horror at the depth of human moral error and the harm that has resulted.
By analogy: empirical observation supports the claim that each person has an appendix, so each person may conclude that he, too, has an appendix. Similarly, the moral disagreements above show that people generally have unreliable procedures for justifying moral beliefs. From this, each person should conclude, "I, too, probably have an unreliable method for justifying moral beliefs."
The world is filled with moral error and, unless we have some particularly good reason to believe otherwise, each of us should accept the possibility that we could be the ones who are wrong.
This is a question I often ask myself. "How can you possibly think that you have the right answer when there are people smarter than you who disagree?"
This is what started me on my quest to find a more reliable foundation for moral claims.
However, in finding that foundation, moral intuitions are not to be trusted. Given that so many moral intuitions are in error – and my own may be among them – to rely on them is to rely on beliefs, many of which are probably false. That does not serve as a good foundation.
Can we have a reliable way of discovering moral facts?
Well, science seems to be reliable. I suggest we try looking there.