"My point, rather, is that evolutionary psychology contributes far more to our understanding of morality than the mere fact that we are intentional agents with reward systems."
David Pinsof made this claim in response to my last post.
It is true, as it turns out - but not in the way he describes it.
My original claim was that multiple intentional agents with malleable ends is all that is required for morality. Wherever we find these properties, we will find (with a few highly technical exceptions) moral facts.
However, the precise moral facts we find there depend on the desires of the agents that make up the community: which desires can be changed, how far they can be changed, the types of activity that cause change, the costs of that activity, and the like.
In other words, if you tell me that there is a planet X with intentional agents having at least some malleable ends, I am not going to claim the ability to deduce every right and duty to be found on Planet X. In fact, I do not think I can name one (non-question-begging) right or duty. Before I answer what duties actually exist, I will need to know some things about the creatures on Planet X.
Evolutionary psychology will have a lot to say that is relevant to the moral facts that do exist on Planet X. It will provide us with a lot of facts useful in determining what those moral facts are.
However, nowhere in deducing the rights and duties on Planet X will I claim that there exists a particular right or duty merely because the people have evolved a disposition to approve of it.
Perhaps they evolved a disposition to kill any being that develops green fur. There may be an evolutionary reason for this fact (green fur was indicative of a parasitic infection, and communities that evolved a disposition to kill those who acquired it were spared the ravages of the disease). But I am not going to go from this to, "Oh, then on Planet X, people who acquire green fur deserve to die."
Here on Earth, we have a disposition towards a strong aversion to pain. Also, some desires are malleable - we mold the desires of others in such a way that they are less disposed to engage in behaviors that result in experiences of pain in others. The aversion to pain provides the motivating reason to mold desires in this way, and the facts of the "reward system" where desires are molded explain how to go about it.
The evolutionary psychologist can fill in a lot of facts about the pain system and the reward system - facts relevant to molding desires in useful ways.
Nobody is going to fully understand the pain system without understanding the fact that we are evolved beings. Nor are we going to have a complete understanding of the methods available for promoting, in others, an aversion to activities that result in pain experiences. In this area, an understanding of evolutionary psychology is useful in determining the specifics of how to go about promoting these aversions.
However, even animals can acquire enough of an understanding to note that growling and snapping at those whose behavior inflicts pain, and rewarding those who cause no pain with such things as food, sex, grooming, and protection, will make pain experiences less common.
Yet, when evolutionary psychologists claim to be able to read moral content directly from evolved dispositions - to explain "People with quality Q deserve to die" entirely from "We have evolved a disposition to kill those with quality Q" (or any other inference in this same family) - they have overstepped their bounds.
You cannot read moral content directly from an evolved disposition.
People do have a habit of using their own likes and dislikes as a measure of moral quality. That is to say, people have a habit of going from "I have a feeling of disapproval towards X" or "I feel justified in inflicting harm on those who do X" to "X is wrong." However, the error the evolutionary psychologist makes is in taking this common mistake and building a whole subfield of study on the assumption that it is legitimate.
Regardless of how often we see people make this leap, it still amounts to nothing more than, "I (evolved a disposition to) like hurting people with property P; therefore they deserve to be harmed." Nothing the evolutionary psychologist tells us will ever successfully complete this logical jump. Everything we find in evolutionary psychology that includes this inference is garbage.
And we find a lot of it.
Evolutionary psychologists might be able to explain the sentiment on the near side of this leap, but they will never be able to justify the step to its far end.
Consequently, evolutionary psychology cannot "account for morality". It can account for certain evolved likes and dislikes that will, in turn, be morally relevant. But the idea of reading moral content directly from evolved preferences is nonsense.
Not only is it nonsense, but - like most religion - it is very dangerous nonsense that will ultimately cause a great deal of unnecessary harm unless it is checked.
4 comments:
Hello
I would like to ask a question of both Alonzo Fyfe and David Pinsof, if they are both reading this.
What problem is morality designed to solve?
This may or may not be a badly formed question. Please feel free to reword it, even if you wish to recast the question without recourse to any moral terminology.
I am curious about your answers - how they overlap, how they may differ, and whether the difference (if any) points to differences in meaning or something more substantial.
@Adil Zeshan
That is a huge, complicated, and currently unsettled question. Here are a few answers that I find particularly compelling but are by no means the only ones proposed:
1) Morality did not evolve to solve one particular problem, but rather evolved to solve multiple problems that are endemic to social life, including reciprocity, collective action, status hierarchy, intergroup conflict, and pathogen transmission. This theory is defended by the psychologist Jonathan Haidt and is fleshed out in his latest book “The Righteous Mind.”
2) Morality evolved to solve the problem of exploitation – that is, the tendency for others to impose costs on oneself or on those with whom one's fitness is interdependent. This view has been fleshed out here: http://www.cep.ucsb.edu/papers/CriminalJustice2010.pdf
3) Morality evolved to solve the problem of “being chosen and recruited in mutually beneficial cooperative interactions.” This view is fleshed out here: http://cholbrook01.bol.ucla.edu/BBS_Baumard_Commentary_Fessler_Holbrook_2013.pdf
Of course, there is substantial overlap between these views, and they are not mutually exclusive. The evolution of moral cognition is a huge and thriving field of research, and behavioral scientists have only begun to scratch the surface. Many new insights will undoubtedly emerge in the coming decades.
@Alonzo Fyfe
I do not disagree with anything in this post. My only beef is with your claim that “evolution only accounts for the fact that we are intentional agents with reward systems.” I think this claim is flatly refuted by empirical evidence. Here is a sampler platter of things evolution can account for, other than the fact that we are intentional agents with reward systems.
Evolution accounts for guilt, sympathy, anger, trust, and gratitude:
http://www.cdnresearch.net/pubs/others/trivers_1971_recip.pdf
Evolution accounts for revenge and forgiveness:
https://www.dropbox.com/s/n5i80tt57crwf2f/2013%20McCullough%20et%20al%20BBS%20target.pdf
Evolution accounts for our concern for our own and others' moral reputations:
http://www.ped.fas.harvard.edu/people/faculty/publications_nowak/nature05c.pdf
Evolution accounts for why we sometimes prefer to rehabilitate vs. punish criminals:
http://www.cep.ucsb.edu/papers/2012_PetersenEtal_PunishOrRepair.pdf
Evolution accounts for why some people morally condemn drugs:
http://rspb.royalsocietypublishing.org/content/early/2010/06/12/rspb.2010.0608.full.pdf
Evolution accounts for stigmatization and ostracism:
http://courses.washington.edu/pbafhall/514/514%20Readings/evolutionaryoriginsofstigma.pdf
Evolution accounts for fairness and impartiality:
http://cholbrook01.bol.ucla.edu/BBS_Baumard_Commentary_Fessler_Holbrook_2013.pdf
Evolution accounts for the way we think about freeriders in group efforts:
http://www.psychologytoday.com/blog/darwin-eternity/201301/punish-the-shirkers-especially-the-low-status-ones
Evolution accounts for differences in moral judgments of wealth redistribution:
http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1798773
Evolution accounts for moral opposition to incest:
http://rspb.royalsocietypublishing.org/content/270/1517/819.full.pdf
Any descriptive theory of morality, including desirism, ought to be consistent with the empirical evidence just cited, along with all the other existing empirical evidence in the field of moral cognition. If desirism is to be a descriptive theory of morality, as opposed to just a normative theory of morality, then it ought to be able to explain why factors irrelevant to desire fulfillment (e.g. upper body strength, duration of coresidence with a sibling) appear to have such a strong effect on our moral judgments. I think evolutionary psychology can account for these oddities. I am skeptical that desirism can.
David Pinsof
I do not disagree with anything in this post. My only beef is with your claim that “evolution only accounts for the fact that we are intentional agents with reward systems.”
I never said that.
Or, if I did, I typed something incorrectly and I need to correct it - so please point it out to me.
My claim is that the fact that we are intentional agents with a reward system is the only thing required for a moral system. There are lots of things that evolution can account for that are not required for a moral system.
Evolution accounts for guilt, sympathy, anger, trust, and gratitude
But it does not tell us when we ought to feel guilty, or that we ought to feel sympathy, or when anger is appropriate, or who we ought to trust, or why trust is good, or when people deserve gratitude.
Evolution accounts for our concern for our own and others' moral reputations:
But moral reputations are based on what people believe to be right, not what is right in fact. Sometimes, a person doing the right thing must destroy their moral reputation because others falsely believe that what they do is wrong. The person hiding escaped slaves in a slave culture, or hiding Jews in Nazi Germany, puts their moral reputation at risk.
Evolution accounts for why we sometimes prefer to rehabilitate vs. punish criminals:
But it does not tell us who deserves to be punished or who we ought to rehabilitate.
And so on.