Thursday, May 31, 2007

Morality is Not Hard Wired

The Atheist Jew, in an article titled, “For the 50th time, morality is hardwired,” has pointed to an article in the Washington Post that purports to show further evidence that morality is hard-wired.

In this case, researchers asked subjects to imagine donating a sum of money to charity or keeping it for themselves. They reported that the thought of giving money "activated a primitive part of the brain that usually lights up in response to food or sex."

According to the author of the article, this suggests that morality is “basic to the brain, hard-wired and pleasurable.”

Desire utilitarianism holds that all actions, even moral actions, aim to fulfill the desires of the agent. The difference between moral and immoral actions is grounded on whether those actions would fulfill good or bad desires (which are desires that tend to fulfill or thwart other desires respectively). As such, it gives an important role to play for structures that are “basic to the brain”, including states associated with pleasure.

However, hard-wired?

Nothing in the experiment suggests that these traits, insofar as the concept of morality applies to them, are hard-wired (as opposed to soft-wired or programmed through social conditioning). In order to determine this, the authors would have to know how those states came into existence, and that they could not have been altered (redirected, strengthened, or weakened) through social factors.

It is interesting that the researchers would compare what they call moral behavior to eating and sex. They claim that they have discovered something about these three realms that makes them similar. However, they also need to explain what makes them different. Why is altruism ‘moral’, but sex and eating typically ‘amoral’ and sometimes ‘immoral’?

If one of their subjects obtained the same pleasure in thinking about keeping the money for herself rather than donating it, what would prevent that from being moral?

In addition, if morality is hard-wired, then how do we account for the link between right and wrong actions, and praise and condemnation? The way that I link them is by saying that morality is concerned with desires that can be altered (created, destroyed, strengthened, weakened) by praise and condemnation. What do the advocates of a hard-wired morality offer as an explanation for this relationship?

I find it interesting that, towards the end of the article, the author explores the possibility that knowing how the brain works could open up new moral possibilities. Specifically, the article looks at the fact that we may be hard-wired with a preference to help those who are close to us, and to be apathetic towards the fate of others who are distant. The suggestion is that this research may provide a way to give us a more general sense of altruism that extends to distant others.

This creates a problem for the thesis that morality is hardwired. If morality is hardwired, then a preference for those who are near to us is moral, and extending altruism to include those who are not next to us would be immoral.

Alternatively, if morality recommends some sort of altruism for those who are not near to us, and this general altruism is not hard-wired, then this shows that morality is not hard-wired.

Either way, the suggestion that we need to alter what is hardwired to do that which is moral creates serious problems for the thesis that morality is hardwired.

Grafman and others are using brain imaging and psychological experiments to study whether the brain has a built-in moral compass.

Before we go about trying to discover whether the brain has a built-in moral compass, let us first give some thought to the type of experiment that would reveal such a fact.

If you find a metal rod with a point on one end lying on the ground, this would not be enough to argue that you have found a ‘compass’ of some type. Even if the rod were pointing north as it lay on the ground, this would not justify calling it a compass. Even if you pick it up and let it dangle from a rope, noting that as it spins about it sometimes points north, it would be absurd to argue, “Here is an instant in which the rod is pointing north; therefore, it is a compass.”

In order for something to be a compass it has to reliably point north. This means that you have to know where north is by means of some other source, and you have to show that this alleged compass points in the same direction.

In order to determine if we have a moral compass, we have to know what moral is. In addition, we have to know the answer to this question by some means other than knowing the direction in which the alleged compass is pointing.

What these researchers are doing is comparable to a scientist dangling a metal rod from a string and saying, “This is a compass. It always points north.” Then, when they are asked what ‘north’ is, they answer, “North is whatever direction the rod points to at the time that you ask the question.” In this case, the claim that the compass always points north (or that we have a mental faculty for pointing out what is moral) is empty. It has been made true only by virtue of a very tight question-begging definition.

The more researchers learn, the more it appears the foundation of morality is empathy.

What is it that makes empathy moral?

Why is it not the case that empathy is immoral?

[S]ome wonder whether the very idea of morality is somehow degraded if it turns out to be just another evolutionary tool that nature uses to help species survive and propagate.

First, my objections have nothing to do with the idea that morality is somehow degraded if it turns out to be an evolutionary tool. Somebody might be just as tempted to argue that morality is degraded if it is described as relationships between malleable desires and other desires, which I defend. However, to say that something is ‘degraded’ is to make a value judgment. Making a value judgment requires a theory of value. Theories of value are what this whole debate is about. At best, such an argument begs the question.

Second, morality will never come down to “a tool that nature uses to help species survive and propagate.” This is because it cannot make it past a Euthyphro dilemma for hard-wired morality. If morality is hard-wired, then whatever helps a species to survive (or, more accurately, the genes within that species, since genes, not species, are the natural units of selection) is moral. If slaughtering those who do not share a particular gene helps that gene to survive, then reducing morality to what helps genes survive would imply that this slaughter is morally justified, perhaps even obligatory.

When I read articles such as this, I call to mind an image of a group of scientists huddled around a table studying a grapefruit. Yet, when I listen to them speak, they keep talking about grapes. As they study their grapefruit, they say things like, “Grapes are about six inches in diameter,” and “Grapes grow individually on grape trees”.

I think that they are mistaken. However, I am not going to say, "By studying grapes you will lose the mystery and awe that we should hold towards grapes."

I’m going to say something like, “Um . . . doc . . . you know that round fruit you are studying? Well, I hate to tell you this, but that’s not a grape. That’s a grapefruit. Grapes are . . . well . . . something a bit different from what you are looking at.”

Or, in this case, “Um . . . doc . . . you know those basic brain states that you are studying? Well, I hate to tell you this, but that’s not morality. Those are desires. Morality is . . . well . . . something a bit different from what you’re looking at.”

[One] implication is that society might have to rethink how it judges immoral people.

The same objection applies here. This time, imagine a group of scientists studying a grapefruit, discovering that grapefruits are about 6 inches in diameter and grow individually on trees, telling us, “Society might have to rethink what it believes about grapes.”

How about, “You need to rethink the idea that what you are studying are grapes. If you were actually studying grapes, then you would discover that we do not have to rethink our beliefs about grapes at all – or very little.”

In fact, this claim suggests that these scientists are engaging in a bit of cherry-picking. They look at the grapefruit and notice the things that the grapefruit has in common with grapes. After all, they are both fruits. They ignore the differences between grapefruits and grapes. Then they proclaim, “Look at what we have discovered about grapes!” When others point out the differences between what they are studying and the common concept of ‘grape’, instead of admitting a mistake, these researchers claim, “This is all your fault. Obviously, you do not understand grapes as well as you thought you did.”

In another experiment . . . patients with damage to an area of the brain known as the ventromedial prefrontal cortex lack the ability to feel their way to moral answers. When confronted with moral dilemmas, the brain-damaged patients coldly came up with "ends-justifies-the-means" answers.

And by what moral theory do you hold that ends do not justify the means?

Maybe ends do justify the means, and this ‘brain damage’ that you are studying unlocks a natural barrier that prevents people from doing that which is right. Maybe we evolved a basic disposition towards immorality – towards ‘feelings’ that get in the way of our making sound means-ends moral calculations.

How do you researchers know that this is not the better explanation?

Researchers cannot do this without having an independent standard for determining right from wrong. Do they want to trust instinct and natural inclinations? If they did this, then they would end up defending the morality of whatever prejudices exist in a society at the time.

Among a society of slave-owners, they would be able to show how the disposition to enslave others was wired into the brain – because, certainly, there is something going on in the brain of those who condone slavery.

In a racist society they would be able to look into the brain and see that there is something going on in the brain whenever people make racist judgments, and from this be able to conclude how the morality of racism was wired into the brain.

In a society where women are treated as property, there is almost certainly some brain state that can be measured associated with treating women as property, allowing researchers using this method to conclude that the morality of treating women as property is wired into the brain.

If we take the assumptions of those who claim to have found morality wired into the brain to their logical conclusions, these would be the types of conclusions that these researchers would end up defending.

Wednesday, May 30, 2007

Al Gore: The Assault on Reason

It appears that Al Gore does not fully appreciate people’s ability to believe what they want to believe in the face of contradictory evidence.

In his new book, “The Assault on Reason,” Gore suggests that members of the Bush Administration were repeatedly presented with evidence that Hussein had no weapons of mass destruction and that there was no evidence that he was involved in the 9/11 attacks. On the basis of their having been presented with this evidence, Gore asserts that the Bush Administration was aware of these facts and that it intentionally deceived the public about what it knew.

More generally, Gore writes his book on the assumption that people have a fundamental ability to reason. However, in order to reason correctly, people need true premises – that is to say, they need accurate information. The ‘assault on reason’ that Gore writes about is an assault on the institutions for providing the people with true premises (facts).

This assault on reason comes from corporate interests with the capacity to profit by misinforming the American people. Those same interests have control of the media through which they can promulgate their misinformation. Plus, they have purchased and put into place an administration that endorses the same policy of advancement through campaigns of misinformation and misdirection.

It is difficult to fix a problem if we do not accurately diagnose it to start with. I would like to suggest that the “assault on reason” is not limited to a failure to provide the people with facts. It also involves an inability to deal with those facts reasonably.

I would like to offer the Creation Museum that opened earlier this week in Kentucky as Exhibit A. This museum illustrates how easy it is for people to believe the most absurd things, and to continue to believe them in the face of overwhelming evidence to the contrary. The best explanation for the observation that the Creation Museum has been built is not one that involves a conspiracy by people who seek to profit through deceiving the public. The best explanation is that a lot of people lack the fundamental ability to reason – even with complete access to all of the relevant facts.

[Note: I really enjoy this whole scientific method thing. Observation. Theory. Prediction. Test the prediction. Prediction fails. Offer a new theory or make modifications to the existing theory. I think people ought to use it more often.]

For example, one of the events that Gore wrote about was a request the Bush Administration sent to the intelligence community to find evidence linking Saddam Hussein to the attacks of 9/11. When the report came in that there was no evidence of such a link, and reason to believe that Hussein represented just the type of secular Arab government that the theocratic Al Qaeda despised, the Administration sent the report back with the comment, “Wrong answer.”

Gore interprets this as indicating that there was a group of people in the Bush Administration who did not care whether the claim of Hussein’s involvement in 9/11 was true or false. They wanted to present it as being true even while they knew it to be false, and needed a report that fit their policy.

I would like to suggest a different model for interpreting these events. To see how this model works, imagine the organizers and financiers of the Creation Museum going to the National Academy of Sciences and saying, “We want a report showing that there were dinosaurs on the Ark.” The National Academy sends back a report that says that the dinosaurs died off 65,000,000 years ago and could not possibly have been on the Ark, if there was an Ark, which is itself a dubious proposition. The Creation Museum then hands back the report saying, “Wrong answer.”

The comment in the case of the Creation Museum does not mean, “Okay, I know that the evidence doesn’t support my position. However, I want you to come back with a report that does support my position because I want to use it to mislead others.” In this case, “Wrong answer” means, “I have a different and far more reliable source of information. If your report contradicts my more reliable source then this proves that you have not done a good job preparing your report. Get it right.”

The question is: Which view best explains and predicts the Bush Administration’s behavior?

It may be difficult to believe that people can be so deaf to such strong evidence that contradicts their views. However, the presence of the Creation Museum itself stands as clear testimony to the possibility.

Gore also reported that members of the intelligence community felt pressured not to turn in information that the Administration did not want to hear. People who did so felt that their careers – their chances for advancement – were put at risk. Gore explains these observations by suggesting that an administration willing to mislead the public to promote a sinister agenda was also willing to intimidate anybody who said anything that it did not want to reach the public.

My alternative explanation suggests a different attitude. “We know that Saddam Hussein has weapons of mass destruction and was tied to the 9/11 attacks. We know this from a higher and more reliable source (sometimes described as ‘Bush’s Gut’ though often given additional weight by calling it the voice of God or Jesus). If you are too incompetent to find the links that we know are there, then you have proved that you are not a very good agent. Clearly, we are not in the habit of promoting incompetent agents such as yourself.”

If it becomes too difficult to believe that a large number of agents are too incompetent to find the links, the next step is to accuse them of belonging to some sort of conspiracy that is not actually concerned with defending America and Americans from attack. Since the Administration ‘knows’ about the weapons of mass destruction and the ties to Al Qaeda, one can only conclude that people who are hiding the information have hidden sympathies for the terrorists and, perhaps, want them to succeed. At best, those who fail to find the link are too lazy to concern themselves with the implications (in the form of a mushroom cloud) of their failure.

Again, we see this type of thinking from people like those who constructed the Creation Museum. Scientists who do not come back with evidence of dinosaurs on the Ark are poor scientists. Or, if it is too difficult to believe that so many scientists could be so poor at their jobs, then they are involved in a liberal alliance working against God by pushing an atheist agenda. As agents of the dark side, they are intentionally trying to bury or distort evidence that does not serve their political ends.

There is one more observation supporting the thesis that the problem rests in a failure to reason. The campaigns to bury the facts have buried them from everybody equally. Yet we clearly see that the tendency to believe that which is unreasonable to believe clusters in the same people. Those who fail to reach reasonable conclusions regarding evolution and the age of the earth are the same people who fail to reach reasonable conclusions regarding global warming, who in turn are the same people who continue to believe that Hussein was responsible for 9/11 and had weapons of mass destruction.

The same inability to reason infects all three issues. Those who cannot deal rationally with the evidence on one issue, it seems, also cannot deal rationally with the evidence on the others.

I will offer a cautionary remark. At this point, I am at risk of cherry-picking my own data. I may well have selected these three examples because they supported my thesis, and simply tossed out (or failed to consider) examples that would have contradicted it. So, there is a weakness on this point.

Still, the human capacity to draw absurd conclusions in the face of clear public evidence to the contrary is an observed fact. A solution that says that all we need is to present people with the facts, and they will certainly draw the most reasonable conclusion, is idealistic at best.

Tuesday, May 29, 2007

Notoriety

It appears that the Wikipedia people want to delete the newly created page on Desire Utilitarianism. The reason is that it is not notorious enough. There are not enough people talking about it to give it the type of ‘star power’ that makes it worthy of a Wikipedia entry.

The alternative view, used by those who are defending the page, is that it should be included because the theory is well-developed and defensible.

One implication that I would like to note is that, if the criterion for including the page is notoriety, then I would correct the deficiency by engaging in some sort of publicity stunt that would get me and the ideas that I present into the news. Whereas if the criterion for including the page is the quality of the position and the arguments offered in its defense, I would correct the deficiency by seeking to do a better job developing the theory.

However, I did not bring this up to protest that I am being unfairly treated, to claim that Wikipedia is seeking to impose censorship on ideas that it considers unworthy of inclusion, or to spout any other such nonsense. I object to the habit that some people have of stomping their feet and crying, “I’m being oppressed!” simply because others did not put them on a pedestal.

The reason that I bring this up is to discuss a topic that has been rattling around in my skull for the past couple of decades which I do not know how to resolve – the issue of marketing.

Package and Product

I want to make sure that I fit in here a note that I am humbled and honored by those who have made an effort to put this page on Wikipedia. They have put a surprising amount of effort into defending their actions. In a sense, I am concerned that I may not have done what I should have to be worthy of these efforts – namely, given them what it would have taken to make their work easier.

Seriously, take a look at this site. Take a look at the length of my posts and the way that I write. It should be obvious that, among my various desires and aversions, I do not have a particularly strong interest in packaging. I create essays, throw them up onto a rather traditional page, then start working on the next article.

However, it is packaging that sells product. At least, it is an extremely important factor.

Those who created the Creation Museum understood this fact. Much of the harm that the Museum and those who built it will do is due to the fact that they have put nonsense in a pretty package, and packaging will sell the idea.

All major religions use the principle that the packaging sells the product, whether they acknowledge it or not. The package comes in the form of religious architecture, art, music, and ceremonies.

Moral Issues

The intellectual part of my brain (the part concerned with beliefs) is well aware of the importance of packaging. I am not so foolish as to think that “If you build it, they will come” is a good business plan.

At the same time, the affective part of my brain (the part concerned with desires) does not care about these facts. In fact, I have an aversion to using packaging to sell my product. Again, look around at this web site. Isn’t it obvious?

I reject the very practice of using packaging to sell an idea. “Proposition P was presented to me in a pretty package; therefore, P is true.” This is nonsense. However, this nonsense is a fact about how the real world works. Could I, in good conscience, use a method of selling product that I find not only distasteful but morally objectionable? I want to present my arguments, and have people accept or reject my position on those standards.

Wikipedia’s concern with notoriety, however, exposes the flip side of this coin. Just as there are people who think, “Proposition P was presented to me in a pretty package; therefore, P is true,” there are people who think, “Proposition P was presented to me in an ugly package; therefore, P is false.” If I know that people are going to do this, then is there not some principle of morality, or at least of prudence, that recommends putting the product in a package that is at least good enough to get past the doorman?

Quality of Content

Of course, when it comes to selling my own product, one of the things that I constantly worry about is whether it is worth buying.

This is another problem that I have with trying to sell product by putting it in a better package. What if it is not good enough to buy? I have seen far too many people sell ideas that are simply too stupid to be worth a presentation – such as the Creation Museum. I certainly have no interest in taking people such as those as my role model.

I have mentioned in earlier posts that, to deal with the issue of quality, the best option that I see is simply to put my ideas out into the public and see if anybody raises objections to them. However, in doing this, I want people to accept or reject the ideas that I propose based on the quality of the defense that I can put up. I do not want them accepting the position on the basis of a pretty package. Then again, I do not want them ignoring the position on the basis of an ugly package.

Yes, I think that the positions that I defend are correct – though I am more confident of some than of others. However, a decent respect for others as thinking beings means not assuming that I am correct and using whatever methods will ‘work’ to convince others to accept what I claim. A decent respect for others requires limiting myself to presenting my reasons, and giving those with reasons against my view a chance to speak to those reasons.

Dividends

Quite often, I think to myself, “Alonzo, it is time for you to start to put some effort into marketing your ideas. Just writing them up and putting them on a web site is not going to do you any good.”

After all, what I would really like to be doing with my time is working on these projects full-time. I could easily spend every day getting up and starting work at 4:30 in the morning, and not turning my laptop off until midnight, working on the types of subjects that I write about here. I am envious that some people get to do this. However, I notice that the job tends to go to people who are showmen first, more interested in ratings than in the accuracy and sensibility of what they say.

The Creation Museum can collect $27 million for nonsense. One would think that sense could generate a bit more income.

With these particular concerns in mind, I was actually saddened to read that one argument being voiced in favor of deleting the entry on desire utilitarianism is that I have focused my attention on the quality of my arguments rather than the quality of my marketing.

Even now, I call to mind (once again) a list of things that I could do to market my ideas better. And I simply do not wish to do so.

Yet, it costs me.

Monday, May 28, 2007

Doug Giles: Atheist Theft of Christian Morality

Doug Giles has an article in TownHall.com called, “Hey Atheists . . . Get Your Own Moral Code,” where he asserts that atheists borrow their morality from Christianity. Of course, atheists cannot have their own morality – morality comes from God and atheists have no God. So, the only option left is to borrow somebody else’s.

Actually, Giles puts the cart before the horse in this case. Every moral code – including the moral principles written into the Bible – was invented by humans. Some humans saw fit to assign their moral beliefs to a God as a way of saying, “It’s not me saying this – I can understand why you would not want to listen to me. It’s God saying this, and God will visit great suffering upon you if you should dare disobey me . . . I mean . . . him . . . if you disobey him.”

We can get a hint that religious ethics were invented by humans and assigned to God by their content.

When Moses came down from the mountain with the 10 Commandments, he found the people he had left behind worshipping a golden calf, so he slaughtered 3,000 of them. Of course, he said that their sin was not following God. However, it seems that a benevolent God could have thought of something better than a slaughter to get the point across. On the other hand, a self-important mortal without divine powers, angry at people who were serving some other so-called priest, might face a more limited set of options.

Killing a child who talks back to his parents? Some ancient scribe must have been particularly angry at his child that day. In fact, I can easily imagine an abusive parent – a pillar of the community, no doubt – losing his temper and killing his child, then wanting some way to keep the villagers from turning against him. So, he tells them, “God said that I can kill my child. In fact, rather than condemn me, God said that I should be able to bring my child before you, and you should kill him.”

This is not to say that everything in the Bible is mistaken. If, as I have written, morality is a rational attempt to promote good desires and inhibit bad desires, it would be quite surprising if people 2,000 years ago got everything wrong.

In the realm of physics, certainly, people of that era did not get everything wrong. The ancient Greeks and Romans knew that the Earth was round and that things were made up of atoms. Of course, they got the details wrong. However, they went surprisingly far with what they had.

Similarly, it is not unreasonable to hold that ancient cultures could also recognize that they would be better off if others simply told the truth – so they invented a commandment against bearing false witness, and claimed that this was one of the rules that came from God. They certainly did not want their neighbors to be sneaking into their house and killing them – or those they cared about – so they promulgated a moral prescription against killing.

Of course, ancient writers would also realize that they needed a way to make sure that people obeyed these rules, even when they would not get caught breaking them. One simple solution is to tell them, “Even if you kill me while I am alone and my back is turned, God sees all, and will make sure that you are punished in the afterlife.”

Actually, I do not believe that there was a great historical conspiracy by a bunch of atheists to promulgate a god belief that the conspirators knew to be false. The people who promulgated this belief were victims of their own ignorance as well. Yet this does not change the fact that religion (and religious morality) was invented, and that the inventions that showed the greatest promise would be those that caught on – along with some twisting and manipulating by those with power, to help make sure that they stayed in power.

The Enlightenment

The Enlightenment, from which we got the principles written into our Declaration of Independence and Constitution, was perhaps the greatest borrowing of all time. In the 17th and early 18th centuries, philosophers tried to do for morality what they had done for physics and chemistry – they tried to discover a set of ‘natural laws’ without looking at scripture.

John Locke, for example, did not quote scripture when he wrote his Two Treatises of Government. He turned to reason – imagining humans in a ‘state of nature’, and looking for the moral laws that would govern people in such a state. He showed that one can take moral philosophy a significant distance without looking to scripture, coming up with a set of principles that made far more sense than anything the Church had been saying for countless centuries.

Locke wrote, for example, that if we look at people in a 'state of nature', we can see them coming together. However, reason finds no clear and unambiguous mark that identifies one person as 'he who has a right to rule' and another as 'he who has a duty to obey'. Without such a mark, reason tells us that all people in a state of nature are morally and politically equal. When they come together and form a civil government, it derives its power from the consent of the governed, not from some natural right to rule for some and natural duty to obey for others.

Locke was a religious man, but his morality and his methods were firmly grounded on Enlightenment principles of reason over faith. When he was done, he, and other theists, took those conclusions and assigned them to God.

Another moral system, invented by humans, and assigned to God.

Actually, if the dispute were merely over whether moral truths that can be known by studying the real world were put there by God or emerged through natural processes, I would have a different response. This would be like a group of people in the grips of a famine spending the time they should be spending harvesting food debating whether food comes from God or evolution. Debate this in your spare time as you wish. However, regardless of how they got here, let’s at least make sure we harvest enough to survive.

Giles, however, has no interest in the possibility of theists and atheists deriving the same moral principles by reason – the same way that we derive the same periodic table of elements by reason, disputing only where it came from, not what it says. He has so much hatred that he needs to see a difference – a condescending and denigrating difference – between their moral views.

Slavery

Another significant place where religious ethics borrowed from an outside source is on the issue of slavery. Many abolitionists were religious. However, they did not find their arguments against slavery in the Bible. If those arguments were in the Bible, then where had they been hiding for the previous 2,000 years? Why is it that the Christians living in the Roman Empire never heard that slavery was wrong?

In fact, those who defended slavery were able to find substantial support for their position in scripture. If God was against slavery, then why were there so many passages in the Bible where God commanded people to take slaves? Why were there passages on the proper care and feeding of slaves? Why, instead of a commandment that says, “You shall not own another human as if he were cattle,” is there a commandment saying that you should not even have your slaves do work on the Sabbath?

The argument for the abolition of slavery comes from the same root as the argument for government from the consent of the governed - that reason does not see in nature any natural and unambiguous mark identifying one person as having a right to rule and another as having a duty to obey. It does not matter that scripture tells us of such a right; we find no such mark in nature.

Once again, there was no appeal to scripture in this argument. It was an appeal to reason alone. Once reason showed the immorality of slavery, theists took the lessons of reason and assigned them to God.

Giles then comes along and asks, “Why do atheists seem to follow Christian morality?”

The answer is, “Because what passes for Christian morality these days is non-scripture based morality adopted because of its reasonableness and then assigned to God.”

The Future

People are going to continue to adopt secular moral principles. The reason they will do so is that people who look at morality through the light of reason are finding real-world solutions to real-world problems. People will see the way these principles will make their lives better, and will adopt them.

Those who do not wish to give up their religion – who also want to think of an afterlife and a God – will continue to mix the two. They will take reason-based morality and continue to assign it to God, ignoring whatever elements of scripture contradict morality and asserting that those fragments of truth found in the Bible are proof that morality came from God.

In short, people such as Giles are like an apprentice stealing the masterpiece of a much more skilled craftsman, holding it out in public, and saying, “Look what I did!” Not only does he fraudulently claim to be a master craftsman, but he claims to be the moral superior of those he robs as well.

Sunday, May 27, 2007

Rally for Reason

A $27 million Creation Museum opens Monday in Boone County, Kentucky, and a group calling itself “Rally for Reason” will be there to . . . well . . . to rally for reason, I suppose.

In reading about the museum and the rally for reason, I have noticed a few interesting topics of discussion.

For example, should there be such a demonstration at all?

Claim against: a demonstration against such a museum simply gives it legitimacy.

Okay, imagine that you are on trial for a crime you did not commit. The prosecution stands up and spends the day presenting the evidence against you. This includes, of course, the eye-witness testimony of people who have claimed to have seen you at the scene of the crime committing the act, circumstantial evidence, and physical evidence misinterpreted to fit the prosecutor’s theory that you are guilty.

After this is done, your defense attorney stands up and says, “The defense rests, your honor. For me to stand here and refute everything the prosecutor has just put before you would simply give it an air of legitimacy that it does not deserve.”

What do you think? Good strategy?

If that’s such a good strategy, then I am surprised that we do not see more of it in today’s court rooms.

The fact is, the position that this rally is against already has legitimacy. Furthermore, the cost of the museum alone is going to give the presentation an air of legitimacy. Particularly in the mind of a young child, everything in the museum is going to seem real. The fact that they are seeing it in the context of trusted parents and other authority figures is going to give it a cloak of legitimacy that will persist far into the future – quite possibly for the rest of that child’s life.

Opening day is the day in which the Museum will get the most publicity. Opening day is the day in which it is important that those reporting on the event note, even if in passing, “Plus there were these people protesting that the Museum does not give an accurate account of the scientific findings.”

In standing in opposition to the Creation Museum and what its founders are trying to accomplish, there are two stands that one could take.

One form of objection is to argue the factual claims. “The Creation Museum says X. We say not-X. Here is the scientific evidence for not-X.” This debate has been going on for as long as I can remember, and does not seem to be going anywhere. It is time, I think, for those who use this strategy to ask why, and to ask if there is something that they can do differently that would be more effective.

The other form of objection (which is not independent of the first, but which builds upon it) is to point out the moral faults of those who organized this project, and those who financed it.

When I was in high school, and I decided that when I died I wished to leave the world better than it would have otherwise been, I knew that my first goal was to find out what ‘better’ was. I was surrounded by people who claimed to know, even though they were adamantly opposed by people who were just as certain that they were wrong. I had made my oath during an American History class, where we were discussing the Civil War. In that war, 250,000 Southern soldiers died, though not before killing 350,000 northerners, on the certainty that they were right.

The first of the moral rules that I committed myself to when I started my project was the duty of intellectual responsibility. If I was going to advocate policies that affected the lives, health, and well-being of real people, then I had an obligation to give those policies close scrutiny.

I spent 12 years in college studying different aspects of value theory, trying to make sure that I got the facts right. I felt that this was necessary before I was even qualified to write a blog such as this one.

If only the organizers and financiers of the Creation Museum had felt anything like intellectual responsibility – enough to consult experts in the field and to say, “It is my moral responsibility to make sure that I get the facts right. So, tell me, have I met my moral responsibility in this case?”

The answer, of course, is not only, “No,” but, “Hell no!”

Of course, they claim to have consulted their own experts. However, their view of ‘experts’ is quite at odds with the concept of intellectual responsibility. In other words, it is not the view of ‘experts’ that a person would adopt if that person wanted to make sure that he did no harm to innocent people. Their view of ‘expert’ is like that of a company wanting to pour a substance into the ground water that risked making thousands of people sick and killing some.

This is a view that says, “Canvass everybody in the field and, if you can find even one who says that the chemicals we dump into the soil are harmless, then that is the one we will listen to. We will ignore everybody else.”

This is, of course, the view of ‘expert’ drawn by somebody who sees nothing wrong with taking the lives of innocent people, as long as it puts money in his pocket to do so.

The organizers and financiers of the Creation Museum are, in fact, demonstrating a similar lack of concern with who lives and who dies. Except, they will cause their deaths, not by polluting the groundwater, but by polluting minds, making people less able to understand – thus, less able to explain and predict – a real world that is notorious for its ability to kill people without a second thought.

The people that this Creation Museum targets for condemnation are people who keep over 6 billion people fed, and who have found cures for whole sets of diseases that would otherwise have taken the lives of even those who financed the Museum. It is, indeed, quite hypocritical of these people to condemn not only science but scientists – asserting that they are the root of all evil in this country – while, at the same time, using what these scientists have learned to extend their own lives and preserve their own health.

They would show more sincerity and integrity if, while lying on the operating table before going under the knife, they told the doctor what they think of the ‘scientific method’ and the reliability of the knowledge this method makes available.

The scientific community could do much better if it had a larger pool of minds to draw from for making future scientists. It would also do much better if those who did not go into science, but who went into teaching or into politics, understood the real world well enough to teach others how it worked and to use that knowledge in making real-world decisions. However, scientists have to draw their raw material from a resource that has been polluted by individuals steeped in their own ignorance and superstition.

So, as a matter of fact, more people will die, and more people will suffer the debilitating effects of disease, because polluted minds are not able to make sound policy.

At this point I think it is important to add, in case somebody never caught my earlier posts, that there is no moral sense in turning somebody from nonsense to sense by means of legal prohibitions on believing nonsense. Rather, a decent respect for human dignity requires that we limit ourselves to gentler forms of persuasion. Accordingly, those who built the Museum certainly had a right to do so, since the money came from their own pockets.

Moral philosophers have long recognized the difference between liberty and license. Liberty does not mean, “Whatever I do cannot be wrong.” Liberty means, “The wrong that I do may only be met by private censure, and not by public law.” The right to freedom of speech includes the right to say, “You’re wrong. Not only are you wrong, but you are being morally reckless. You are showing a disregard for the well-being of others that would shame any decent person.”

That is the case with those who have organized and financed this Creation Museum.

It is one thing to tell the public that these people are wrong. That is only a part of the message. The other part is to make an example of them – to let the people know, “These people are shamefully wrong, and no decent person would ever want to be like them.”

Saturday, May 26, 2007

Neil DeGrasse Tyson: Meaning

In Neil DeGrasse Tyson’s second presentation at Beyond Belief 2006, he spoke about the process of assigning meaning to things.

He spoke specifically about his attitude when he is on a mountain top with a telescope attempting to discover what is going on at the center of the galaxy. He attempted to express a particular sense of awe and wonder at capturing a photon that left the region around the galactic center 30,000 years ago, travelled all this distance past stars and through interstellar dust clouds, to crash into his digital detector carrying information about the center of the galaxy as it was 30,000 years ago.

As it turns out, Tyson picked something that is particularly easy for me to relate to. When I was young, one of my first interests was astronomy. I bought myself a telescope, and I read avidly about the subject. I remember seeing a map of the galaxy that showed what we knew then of the spiral arms. Only, the region towards the center of the Milky Way was not mapped, because our techniques did not allow us to gain information from that direction.

So I assumed, with sadness, that I would not get to see what the center of our Milky Way looked like.

I was wrong. Scientists have since looked at wavelengths of radiation that can make it through the dust and clouds from the center of the Milky Way. Just within this last year I have seen the images that were created, and computer simulations created out of that data. It turns out that there is a black hole with 3 million times the mass of our sun, and stars that orbit it like comets orbit our sun, coming in close, then zipping back out. Whole suns, acting like comets.

Actually, these time-lapse simulations tell me what I really wanted to know about the center of the Milky Way. Though I would definitely like the opportunity to look on it with my own eyes, my real interest was in knowing what was happening, which the simulations tell me far more accurately than I could ever learn from direct observation.

Tyson wants somebody to attach electrodes to his head (or do similar experiments) to determine if the sense that he gets when he marvels at the fact of his studying the center of the universe is anything like a religious experience. If it is, he says, then this is something that he, as an educator, can offer people.

However, Tyson’s experience will have one important quality that will separate it from any religious experience. Tyson’s experience relates to real-world events. It is an appreciation of something that is actually happening – a part of the real world.

In earlier posts, I have compared the so-called ‘meaning’ that a person gets from religion to The Matrix or some sort of experience machine. It is like the person who is tremendously proud of himself as he (imagines himself) doing great things in providing the starving and thirsty people of a remote village with clean water and nourishing food. Only, the ‘good’ this person does is purely illusionary. There is no village. There are no villagers. There is no food or water. The agent is simply lying in a cot with a machine hooked up to his head making him believe he is providing villagers with food and water.

Yet, our agent refuses to be woken up from this illusion because, he claims, it gives his life meaning. He would see himself and his life as a waste if he were doing something other than providing villagers with clean water and nourishing food.

But you are NOT providing anybody with clean water and nourishing food! You are lying in a cot doing absolutely nothing, merely imagining yourself providing villagers with clean water and nourishing food.

And so it is with the person who claims to serve God. “You are only imagining yourself to be serving God when, in fact, you are promoting ancient myths and superstitions that, for the most part, are not even good for the people you convince to accept them. This has the moral quality of warning a group of thirsty people to stay away from clean drinking water to please God, or of forcing starving people to sacrifice their food at a religious ceremony.”

Even here, we may find some who report to be providing food and medicine to the poor for religious reasons. However, we can divide these up into two groups, depending on how they answer one simple question.

“If there was no God, would you still continue to feed the hungry and cure the sick?”

If the person says, “No. If there is no God then everything is worthless. The only reason that I feed the hungry is because God wants me to – so, if there is no God, then I have no reason to feed the hungry.”

People such as this live a meaningless existence. If they are alive, aiding the poor and the sick, then this is a mere accident. What they truly want is something that they can never achieve in the real world. What they have the capacity to achieve in the real world is something that they do not care about.

Others might answer, “Yes. Even if no God exists, I would still feed the hungry and cure the sick, because . . . well, because the sick and the hungry are suffering.”

These people can live a meaningful life, even though they claim to be serving God. This is because something that they really want – to feed the hungry and cure the sick – is something that they can accomplish in the real world. They are never going to get the God thing they always wanted. The only potential problem is if their religious beliefs prevent them from effectively feeding the hungry and curing the sick. If their religion bans the eating of foods that the hungry are in need of, or prohibits medical procedures that the sick need, then, even here, the religion gets in the way of a meaningful life more than it contributes to such a life.

Specifically, the person who lives a truly meaningful life is somebody who does something real with his life. The person who spends his life plugged into a fiction – to whatever degree his life depends on the fiction – is somebody whose life is wasted.

I am discouraged, to some extent, when I hear atheists answer the challenge to provide meaning to one’s life by being defensive. “We atheists find meaning in life. Honest we do. Over here. I’ll show you. Here, we have meaning. See? Can’t you see how meaningful this life is?”

Of course, to the person whose ‘meaning’ is locked into having some sort of relationship with God the answer is, ‘No.’ They will not see meaning in such things unless it is something that they already care about.

I would argue that a far more meaningful response (pardon the pun) is to point out how worthless a life is if it is spent hooked up to a lie. To spend a person’s one and only life hooked up to a lie, without getting a chance to do anything real, truly is a wasted life. Indeed, it is about the only way that a life can truly be wasted.

Of course, it is possible for a theist to still find value in things that are real, and it is possible for an atheist to find such value as well. So, between these two people, there is not much of a difference in their capacity to have a meaningful life. Those who have truly wasted the one and only life they will ever have are those who spent that life dedicated to a lie and, in particular, a lie that causes harm to others.

Unless you are talking about something real, it does not even make sense to start talking about something being meaningful.

Friday, May 25, 2007

VS Ramachandran: Cognitive Illusions

We are in the final hour of Beyond Belief 2006, which has been the subject of my weekend posts for the last four months. The conference was starting to wrap up, so when Ramachandran got up to give his second presentation of the conference, it was with the intention of responding to claims others had made to which he thought he had an important response.

In Ramachandran’s first presentation, which I discussed in “Brain States,” he spoke about optical illusions – about how the brain is hard wired so as to make certain mistakes in perception. These mistakes are caused by taking traits that work particularly well in the world we typically live in and using them to interpret events in a modified representation of that world. For example, those traits allow us to accurately perceive three-dimensional objects, but cause us to make mistakes when perceiving two-dimensional representations of those same objects.

This had similarities to the topic of a presentation by Mahzarin Banaji, which I discussed in “Bugs of the Mind.” She talked about prejudice – about how experience can cause us to form a habit of categorizing things in a particular way.

Ramachandran pointed out that these associations are not necessarily irrational. He asserts that our brain is constantly looking to form associations. If, as a matter of experience, we find women often associated with cooking, then our brain is going to get into the habit of associating women with cooking.

This is true in the same way that, if we listen to a Top 40 radio station during a particular summer job, and end up listening to a particular set of songs that was popular at the time, we will continue to associate those songs with memories of that job long into the future. There is no necessary, logical relationship between those songs and that job. However, the association will come easily. Just as there is no necessary, logical relationship between cooking and women, repeated exposure to this relationship will make it easy for that relationship to come to mind.

The mistake we make is when we take these relationships as moral principles – as the way things should be. For example, we get used to associating women with cooking, so we draw the conclusion that cooking should be done by women, and then act so as to push women into that role. This is as foolish as using a relationship between a song and a particular job to conclude that one should be performing that job whenever one listens to that song.

However, Ramachandran went on to discuss other cognitive mistakes that we make in a context that made it clear that they can be and often are prejudicial (harmful) to others. For example, he recounted a story in which he and a friend were walking when they noticed a black man behind them. Shortly thereafter they found themselves walking faster (out of fear).

This started a discussion (continued in the question and answer session after Ramachandran’s presentation) about how the brain processes statistical information.

An unidentified member of the audience pointed out that even if black men are more likely to mug people than Caucasian men, since there are more Caucasians than blacks, one is still more likely to be mugged by a Caucasian man than a black man.

Another unidentified member of the audience pointed out that for every incident of being followed by a black man that resulted in a mugging, there are countless incidents of being followed by a black man that did not result in a mugging. At that moment there were probably many threats that were statistically more likely than being mugged (such as tripping on the sidewalk) that the pair were irrationally perceiving as less significant than the threat of being mugged.
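The first audience member’s point is pure base-rate arithmetic, and it is easy to check. Here is a minimal sketch with invented, purely hypothetical numbers (no real crime statistics are assumed; the group labels and rates are illustrative only):

```python
# Purely hypothetical numbers, chosen only to illustrate the base-rate point.
pop_a, rate_a = 900_000, 0.002   # larger group, lower per-person rate
pop_b, rate_b = 100_000, 0.004   # smaller group, double the per-person rate

muggings_a = pop_a * rate_a      # expected muggings by members of group A
muggings_b = pop_b * rate_b      # expected muggings by members of group B

print(muggings_a)  # 1800.0
print(muggings_b)  # 400.0
# Even though group B's per-person rate is twice as high, most muggings
# come from group A, simply because group A is much larger.
```

The gut fixates on the per-person rate and ignores the population sizes, which is exactly the kind of statistical mistake being discussed here.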

The fact is, the human brain is not very good at processing statistical information and making rational choices. For example, I am currently reading Al Gore’s new book The Assault on Reason. He identified research that showed that people are willing to pay more for insurance against terrorism than they are for insurance against all causes of harm. This is in spite of the fact that ‘insurance against all causes of harm’ would cover terrorism.

The word ‘terrorism’ generates a fear response which exaggerates the risk of harm in the minds of many people, causing them to irrationally evaluate the risk of harm from terrorism as greater than the risk of harm from all causes.

President Bush has bragged that he thinks with his gut. He has also spoken as if he thinks that his ‘gut’ is transmitting messages from Jesus, guiding him as to the correct course of action. What is happening in fact is that he is ‘listening’ to these cognitive illusions, and giving these cognitive illusions the status of unshakable truth, against which he then evaluates the evidence.

In the case of the agent buying insurance above, somebody who thinks like Bush would take the fear generated by the word ‘terrorism’ as grounds to conclude that the risk of terrorism is greater than the risk of harm from all causes (including terrorism). This would be taken as a message from God that he then needs to focus his efforts on protecting the country from terrorism – which is greater than his need to protect the country from harm from all causes (including terrorism). Any suggestion that this course of action is irrational would then be dismissed, since it conflicts with God’s word.

Such is the case when we have leaders who think with their gut.

Now, there are cases where it is appropriate to think with one’s gut. We evolved these dispositions for a reason. When we face a sudden crisis that demands immediate action, we simply do not have time to think everything through rationally. We must act immediately. In these cases, a quick ‘rule of thumb’ which is fast but sometimes wrong is much better than reasoned deliberation which is much slower though less likely to be wrong.

The ‘cognitive illusions’ that Ramachandran is talking about come about by taking these rules of thumb, meant for emergency situations, and assuming that their conclusions are more reliable than the conclusions of deliberation and reason. This is as much of a mistake as taking the input from an optical sense designed to work in a 3D universe and applying it to a 2D picture.

This explains many of the mistakes that Bush has made in his tenure as President. Unfortunately, we are living under a President who is too stupid to even comprehend these facts – let alone allow them into his decision making. He simply cannot understand how or why a reasoned conclusion can contradict a gut feeling and still be right. So, unless reason actually has the power to change his gut feeling, he will not see reason to change his mind on a policy.

Yet, reason does not have this power.

We have seen, with optical illusions, that even when an agent knows that this is an illusion, he does not suddenly ‘perceive’ the object correctly. He still sees the illusion. The best that reason gives him the power to do is to say, “In these types of circumstances, I know that I cannot trust what I see. These are cases where I need to trust reason even more than I trust what is right before my eyes.”

Similarly, reason does not have the power to change one’s gut instinct. At best, it will show when gut instinct cannot be trusted. The gut will still be there demanding that the agent do X, while reason suggests not-X. In these circumstances, somebody like President Bush, who lacks the ability to grasp the distinction, will choose the wrong option, and the whole nation will suffer for it.

One thing that we could really use is leaders who not only understand these elements of human psychology, but who have the moral character not to exploit them for personal gain.

Thursday, May 24, 2007

Game Theory and Desire Utilitarianism

Announcement

I have discovered that somebody saw fit to write a philosophy stub on Wikipedia on Desire Utilitarianism. I am honored by whomever did this. My policy shall be not to touch the Wikipedia article (which, as I understand it, is also a Wikipedia rule), but to allow others to express their interpretation through the edits they make.

Main Article

Yesterday, I presented some problems with game theory as an account of morality. Specifically, I argued that game theory cannot handle issues of rare but extreme benefit, anonymous defection, or differences in power. Today, I want to explain how desire utilitarianism can handle the same issues.

Game Theory Review

Imagine that we are playing a game. Each turn, we must each pick a crystal ball (C) or a dark ball (D). If we both pick C, we both get 8 points. If we both pick D, we both get 2 points. If I pick C and you pick D, then I get 0 points and you get 10 points, and vice versa.

In any one round, if you pick C then I am better off picking D (10 points vs 8). If you pick D then I am still better off picking D (2 points vs 0). However, you are in the same situation, so you are also better off picking D, no matter what I choose. Yet if we both pick D, we get only 2 points each, whereas if we both pick C, we get 8 points each. If we are both going to pick the same thing, it would be better to both pick C than to both pick D. But how do we get each other to pick C?

Game theory suggests a way of solving this problem, at least in cases where we are going to play this game through multiple turns. One of the winning and stable strategies is called ‘tit-for-tat.’ This strategy says, “Pick C on the first turn. Then, for every subsequent turn, do what your opponent did on the previous turn. If he picked D, then you pick D; if he picked C, then you pick C. He should learn soon enough that he (and you) are better off picking C every turn.”
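For readers who like to see this played out, here is a minimal sketch in Python of an iterated version of the game, using the payoff numbers given above. The function names are my own invention for illustration, not anything from the game-theory literature:

```python
# Payoffs from the example: (C,C) -> 8 each, (D,D) -> 2 each,
# and a lone defector gets 10 while the cooperator gets 0.
PAYOFF = {('C', 'C'): (8, 8),
          ('C', 'D'): (0, 10),
          ('D', 'C'): (10, 0),
          ('D', 'D'): (2, 2)}

def tit_for_tat(my_history, their_history):
    """Cooperate on the first turn; thereafter copy the opponent's last move."""
    return 'C' if not their_history else their_history[-1]

def always_defect(my_history, their_history):
    return 'D'

def play(strategy_1, strategy_2, rounds=10):
    h1, h2, score_1, score_2 = [], [], 0, 0
    for _ in range(rounds):
        m1 = strategy_1(h1, h2)
        m2 = strategy_2(h2, h1)
        p1, p2 = PAYOFF[(m1, m2)]
        score_1 += p1
        score_2 += p2
        h1.append(m1)
        h2.append(m2)
    return score_1, score_2

# Two tit-for-tat players settle into mutual cooperation: 8 points per turn.
print(play(tit_for_tat, tit_for_tat))    # (80, 80)
# Against a constant defector, tit-for-tat loses only the first round,
# then matches D for the rest of the game.
print(play(tit_for_tat, always_defect))  # (18, 28)
```

Over ten rounds, mutual tit-for-tat earns each player 80 points, while the constant defector ends up far below that (28 points), which is why the strategy is stable in repeated play.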

Yesterday, I argued that the principle that sits at the root of game theory, “Do that which will get you the most points,” does not suggest a tit-for-tat strategy under several real-world assumptions. It suggests exploiting rare situations with an unusually high potential payoff. It also recommends anonymous defection where possible, and exploiting power relationships to force opponents to make moves that are not to their advantage.

Desire Utilitarianism: The Choice of Malleable Desires

Desire utilitarianism would suggest an alternative strategy: “Use social forces to add a malleable desire to choose C (a desire to cooperate), worth 3 points, to both players.” Now, if you pick C, I should pick C (11 points vs 10). If you pick D, then I should still pick C (3 points vs 2). The same is true for you. Under this moral impulse, we both get 11 points.
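This dominance claim is easy to verify mechanically. Here is a small sketch, assuming the same payoff numbers, that adds the 3-point ‘desire to cooperate’ bonus and checks that C now beats D for a player no matter what the opponent does:

```python
# Base payoffs from the game described earlier.
BASE = {('C', 'C'): (8, 8),
        ('C', 'D'): (0, 10),
        ('D', 'C'): (10, 0),
        ('D', 'D'): (2, 2)}
BONUS = 3  # value of the socially promoted desire to cooperate

def adjusted(m1, m2):
    """Payoffs after each player who chooses C gains the desire-fulfillment bonus."""
    p1, p2 = BASE[(m1, m2)]
    return (p1 + (BONUS if m1 == 'C' else 0),
            p2 + (BONUS if m2 == 'C' else 0))

# My payoff for C vs D, when you choose C:
print(adjusted('C', 'C')[0], adjusted('D', 'C')[0])  # 11 10
# My payoff for C vs D, when you choose D:
print(adjusted('C', 'D')[0], adjusted('D', 'D')[0])  # 3 2
```

In both cases C is the better choice, so the desire to cooperate makes cooperation dominant without any need for threats or monitoring.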

I need to explain the role of malleable desires in the formula above. Evolutionary biologists might want to investigate the possibility of a natural desire to cooperate that evolved due to its survival value. Their success or failure will have nothing directly to say about morality. They will only be able to study a genetic influence that mimics morality.

This is true in the same way that an ant, finding a dead moth and dragging it back to the colony, merely mimics altruism. It is no more an example of genuine altruism than an animal dropping manure on the grass is a case of altruism for providing a benefit to the grass. Morality itself requires an element of social and individual choice. Since we cannot choose our genes, we do not deserve either moral praise or moral condemnation for the result.

Choosing Desires

In our hypothetical game, you and I have an option to promote an institution that will use praise and condemnation to promote a universal desire to cooperate that gives cooperation 3 points of added value.

If we succeed, then I will have no need to worry that you will take advantage of an opportunity to engage in anonymous defection or abuse of power. Even if you have an opportunity to anonymously defect, to choose D under circumstances where I would not discover it, you will still not choose D, because C is what you want.

Nor will I need to fear what you would do if you were in a position of power. Certainly, that power would give you the opportunity to do whatever you want – to choose D without fear of retaliation. However, if you do not wish to choose D – if you should have a preference for C – then this possibility of choosing D is not a real threat.

Even the problem of rare but exceptional benefit can be mitigated by a desire for C, so long as the desire for C was higher than the value of whatever exceptional benefit the rare circumstance provided. It may still be the case that everyone has a price at which they can be bought, but with a sufficiently strong desire for C, that price may not be reachable in the real world.

We get a mirror set of effects for using social institutions to promote an aversion to D. Lowering the value of D by 3 points will also make it the case that potential anonymous defectors and potential power abusers will not want to choose D. This can be done by forming a direct aversion to D, or by associating the act of choosing D with negative emotions like guilt and shame.

Punishment and Reward

Another flaw with game theory as a model of morality is that it has a heavily distorted view of the role that punishment and reward play in society.

In game theory, I ‘punish’ you for picking D by picking D myself in the following turn. I will continue to pick D until you start to pick C again. As soon as I notice this, then I will return to picking C.

This is not a good model for reward and punishment. Reward and punishment are not decisions about what to do ‘the next time’ a similar event happens. They are decisions about what to do with respect to the current situation.

Specifically, before the next turn starts, I say to you, “Do not even think about picking D because, if you do, then I promise you that I will inflict three points of negative value on you.”

With this threat in mind, if I choose C, you are now still better off choosing C over D (8 points vs 7). If I choose D, then you are still better off choosing C over D (0 points vs -1). Either way, by means of a threat to do you harm in the current turn depending on your choice, I have coerced you into choosing C no matter what.

Reward is the mirror image of punishment, in the same way that virtue is the mirror image of vice in the section above. Instead of promising to inflict 3 units of harm on you if you should choose D, I can get the same effect by promising you 3 units of benefit if you should choose C. By offering you this reward, if I choose C, you are better off choosing C (11 points vs. 10). If I choose D then you are better off choosing C (3 points over 2).

The issue of reward seems to suggest a complication. Where am I going to get the 3 points? Am I to subtract it from the value of my payoff?

There is no reason to require this. Just as I can find ways to harm you that do not provide me with any direct benefit, there are ways in which I can benefit you without necessarily suffering a cost. Most people value praise – plaques, ribbons, cheers, and other rewards. We do not need to clutter the basic principles that I am trying to illustrate with such things. They can be saved for a future post.

Reward, Punishment, and Power

A doctrine of reward and punishment has some drawbacks that the doctrine of virtue and vice (promoting and inhibiting desires) does not have.

Reward and punishment does not solve the problem of anonymous defection. The anonymous defector escapes punishment. At best, the threat of punishment gives an incentive against choosing D where this can be known, even in the case that there is only one turn to be played.

Reward and punishment also do not solve the problem of unequal power. In the example that I gave above, I used the threat of punishment to coerce you into choosing C. Now, with your choice of C coerced, I still benefit from choosing D over C (10 points vs. 8).

The situation would be different if you and I were in a position of mutual coercion. If you could punish me for choosing D, just as I punish you for choosing D, we would both have an incentive to choose C, to our mutual benefit. However, as soon as one of us has power that the other does not have, then the one with power has the option of increasing his benefit by choosing D, significantly worsening the well being of the one without power.
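The difference between mutual and one-sided coercion can be sketched with the same payoff numbers used above (10/8/2/0, with a 3-point punishment for choosing D). This is my own illustration of the claim, not something from the original post:

```python
PAYOFF = {('C', 'C'): (8, 8), ('C', 'D'): (0, 10),
          ('D', 'C'): (10, 0), ('D', 'D'): (2, 2)}

def dominant_move(threatened, penalty=3):
    """The strictly dominant move for one player.  With no threat, D
    beats C against either reply (10 > 8, 2 > 0); under a 3-point
    threat against choosing D, C beats D (8 > 7, 0 > -1)."""
    def value(my_move, your_move):
        v = PAYOFF[(my_move, your_move)][0]
        return v - penalty if threatened and my_move == 'D' else v
    for mine, other in (('C', 'D'), ('D', 'C')):
        if all(value(mine, r) > value(other, r) for r in 'CD'):
            return mine

# Mutual coercion: both players face the threat, so both choose C.
assert (dominant_move(True), dominant_move(True)) == ('C', 'C')

# One-sided power: the threatened (subordinate) player is coerced into
# C, while the dominant player still profits from D (10 points vs 8).
assert (dominant_move(False), dominant_move(True)) == ('D', 'C')
```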

One conclusion that we can draw from this is the value of a system of checks and balances, where two or more decision makers have the power to hold others in check. As soon as too much power gets concentrated in the hands of a single decision maker, the situation becomes dire for those who lack power.

Conclusion

The main purpose of this post is to illustrate that, as a model of morality, choosing a strategy for winning a repeated prisoners’ dilemma in which the value of the outcomes is fixed does a poor job compared to choosing malleable desires before entering into a prisoners’ dilemma.

This is not to say that game theory has no value. The study of photosynthesis is not the same thing as the study of morality, but it is certainly a field worthy of investigation. Indeed, game theory can have important implications for ethics – more so than photosynthesis. It may help to determine which desires we have reason to promote and which we have reason to inhibit. Still, something that has implications for morality is not the same thing as morality.

Wednesday, May 23, 2007

Game Theory and Morals

This week, I am devoting my efforts to catching up on questions raised through email and comments. One issue that came up in the comments was the idea that “game theory” can tell us something about morality. Specifically, some people hold that the principles of morality can be understood in terms of the principles of a strategy that one would use in a particular type of game commonly known as a ‘prisoner’s dilemma.’

Game theory is a complex mathematical framework that, in some respects, is said to provide a meaningful account of the relationship between morality and rationality. Rationality says, “Always do what is in your own best interests.” Morality says (or is often interpreted as saying something like), “Do what is in the best interests of others.” Game theory suggests some interesting ways in which these two apparently conflicting goals can merge.

The most common way of presenting game theory is to use the idea of two prisoners – you and somebody else whom you do not know. You are told the following:

If you confess to being a spy and agree to testify against the other, and he does not, then we will imprison you for 1 year, and execute the other. If he agrees to testify against you, and you do not confess, then we will execute you and free him in a year. If both of you confess, you will both get 10 years in prison. If neither confesses, you will both be imprisoned for 3 years.

If the other prisoner confesses, you are better off confessing – it is the difference between execution and 10 years in prison. If he does not confess, you are still better off confessing – it is the difference between 1 year in prison and 3 years. However, he has the same options you do. If he reasons the same way, he will confess, and you are both doomed to 10 years in prison. If he refuses to confess, and you also refuse, you can get away with 3 years. Clearly, 3 years is better than 10 years. Yet, it requires that both of you refuse to confess, when neither of you (taken individually) has reason to do so.
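The reasoning above can be checked with a short sketch of my own (not part of the original post). Prison terms are modeled as costs, with execution treated as an effectively infinite cost:

```python
# A sketch of why confessing dominates, using the terms from the
# story above.  'T' = testify/confess, 'S' = stay silent.
EXECUTION = float('inf')   # model execution as an unbounded cost

# cost[(my_choice, his_choice)] = my cost in years of prison
COST = {('T', 'S'): 1,  ('S', 'T'): EXECUTION,
        ('T', 'T'): 10, ('S', 'S'): 3}

def best_choice(his_choice):
    """The choice that minimizes my cost, given his choice."""
    return min('TS', key=lambda mine: COST[(mine, his_choice)])

# Whatever the other prisoner does, I am better off confessing...
assert best_choice('T') == 'T'   # 10 years beats execution
assert best_choice('S') == 'T'   # 1 year beats 3 years

# ...so both confess and serve 10 years, though mutual silence
# would have cost each of them only 3.
assert COST[('T', 'T')] > COST[('S', 'S')]
```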

In trying to figure out how to handle this prisoner’s dilemma, some researchers made a game out of it. In this game, people submitted strategies to use in repeated prisoners’ dilemmas – cases where people were repeatedly thrown into these types of situations.

A particular strategy tended to be particularly stable – a strategy called ‘tit for tat’. The rules here were to cooperate on the first turn and, in each subsequent turn, do what your opponent did in the previous turn. Players quickly learn the benefits of cooperation, and cooperate accordingly.
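A minimal simulation may make the tit-for-tat rules concrete. This is my own sketch, not the researchers’ actual tournament code; the per-turn payoffs are the 10/8/2/0 values implied by the examples elsewhere in these posts, and `always_defect` is an invented opponent for comparison.

```python
# payoff[(move_a, move_b)] = (points for a, points for b)
PAYOFF = {('C', 'C'): (8, 8), ('C', 'D'): (0, 10),
          ('D', 'C'): (10, 0), ('D', 'D'): (2, 2)}

def tit_for_tat(opponent_history):
    """Cooperate first; then copy the opponent's previous move."""
    return opponent_history[-1] if opponent_history else 'C'

def always_defect(opponent_history):
    return 'D'

def play(strategy_a, strategy_b, turns):
    score_a = score_b = 0
    seen_by_a, seen_by_b = [], []   # each player's record of the other's moves
    for _ in range(turns):
        a, b = strategy_a(seen_by_a), strategy_b(seen_by_b)
        pa, pb = PAYOFF[(a, b)]
        score_a, score_b = score_a + pa, score_b + pb
        seen_by_a.append(b)
        seen_by_b.append(a)
    return score_a, score_b

# Two tit-for-tat players settle into steady cooperation (8 per turn)...
assert play(tit_for_tat, tit_for_tat, 10) == (80, 80)

# ...while relentless defection earns one temptation payoff and then
# mutual punishment: 10 + 9*2 = 28 against tit for tat's 0 + 9*2 = 18.
assert play(tit_for_tat, always_defect, 10) == (18, 28)
```

Note that mutual cooperation (80 points each) outscores what the defector earns against tit for tat, which is why the strategy proved so stable.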

Participants noticed certain similarities between these rules and a moral system – namely, the idea of ‘punishing’ somebody who ‘defects’ as a way of encouraging a system of mutual cooperation. Since then, researchers have thought that this holds the key to morality.

Of course, these iterated prisoners’ dilemma games do not have death as one of the payoffs – since that would terminate the game at the first defection. They make sure that the payoff for cooperating when the other defects is the worst outcome, but also insist that it is not fatal.

As I see it, the fact that they have to impose this arbitrary limit should be seen as a cause for concern. In fact, the arbitrary and unrealistic limits that game theorists have to put on their games are only one of the problems that I find with the theory.

Altering the Payoffs

First, game theory takes all of the payoffs as fixed. It does not even ask the question, “What should we do if we have the capacity to alter the payoff before we even enter into this type of situation?”

For example, what if, before you and I even enter into this type of situation, we are able to alter each other’s desires such that both of us would rather die than contribute to the death of another person? Now, when we find ourselves in this type of a situation, the possibility that I might contribute to your death is the worst possible option. I can best avoid that option by not confessing. The same is true of you. We both refuse to confess, and thus end up reaping the benefits of cooperation.

I am not talking about us making a promise not to confess if we should find ourselves in this type of situation. A promise, by itself, would not alter the results. However, if we back up the promise with an aversion to breaking promises – that I would rather die than break a promise that results in your death (and vice versa) – then this would avoid the problem.

Desire utilitarianism looks at the prisoners’ dilemma and says that, if a community is facing these types of confrontations on a regular basis, then the best thing they can do is to promote a desire for cooperation and an aversion to defection. This raises the value of the outcomes of cooperation – changing the payoffs – so that true prisoners’ dilemmas become more and more rare.

What about the pattern, which we find in the tit-for-tat strategy, of following up cooperation with cooperation and defection with defection?

Please note that reward and punishment are not the same as deciding whether or not to cooperate or defect the next time that a similar situation comes up. A reward is a special compensation for what happened last time – a punishment is a special penalty. We use reward and punishment as a way of promoting those desires that will make prisoners’ dilemmas less frequent.

Uneven Payoffs

Second, one of the assumptions used in these iterated prisoners’ dilemmas – these games where the tit-for-tat strategy turns out to be so effective – is that the payoff is always the same. However, in reality, the payoffs are not always the same. Some conflicts are more important than others. If we relax the rules of the game to capture this fact – if we vary the payoffs from one game to the next – I can easily come up with a strategy that will defeat tit-for-tat.

My strategy would be this: Play the tit-for-tat strategy, except when the payoff for defection is extraordinarily high; then defect. Using this strategy, I could sucker the tit-for-tat player into a habit of cooperation until the stakes are particularly high, then profit from a well-timed defection. The tit-for-tat strategist will then defect on the next turn. We will then enter into a pattern of oscillating defections. However, if the payoff on the important turn was high enough, then my gains would exceed all future losses.
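The exploit can be sketched under stated assumptions: the same 10/8/2/0 per-turn payoffs used in these posts, scaled by each turn’s stakes. The `opportunist` strategy, the 50-point threshold, and the 100x-stakes turn are all my own invented illustration.

```python
PAYOFF = {('C', 'C'): (8, 8), ('C', 'D'): (0, 10),
          ('D', 'C'): (10, 0), ('D', 'D'): (2, 2)}

def tit_for_tat(opponent_history, stakes):
    return opponent_history[-1] if opponent_history else 'C'

def opportunist(opponent_history, stakes, threshold=50):
    """Tit for tat, except defect whenever this turn's stakes are huge."""
    if stakes >= threshold:
        return 'D'
    return opponent_history[-1] if opponent_history else 'C'

def play(strategy_a, strategy_b, stakes_per_turn):
    score_a = score_b = 0
    seen_by_a, seen_by_b = [], []
    for stakes in stakes_per_turn:
        a = strategy_a(seen_by_a, stakes)
        b = strategy_b(seen_by_b, stakes)
        pa, pb = PAYOFF[(a, b)]
        score_a += pa * stakes
        score_b += pb * stakes
        seen_by_a.append(b)
        seen_by_b.append(a)
    return score_a, score_b

# Nine ordinary turns, one turn with 100x stakes, then ten more turns
# of oscillating defections.
stakes = [1] * 9 + [100] + [1] * 10
opp, tft = play(opportunist, tit_for_tat, stakes)

# The single well-timed defection (10 * 100 = 1000 points) dwarfs
# everything that follows, so the opportunist comes out ahead.
assert opp > tft
```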

My strategy would be particularly useful if, at the time of the big payoff, I arrange to kill off my tit-for-tat opponent so that the game ends on that turn. As I said, game theorists do not allow this option. Yet, in reality, this option is often available.

Anonymous Defection

Third, game theory does not consider the possibility of anonymous defection. The cost of defection in game theory comes from the fact that, if I defect, my opponent always finds out about it. My opponent then defects against me on the next turn. However, let us assume (as is often the case) that I can defect without anybody finding out about it. Say I have found a wallet and can take the money undetected. How does game theory handle this type of situation?

Game theory would seem to suggest that I take the money and run. In fact, it says that I should commit any crime where the chance of getting away with it, and the payoff, make it worth the risk. It is not just that this would be the wise thing for me to do. It would be the moral thing for me to do. After all, the game theorist is telling us that what game theory says is wise, and what is moral, are the same thing.

This means that anonymous defection is perfectly moral.

Power Relationships

Fourth, game theory presumes that the participants have approximately equal power – that one cannot coerce the choices of the other. Let’s introduce a difference in power, such that Player 1 can say to Player 2, “You had better make the cooperative choice every turn or I will force you to suffer the consequences.” The subordinate player lacks the ability to make the same threat.

When this happens, we are no longer in a prisoner’s dilemma. We are in a situation where the subordinate player is truly better off choosing the cooperative option on each turn, and the dominant player is better off choosing the defect option. The problem with game theory – or, more precisely, with the claim that game theory can give us some sort of morality – is that it says that, under these circumstances, the dominant player would have an obligation to exploit the subordinate player if it is profitable to do so.

Conclusion

Ultimately, game theory will have something important to say about morality. Game theory provides formulae for maximizing desire fulfillment in certain types of circumstances. As such, it will have implications for what it is good for us to desire.

However, it is one input among many. The idea that morality is nothing more than the rules of game theory has no merit.

Game theory uses the fundamental assumption that if an agent can actually get ahead by doing great harm to other people, then it is right and perhaps even morally obligatory for him to do so. Some game theorists seem to suggest that such a situation is not possible. Even if that is true, it is still the case that game theory says, in principle, that if you should find yourself in such a situation, then by all means inflict as much harm as necessary to collect that reward.

This, alone, gives us an irreparable split between morality and game theory.

Unfortunately, as the paragraphs above point out, the assumptions behind game-theory morality not only say that a person has a moral right, or even a duty, to do great harm to others when it benefits him to do so; they also allow several likely scenarios that fit this description – scenarios where unusually great benefit, anonymity, or inequality in power can allow an agent to benefit in spite of, and perhaps because of, the harm he does to others.

Whatever morality happens to be, it is not going to be found in game theory.

Tuesday, May 22, 2007

So, you want to be a desire utilitarian

“Not really,” is the response that I imagine.

Okay, pretend that you are somebody who wants to be a desire utilitarian. A natural follow-up question seems to be, “Okay, I’m ready. What do I do next?”

This seems to fit into the theme of this week. A lot of people seem to be asking – in email and in comments, “What do I do?”

Well, here’s what you do. You pick up the tools of praise and condemnation. You apply praise to those who exhibit malleable desires that tend to fulfill the desires of others. You apply condemnation to those whose malleable desires tend to thwart the desires of others. In this way, you nudge people into having more and stronger malleable desires that tend to fulfill other desires, and nudge people into having fewer and weaker desires that tend to thwart other desires. In doing so, focus particularly on the young.

One example of people who fit into the category of people with desires that tend to fulfill other desires is the scientist – particularly those who are working to cure or treat disease, or to forewarn us of and mitigate the effects of natural disasters.

One example of people whose desires tend to thwart the desires of others is the intellectually reckless – those who are careless with subjects that can cost people their lives, health, and well-being. Obviously, these people do not care enough about the welfare of others to take the task of considering these issues seriously. If they did, they would show some contempt for those who use weak arguments to support conclusions potentially harmful to others.

In the former case, it is not enough to say, “I agree with your hypothesis.” Devoting one’s life to defending us from these harms warrants more than agreement; it warrants praise and honor. We certainly have reason to try to persuade more people to adopt the same lifestyle.

In the latter case, it is not enough to say, “Your arguments are weak.” It is more important to add, “How dare you treat this subject so recklessly that you offer this sorry excuse for reasoning when lives and health are at stake! The world would be a far better place with fewer people like you in it, in the same way that it would be a better place without drunk drivers. Please show the moral decency to deal with the subject responsibly.”

Being a Desire Utilitarian

We are in a habit of treating moral theories like religions. Each moral theory is seen as a sect, looking for followers to sign on and become members. One is a Kantian, or a Marxist, or an (Ayn Rand) Objectivist, or a Utilitarian. This is not much different than being a Christian or Hindu or a Zoroastrian. Each of them, in fact, even tends to have its own sacred text.

One thing that I do not want is to be thought of as providing a sacred text. I am offering a theory of value – a theory that, even if it is the best theory around today, will be replaced by a better theory some day.

[Note: I fear that it sounds a bit arrogant to suggest that desire utilitarianism is the best theory around today. However, it would sound even stranger for me to say that I am defending desire utilitarianism, while holding that it is inferior to some other theory that I have decided not to defend. Of course I think that the propositions that I defend are true and that propositions that contradict them are false. If I did not have those beliefs, then I should not be defending them.]

Anyway, nobody has to actually join a desire utilitarian club.

The Fundamentals

Here you are, a person with desires. You have a desire that P, a desire that Q, a desire that R (and others). As such, you will seek to act to create states of affairs in which P, Q, and R are true, given your beliefs. However, false beliefs can thwart your efforts to make P, Q, and R actually true.

If you can’t make all of these propositions true, then you will seek to fulfill the more and stronger of your desires. If you want P and Q more than you want R, you will choose actions that (given your beliefs) will lead to states of affairs in which P and Q are true, while foregoing (regretfully) R. If you value R more than P and Q, then you will choose actions that (given your beliefs) will lead to states of affairs in which R is true, forsaking P and Q.
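The weighing described above can be given a toy formalization. This is my own sketch, not the author’s; the proposition names and strength numbers are illustrative assumptions:

```python
def choose_action(desires, actions):
    """desires: {proposition: strength}.
    actions: {action_name: set of propositions that action makes true}.
    Returns the action whose fulfilled desires sum to the most strength."""
    def fulfillment(action):
        return sum(desires.get(p, 0) for p in actions[action])
    return max(actions, key=fulfillment)

# Illustrative strengths: P and Q together outweigh R...
desires = {'P': 5, 'Q': 4, 'R': 7}
actions = {'make_P_and_Q': {'P', 'Q'}, 'make_R': {'R'}}
assert choose_action(desires, actions) == 'make_P_and_Q'   # 9 beats 7

# ...but if the desire that R grows strong enough, the choice flips.
desires['R'] = 10
assert choose_action(desires, actions) == 'make_R'
```

This also shows why molding desires matters: change the strengths, and you change which action the agent will perform.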

I suspect that it would come as no surprise if I should tell you that you live in a universe in which there are other people, and that they also have desires. As it turns out, though some of those desires are fixed by nature, you have the power to mold and modify those desires to some extent. Specifically, you can influence their desires with the judicious use of such tools as praise and condemnation.

You desire P, Q, and R – which means that you have reason to act so as to bring about states of affairs where P, Q, and R are true. By molding the desires of others, you can cause them to act in ways that will be more likely to make P, Q, and R true. You can do this either by causing them to desire P, Q, and R, or by causing them to desire things that will bring about P, Q, and R as a side effect, or by inhibiting desires that will interfere with your attempts to realize P, Q, and R.

At the same time, they also have desires, and your desires are as malleable as theirs. If your desire that P is malleable and tends to thwart the desires of others, others have reason to use the tools of condemnation to inhibit the desire that P in you and others. If, at the same time, a desire that S is malleable and tends to fulfill the desires of others, then they have reason to offer praise and reward to encourage in you the growth of a desire that S.

Of course, if a desire that P tends to thwart other desires, then you also have reasons to inhibit in them a desire that P. And if a desire that S tends to fulfill other desires, then you also have a reason to promote in them a desire that S.

These desires that all of you have reason to inhibit in others we can call ‘vices’. These desires that all of you have reason to promote in others we can call ‘virtues’.

Regardless of what club you belong to, you have reason to promote those desires that tend to fulfill other desires, and reason to inhibit those desires that tend to thwart other desires. You will still seek to act so as to fulfill the more and stronger of your desires, and be confounded in your attempts to do so by false beliefs – giving you reason to promote an aversion to false beliefs and to the intellectually reckless and dishonest acts that promote them.

Which Beliefs? Which Desires?

Now, this still gives us only a vague answer to questions such as, “Which beliefs are true? Which desires do we have the more and stronger reasons to promote, and how do we promote them? Which desires do we have the more and stronger reasons to inhibit, and how do we inhibit them?”

Even if two people agree fully that desire utilitarianism is the best theory of value, they can still disagree on the answers to these questions.

I have attempted to argue for some desires we have reason to promote or to inhibit. These include:

An aversion to intellectual recklessness. A person should feel (and be made to feel) humiliated to be caught engaging in intellectually reckless behavior. Of course, all of us will make mistakes from time to time. I have made some in this blog. However, a mistake should still be a source of embarrassment, and should prompt a redoubling of efforts to be less careless in the future. This particular virtue should be made a key criterion for public office. One thing we do not need is policy makers who are intellectually reckless themselves or who embrace (or lack the ability to detect) intellectual recklessness in others.

An aversion to the use of violence (legal penalties or private violence) as a response to words, or as a response to a political campaign executed in an open society.

A desire to reunite lost property with its owners (proportional in strength to the value of the property – let us not get carried away with returning a penny found in a parking lot to its rightful owner).

A desire to understand those forces of nature that threaten to thwart desires on a wide scale, and to take action to either prevent them or defend ourselves from the harm they can potentially do. This includes entities such as climate change, viruses, asteroid impacts, and tsunamis.

We have some moral questions that are difficult to answer, where reasonable people can disagree. However, we also have a foolish tendency to ignore easily demonstrable wrongs and the harms that they cause. In these cases, I think it is time to do a far better job of picking up the tools of praise and condemnation and putting them to work.

Monday, May 21, 2007

Morality at the Moment of Decision

As part of my project to catch up on answering questions from the studio audience, I have a follow-up question to address.

Your response to question 4 does not seem to address the problem of making "real time decisions". How can we use desire utilitarianism as we go about our everyday lives, making decisions mostly without thinking about them? For example, if we are walking through the train station, and a woman asks us for $3 to buy a ticket, how can desire utilitarianism help us to make an on-the-spot decision, when there is no time to think about it?

Okay, I can see that. My defense of desire utilitarianism that, “No other system does a better job,” does leave open the question, “Better than what?”

The Moment of Action

At the moment of action, a person will perform that act that fulfills the more and stronger of his desires, given his beliefs. If there literally is no time to think about it, then there is no sense in asking what answer an appeal to desire utilitarianism will generate. I must assume that we have at least a little time to think about it.

Let us say, somebody has 1 minute to ask himself, “What morally-should I do?”

What this means is that the agent will act in about 60 seconds. At that time, he will do the act that best fulfills his desires, given his beliefs. One of his desires is a desire to “do the right thing.” Finally, he is a desire utilitarian, and desire utilitarianism defines a ‘right act’ as ‘the act that a person with good desires would perform’.

Establishing Context

This decision is going to take place in a particular context, and the context will be important. So, I need to spend some time establishing the background conditions before actually looking at the problem.

In most cases of moral decision making, we do not think about it. A person sees an expensive camera sitting on a wall at a zoo with no discernible owner in sight; she picks it up, and she takes it to Lost and Found. The thought of taking the camera does not occur to her, or it occurs to her only as something that some other person – some moral degenerate – might do.

[Note: I will return to the case of giving the $3.00 to somebody asking for money for a ticket. That case is more complex. Before adding complexities, I want to illustrate some important elements with an easier case.]

In fact, most of our moral behavior is carried out without a thought to doing the alternative. We tell the truth without giving a thought to lying. We pay our debts without giving a thought to defaulting on them. We keep our promises without thinking about breaking them.

Because we always act to fulfill the more and stronger of our desires, given our beliefs, this is best explained by such things as an aversion to taking what belongs to others, an aversion to deceiving others, a desire to pay off one’s debts, a desire to keep promises, and so on. With such desires and aversions, we are no more likely to select the wrong action than we are to select food we do not like at a buffet.

We all have reasons to promote in others a desire to reunite people with their lost property because, the more and stronger this desire, the better the odds that each of us will be reunited with our own lost property. We also have many strong reasons to promote an aversion to deception, a desire to pay off one’s debts, and a desire to keep promises – precisely because it makes others more prone to behave in these desire-fulfilling ways.

Here, I would like to note that taking the expensive camera to Lost and Found requires more than an aversion to taking property that belongs to somebody else. This aversion would inhibit the agent from taking the camera. However, it will not cause the agent to go to the effort of taking the camera to Lost and Found. This requires an actual desire to help reunite people with their (valuable) property.

Also, I want to eliminate the option that the agent is motivated by a desire to take lost property to Lost and Found. It is a desire to reunite owners with their property. Assume that, the next day, the agent finds a wallet on the sidewalk. She looks through the wallet, finds a driver’s license with an address that is just up the street. Delivering the wallet to the owner would, in this case, best fulfill a desire to reunite people with their lost property. It would not fulfill a desire to take lost property to a Lost and Found.

These are just some quick notes about what desires are and how they work in situations such as this. Now, let us apply them to the case in question.

The Question to Ask

So, now, somebody is asking for $3.00 for a train ticket. “Should I give this person the money?” I am going to act within the next 60 seconds. What should I do?

Question: Do people generally have more and stronger reasons to praise or condemn those who would give the money?

Note: I am not asking, “Will people generally praise or condemn such a person?” I am asking whether they have reason to. It may be the case (e.g., homosexuality) that people will condemn others when the beliefs behind their condemnation are false, or the desires that motivate the condemnation are bad desires. Sociologists study the question, “Will people condemn . . . .” Ethicists are concerned with the question, “Do people have good reason (true beliefs, good desires) to condemn?”

The answer appears to be ‘Yes’ at first glance. Any person is at risk of finding herself in a situation where a small amount of aid from others could produce a huge benefit. We have reason to have others ready to provide that assistance when we need it – as they have reason to make us ready to provide that assistance.

Note the similarities here between desire utilitarianism’s, “Act on those desires that you can will to be universal desires”, compared to Kant’s categorical imperative, “Act on that maxim that you can will to be a universal law.” Only, desire utilitarianism does not have the messy metaphysics of a Kantian categorical imperative.

However, once we answer, “Yes,” then, the parasites come out. These are people who lie to us, claiming to need money for a ticket that they have no intention to buy. (Note: I am using the term ‘parasite’ specifically for people who misrepresent their situation in order to exploit this disposition to help others.) If we reward deception with cash, then we will simply reward (and, thus, foster) deception.

We have many and strong reasons not to reward dishonesty and similar practices that take advantage of the very disposition to help that we are trying to promote.

Now, our agent faces two conflicting desires – a desire to help somebody in crisis, and an aversion to promoting deception. These desires would motivate the agent to look the situation over carefully. Does this person asking for money appear to be somebody who is in genuine need of a train ticket, or is she being deceptive and taking advantage of my good nature?

In other words, desire utilitarianism says that a good person would feel some anxiety at this point, caused by the fact that two important values are in conflict.

By the way, this is significantly different from the outcome that the ‘moral claims are arbitrary’ thesis would generate. If the decision is truly arbitrary, then it does not matter which option one picks – both are equally good or bad. Anxiety comes from the fact that it does matter, and our good agent wants to get the answer right.

If this is a person in crisis, he should give her the money. If she is a deceiver, he should not reward her. He then looks for signs that will tell him which option is the most likely.

If the person asking for $3.00 has a business suit on, talks about an important meeting downtown, and claims that she lost her wallet, we have reason to infer that this is a person in genuine crisis, and to offer assistance (if it comes at a small cost).

If the person asking for $3.00 has dirty clothes on and has not washed in a while, we have reason to question whether she is in an immediate crisis. Here, we have reason to direct this person to some institution built to care for the homeless and destitute, where a number of her needs can be cared for at once. They can make an assessment of her need for a train ticket.

I am not talking about being ungenerous here. I would argue that these institutions should be well funded and competently staffed. I am only saying that the aid is best provided through such institutions, rather than through handouts on the street.

In some cases, it is going to be difficult to tell. These are the cases that, in a good person, would cause the most anxiety – causing him to worry, even for a considerable amount of time after he acted, whether he did the right thing or not.

So, this is what the agent should do. If the likely legitimate benefit to the recipient is high, the likely cost is low, and the likelihood of deception is small, then one should give the money. If, on the other hand, the likely benefit is small, the cost high (you need that $3.00 to get to work), or the likelihood of deception is high, then you should not give the money. There is no easy way to determine whether these factors apply.
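The weighing of these factors can be given a rough expected-value formalization. This is my own sketch of the reasoning above, not the author’s; every number here is an illustrative assumption, including the invented `harm_of_rewarding_deception` term:

```python
def should_give(benefit_if_genuine, cost_to_me,
                p_deception, harm_of_rewarding_deception):
    """Give when the expected benefit of a genuine request outweighs
    the cost plus the expected harm of encouraging deception."""
    expected_benefit = (1 - p_deception) * benefit_if_genuine
    expected_harm = cost_to_me + p_deception * harm_of_rewarding_deception
    return expected_benefit > expected_harm

# Business suit, lost wallet, urgent meeting: deception seems unlikely,
# and the benefit of the ticket is large relative to $3.
assert should_give(benefit_if_genuine=50, cost_to_me=3,
                   p_deception=0.1, harm_of_rewarding_deception=20)

# You need that $3 to get to work and deception seems likely:
# the balance tips the other way.
assert not should_give(benefit_if_genuine=50, cost_to_me=3,
                       p_deception=0.7, harm_of_rewarding_deception=20)
```

The hard part, of course, is estimating these quantities in 60 seconds – which is exactly why the hard cases produce anxiety.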

Easy Answers

This is not an ‘easy answer’ in many cases. However, where is the rule that says that there must be a clear and easy moral answer to all situations? As I wrote yesterday, I consider it a mark in favor of this theory that it can account for, explain, and predict the fact of difficult moral choices.

Sometimes we have to make an important choice in a situation where we have limited information. This does not imply that the choice is arbitrary. An arbitrary choice implies no difficulty in weighing different options, because one cannot get the answer wrong. A difficult choice means that it will take effort to determine the right option, and one still has a chance of getting it wrong.

Investments provide a good case study for difficult choices under limited information where the right answer is not arbitrary. An arbitrary choice would be like choosing between two certificates of deposit, both paying identical interest rates over identical time periods. A difficult choice under limited information concerns which mutual fund one should invest a whole company’s pension money in.

The only thing you can do is the best that you can do. However, different options really are better, or worse, than others, and there really is a best you can do.