On the issue of moral responsibility, I have argued that certain acts are contemptible if done in public, but not if done in private. The difference is that the public act puts others at risk, while the private act does not.
I used as my example a drunk driver, who can be condemned for driving drunk on the public roads, but not for driving drunk on his own (otherwise unoccupied) ranch.
I compared this to examples of intellectual irresponsibility. Irrationality is to be condemned when we are considering beliefs that put others at risk of harm (e.g., beliefs that govern which laws should be enacted), but not when the beliefs are harmless, such as a generic and vague belief that some god exists.
In response to this, a member of the studio audience wrote:
I've often seen you write that Desire Utilitarianism concerns itself with the evaluation of desires instead of acts. With this in mind, shouldn't we condemn the drunk driver even if he were doing relatively little harm in one specific case because the desire to get in a car while drunk is a generally harm-causing (and thus desire-thwarting) desire?
This is true, if our goal is to create an aversion to drunk driving.
However, there is reason to believe that this should not be our goal - that our goal should be something more general, of which 'drunk driving' would be merely one application.
What we should be promoting is an aversion to doing that which puts others at risk of harm.
The reason this is a better objective is that it applies to a wider range of actions - not just drunk driving. An aversion to drunk driving, for example, will not motivate an agent to better secure a load he will be carrying on the road in his pickup, will not prevent him from carelessly waving around a loaded gun, and will not motivate him to make sure that the food he prepares is clean and healthy.
On the other hand, a more general aversion to being responsible for harm to others will motivate an agent to take care in all of these different circumstances - and in a near infinite list of others, including some we do not even have the ability to recognize and train against until they spring up.
The issue is that we do not want our morality to aim at promoting desires and aversions that are too specific. We do not want to create an aversion to taking property at gunpoint from a convenience store in Pittsburgh between the hours of 11:00 and 12:00 PM on a Thursday, and another aversion to robbing convenience stores in Pittsburgh on a Friday, and so forth. That would be far too much work and, ultimately, counter-productive.
Promoting a more general aversion to taking the property of another by means of stealth or threat of violence would be much more effective and efficient than a long list of more specific aversions. A near infinite set of possible times, weapons, and potential victims could be covered in this way.
If our potential drunk driver had an aversion to actions that put others at risk of harm, then that would motivate him to decide against driving drunk where he is at risk of doing harm to others. However, it would not give him a reason not to drive drunk on his own (otherwise vacant) property.
In addition, it would give an agent a reason to make sure that his beliefs are well grounded when he is making a decision that impacts the lives of others, but would not give him a reason to examine those beliefs that have no implications that put others at risk.
A person who does not take care to ensure that his beliefs are well grounded - the person who rants and wails, spewing falsehood after fallacy like a Glenn Beck or an Ann Coulter - can then be roundly condemned for his insufficient regard for the well-being of others (as well as his insufficient regard for the truth).
Now, one can respond, "Isn't an aversion to irrationality generic enough? It is not unreasonable to think that a generic aversion to irrationality, combined with an aversion to being responsible for harm to others, would fulfill the generality criterion and yet still give us grounds to condemn anybody who holds an irrational belief."
The problem with this option is that we are not capable of perfect rationality. We would be condemning people for something they could not hope to avoid.
We do not have time to hold every belief we have up to the light of reason. We often have to use rules of thumb and other heuristics to arrive at beliefs quickly. These answers may not always be right. However, where being right comes at huge expense, it is rational to prefer a method that takes fewer resources and is less reliable, but is still good enough and fast enough to get the job done most of the time.
Besides, when we test the rationality of a belief, we have no method available but to hold it up against other beliefs to see how well it fits. Unfortunately, we then have to ask whether those other beliefs have themselves been evaluated and found to be justified. The only way to test them is to hold them up to the light of reason, which means seeing how they relate to the belief being tested.
If you discover in your belief set a belief that A, and a belief that not-A, this tells you that your beliefs are incoherent. However, it does not tell you which is right - A or not-A.
We acquire our first beliefs through methods other than holding them up to the light of reason. We get them by means of authority (who may or may not know what they are talking about). Many of these beliefs are false. However, we cannot know that they are false unless we compare them to others. This will tell us that our beliefs conflict, but it will not tell us which conflicting belief to be rid of. Fallible (less than perfectly rational) rules of thumb are not only responsible for many of those beliefs in the first place, but also for determining which ones to be rid of.
Given that we cannot be perfectly rational, it is irrational to assert that we should be perfectly rational. We need to allow that some irrationality is morally permissible - if for no other reason than that it is physically necessary. 'Ought' implies 'can', which means that 'cannot' implies 'it is not the case that one ought'.
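That last inference is simply the contrapositive of the "ought implies can" principle. As an illustrative formalization (the symbols O for "one ought to do" and C for "one can do" are my own shorthand here, not notation from any particular deontic logic):

```latex
\[
\bigl(O(a) \rightarrow C(a)\bigr)
\;\;\Longrightarrow\;\;
\bigl(\neg C(a) \rightarrow \neg O(a)\bigr)
\]
% If "ought" implies "can" for any act a, then by modus tollens,
% where an act cannot be done, it is not the case that it ought to be done.
```

Since perfect rationality is not something we can achieve, it follows that it is not something we are obligated to achieve.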
One of the ways to make our obligations concerning rationality conform to what is possible is to simply tell people, "Don't worry about beliefs that don't affect others. It is more important that you concern yourself with beliefs that have a potential to put others at risk of harm."
This would include beliefs about abortion, capital punishment, global warming, the effects of health care legislation and tax cuts, and the like. These are areas where we have reason to condemn people - and condemn them soundly - when they demonstrate intellectual recklessness.
For these two reasons - the value of promoting general desires with a wide range of applicability, and the impossibility of perfect rationality - it makes no sense to condemn people for a failure to be perfectly rational.
Yet, it is still the case that, when we are considering propositions that are relevant to the quality of people's lives, it does make sense to demand that people hold those beliefs up to the light of reason, and to condemn those who are intellectually reckless as we condemn the drunk driver and those who show similar disregard for the interests of others.