Today’s post has to do with moral theory. As such, it is a bit longer than most.
In this post, I am going to discuss:
• The permissibility of killing an innocent child
• Preferential treatment (e.g., favoring one’s own children)
• Acts and Omissions
• Moral Weight vs. Moral Exceptions
• Moral Dilemmas
In an earlier post on “Killing/Capturing Terrorists”, I wrote that no person may kill a neighbor's child even to save his own life or the lives of his entire family. For example, if his only method for preventing their deaths is to drain a neighbor's child of blood to extract an enzyme that may save his family, he may not do so.
Yet, I have also argued in discussions that it is permissible to kill an innocent child to prevent a nuclear bomb from going off in a distant city. For this, the scenario I use most often is that of a man with a shotgun seeing a child using a vending machine wired to detonate the distant bomb. Assuming that it is too noisy for the child to hear his shouts, he may shoot the child.
A reader, Eneasz, asked, in effect, where this dividing line is. Is it permissible to kill the child to save 1,000 people? What about 1,001?
I conclude that the correct answer is that you may kill an innocent child to save 8,983.6 other lives, provided that at least 0.3 percent of them are physicians. If any physician has a theory on how to cure cancer, he will count for 1,253.3 common individuals. Convicted felons count as 0.03 persons.
Okay, clearly this answer is absurd. More importantly, any moral theory that claims to give answers of this type would thereby reduce itself to an absurdity. Morality simply does not work this way.
Basic Desire Utilitarianism
Desire utilitarianism holds that we use praise, condemnation, reward, and punishment to promote good desires and suppress bad desires. "Good desires" and "bad desires" are, in turn, desires that tend to fulfill or to thwart other desires. The common example I use is that true beliefs are important to desire fulfillment, so an aversion to deception and intellectual recklessness is a "good desire."
On this theory, a good act is the act that a person with good desires would perform. A vicious (evil) act is an act that a person with good desires would not perform. We assess the act of killing the child on whether it evidences good desires (desires that we have reason to promote using praise, condemnation, reward, and punishment). So, we praise, condemn, reward, or punish the person who kills the child according to whether we seek to promote or discourage the desires evidenced by his action.
Note that a right action does not actually have to be caused by good desires. It simply needs to be that act that a person with good desires would perform. A person can report a child rapist to the authorities, not because he cares anything about children, but because the child rapist is a business competitor and he wants to rid himself of the competition. Yet, reporting the child rapist is still the right act because it is the act that a person with good desires (an interest in the welfare of children) would perform.
Killing an Innocent Child
So, the moral question is: When will a person with good desires kill an innocent child?
One factor that we must consider in answering this question is: How frequently will this come into play? We have more reason to psychologically mold people to perform the best actions in every-day circumstances than to prepare them for exotic scenarios that are never going to happen. We have to consider the question, "How many times will an average person face a situation in which he must kill a child in order to prevent the detonation of a distant nuclear bomb?" If this is expected never to happen, then we do not need to devote much energy to molding people's desires for those types of situations.
How often can a person save the life of somebody in his family by killing a neighbor's child? If we count the possibility of acquiring useful organs, beliefs (though false, at one time widely held) that a human sacrifice could please the gods, using the neighbor's child as an innocent shield against a would-be attacker, or using him as a guinea pig for medical experiments, the question of when one may kill a neighbor's child is, indeed, an everyday question.
Answering this everyday question brings up the issue of, "If you may kill my child to save the life of somebody in your family, then I may kill your child to save the life of somebody in my family. And if I may do so, then I may kill you or your child to prevent you from using the death of my child to save somebody in your family." We end up with a situation with a great deal of desire-thwarting going on.
The rational option is to adopt the position, "We are going to use our tools of praise, condemnation, reward, and punishment to promote an aversion to killing children. We shall seek to make this aversion strong enough to prevent people from killing their neighbor's children in all every-day situations in which it might be useful to do so. Instead, we will create a society of individuals who can accept the fact that the fates themselves get to decide who lives and who dies."
This aversion to killing children will not affect an agent's desires when it comes to saving children in trouble. Even as we use praise, condemnation, reward, and punishment to promote an aversion to killing other people's children, we allow parents to have a special affection for their own children. This means that a parent, faced with a choice between rescuing his own child and rescuing his neighbor's, can still have a stronger desire to rescue his own child, and may favor his own children in these circumstances.
Preferential Treatment
This brings up a related issue. Can desire utilitarianism defend the idea that parents can give preferential treatment to their own children?
There are two avenues that we take here.
First, desire utilitarianism is only concerned with malleable desires -- desires that can be molded through praise, condemnation, reward, and punishment. It makes no sense to call for using these tools where they have no effect. Thus, desire utilitarianism has a built-in place for the principle that 'ought' implies 'can' and 'cannot' implies 'it is not the case that one ought'. If parental affection is a natural (evolved) disposition that is not subject to change, then those who display it cannot be subject to moral condemnation.
Second, assigning 'favorites' is a recognized way of promoting the general welfare. A company will assign a vice president to each of several regions. For example, there might be a Vice President for the Pacific Northwest. Each vice president is expected to favor his own region. The Vice President for the Pacific Northwest will not be expected to sacrifice $5 in sales there to bring about a $10 increase in some other region. He is expected to maximize sales in his own region. He may be prohibited from harming sales in other regions, but he need not have any special affection for promoting them.
The model for assigning vice presidents to regions can be used to justify assigning specific children to specific adults. "Your job is to take care of this child. Focus your attention on his welfare. Other adults will focus on the welfare of other children." So, we use our tools of praise, condemnation, reward, and punishment to promote this type of favoritism -- this type of special consideration for those things that become the responsibility of the person to whom they are assigned.
A blend of these two considerations is probably closer to the truth. We combine a natural affection that parents have for their own children with the moral benefit of assigning distinct responsibilities to distinct individuals.
On this basis, a President can show a preference for the welfare of the people in his own country. However, this preference comes with moral limits on what he may do to the people of other countries in executing this responsibility -- just as a parent's responsibility for the welfare of his children comes with limits on what he may do to the children of other parents.
Moral Weight and Moral Exceptions
Another set of moral concepts that we will have to take into consideration is the distinction between moral weight and moral exceptions.
An example of moral weight can be found in the case of a father who is out fishing with his child when the kid gets stung by a bee. Imagine that the kid has an allergic reaction and the father's car will not start. There is another car nearby, and the owner has left his keys in it. So the father takes this car to get his kid to the hospital. The father's duty to look after the welfare of his son outweighs the wrong of taking the car.
An example of a moral exception can be found in the aversion to killing others. In fact, we do not promote a blanket aversion to killing others. Rather, we promote a moral prohibition against killing others except when the person killed is actually threatening significant harm. We are not, in this case, weighing the attacker's right to life against the victim's right to life. Rather, we say that the attacker has sacrificed or given up his own right to life, and may therefore be killed with moral impunity.
In matters of moral weight, the agent is expected to feel regret and remorse over his actions. He is expected to have an attitude comparable to, "I'm sorry, but there was nothing else I could do. If I had not taken your car to get my sick kid to the hospital, he would be dead." However, in the case of moral exception, no apology or residual regret is expected. The person who kills an attacker owes nobody an apology.
To the desire utilitarian, a case of moral weight is one in which two "good" desires come into conflict. A father is expected to have a desire to take care of his child. He is also expected to have an aversion to taking the property of another. In the scenario above, the two desires come into conflict. Desire utilitarianism states that the aversion to taking another's property should be weaker than the desire to save the life of one's child. So, the father may take the car. However, the outweighed aversion still exists -- still should exist -- and it leaves an emotional residue. This aversion is the source of the regret and remorse that lingers as a result of his actions.
The case of an exception recognizes the fact that desires can be complex. Desires can be as complex as the propositions that make up their object. As a result, it is possible to promote a desire that, "I not kill anybody who is not aggressively threatening others with immediate harm." This type of desire allows a person to kill in defense of self and others without the slightest twinge of regret or remorse.
Once again, I remind the reader that it makes little sense to design morality for strange and exotic situations. We have enough to do in using praise, condemnation, reward, and punishment to create the desires and aversions that will serve people in average, every-day circumstances. These every-day desires might have strange implications when an agent finds himself in a highly improbable situation. That is simply a fact of life.
Moral Dilemmas
So, let us go back to the case of killing the innocent child to save others.
The principles that I identified above state that we are going to use praise, condemnation, reward, and punishment to promote desires and aversions that will serve people well in average, every-day events. Those desires and aversions will include an aversion to killing others (except when the 'other' is an attacker and the killing is in defense of non-aggressors). The aversion to killing will be stronger than the aversion to letting die. We will expect people, for example, to have a weaker aversion to letting their own child die than to killing a neighbor's child to save their own.
However, when the number of people that we would be letting die gets exceptionally large, a person with good desires can find himself in a situation where compassion for others outweighs his aversion to killing.
This is certainly a case of weight, not a case of exceptions. The person with good desires who kills an innocent child to prevent a bomb from going off is going to feel absolutely horrible about it. The thwarted aversion to killing a child will cause him to go over the incident constantly and wonder whether there was anything else he could have done. An agent moved by both desires -- the aversion to killing the child and the desire to save lives -- would have searched for an option that fulfilled both, and would have been forced to choose only if no such option could be found.
There is no specific point at which the aversion to letting lots of people die will outweigh the aversion to killing an innocent child. The every-day world in which we live simply does not provide us with an opportunity to fine-tune these considerations. In that world, we are (or should be) concerned only with promoting a strong aversion to killing the innocent and a somewhat weaker aversion to letting die -- a combination that will prevent killing in common everyday circumstances.
Desire utilitarianism holds that moral dilemmas truly exist. They can be found in cases where a person is in a situation in which every possible action will thwart a strong desire that a good agent would have. The parent who has to decide which of her two children she will allow to be killed (otherwise both will be killed), or the person who must kill his own child to save a city from destruction, face true moral dilemmas. The good person will be terribly torn over these options. In the case of a true moral dilemma, the conflict will likely be psychologically destructive.
Moral Agony
We can draw one more implication that will further illustrate this system. We have talked about the person who must kill a child to prevent a distant bomb from going off. Let us add the complication that the child he must kill is his own. Here, we allow that a person should find it easier to kill a stranger's child to prevent the nuclear explosion than to kill his own child. In fact, I suspect that we may even forgive the person who is simply unable to do it. The moral dilemma -- the conflicting desires -- may psychologically destroy him, but he simply cannot find the will to kill his own child to prevent the detonation of the bomb.
The preference for one's own children will have an effect in the exotic and unlikely circumstances that everyday morality simply does not prepare us to handle. In this case, it means that more lives must hang in the balance before he can bring himself to kill his own child than it would take for him to kill a neighbor's child.
Imagine a movie scene where Character1 is in a position where he must kill his own child to prevent a horrible act. He cannot bring himself to do it. However, he is able to stand by and do nothing while Character2 kills his child. He breaks down at the end due to the loss, but he can let this happen.
In the realm of desire utilitarianism, this is a perfectly understandable and moral option. Morality aims at controlling our actions in the decisions that we make every day. It sometimes yields extremely unpleasant results in exotic circumstances that we all have reason to hope we can avoid. There are situations where even the good person -- particularly the good person -- will find it hard to live with the choices he must make.
Comments

I must say I was a little surprised by your suggestion that the number of lives saved necessary to justify an innocent death would be large.
In the classic example of whether you would throw a switch on a railway line to send a runaway train into 1 person rather than the 5 it is heading for, I would (I think) throw the switch every time. Unless the 1 was my child, I suppose.
Of course this is not an everyday situation. Yet the example is relevant if we see it in terms of the question of the difference between action and inaction. Supporting or opposing a particular war is, unfortunately, an everyday moral choice. Some opponents argue that you should not start a war even if fewer people would be killed as a result - you should avoid deaths that are your responsibility more than deaths that aren't. This is a good everyday moral principle, but is it a good political-moral principle? If starting the war is like throwing the switch, are you responsible for inaction as much as for action?
(I am not actually agreeing with the calculation that net lives saved were a reasonable expectation for any particular recent wars, or suggesting that this is the only consideration.)
Joe Otten
(1) Would you pull the switch for the trolley car? It is one thing to imagine what you would do in the comfort of your computer. Actually causing a person's death is another matter entirely.
(2) You still need to deal with the distinction between moral permission and moral obligation. Even if it is permissible to throw the switch, is it obligatory? Would it be permissible for a person not to throw the switch? Why or why not?
(3) The trolley car examples have to be held up in contrast to another set of examples commonly used in ethics -- doctor cases.
You are a doctor. You have eight patients in Intensive Care who each need a pint of blood containing a particular enzyme, or they will die. You have none in stock, and you cannot get any. However, as it turns out, the bloodwork for a patient who has come in for a routine physical shows that his blood contains the enzyme. To get eight pints of blood out of him, you must kill him. Is it permissible to kill him? Is it, perhaps, permissible but not obligatory?
What if, instead of being able to save eight people, you could save 100 (each of whom needs only a couple of ounces of the blood with the enzyme)?
What if you could save 10,000 (each of whom needs only a few drops of the blood with the enzyme)?
At what point does it become permissible to kill the healthy patient and take his blood?
The trolley car case is actually quite close to a case that can happen in real life. You are in your car. The brakes fail. At the bottom of the hill, there is a crowd of people. In this type of case, it is perfectly permissible, even obligatory, to aim the car where you will hit the fewest people. Indeed, many would consider it obligatory to direct the car so that it takes out as few people as possible. (Preferably, zero; however, we are assuming that this is not an option.)
I hold that morality aims at molding desires to handle every-day circumstances. In every-day circumstances we want drivers whose brakes fail to "do least harm." However, we do not want doctors walking up and down the halls of a hospital deciding to kill some patients to save others -- we wish to leave these decisions up to fate.
These "exotic stories" are simply applications of morality designed to handle every-day events taken entirely out of that context.
So your argument is that it is absurd to suppose, in any given instance, that it can be determined whether one act will fulfill more desires than another; but that nonetheless humans are blessed with a knowledge of what sorts of things tend to fulfill more desires than others across an indefinite number of cases? If an epistemology of the former particular case is absurd, why is an epistemology of the latter universal case not infinitely more so?
Anonymous,

Let me take your points in order.
So your argument is that it is absurd to suppose, in any given instance, that it can be determined whether one act will fulfill more desires than another…
Actually, my position is that it does not matter whether or not this is absurd. An agent could do very little with this information even if he had it. Agents always act so as to fulfill the more and the stronger of their own desires, given their beliefs. (They always seek to act to fulfill the more and the stronger of their own desires, but false or incomplete beliefs may cause them to fail.)
This means that the only person who can flawlessly do anything with this knowledge is the person who has only one desire -- a desire to fulfill the most and the strongest of all desires. There is no such creature.
Here, the moral principle that 'ought' implies 'can' means that the moral concept of 'ought' (as in 'ought to fulfill the most and the strongest of all desires that exist') is inapplicable in the same sense that 'ought' (as in 'ought to teleport the child out of the burning house') is inapplicable.
Here is where I get the claim that moral concepts only apply to malleable desires (desires that can be changed through social conditioning such as praise, condemnation, reward, and punishment).
…but that nonetheless humans are blessed with a knowledge of what sorts of things tend to fulfill more desires than others across an indefinite number of cases.
I believe that we can make reasonable estimates as to whether a desire, if universal, would tend to fulfill or thwart other desires. A specific instance of torture may fulfill more desires than it thwarts. However, a universal desire to torture is far more likely to lead to the thwarting of desires than the fulfilling of desires. At the very least, we can know with absolute certainty that the only society where all desires can be fulfilled is one within which nobody has the desire to torture.
Also, note that it is far easier to predict the location of the center of population for the United States 10 years from now than it is to predict the location of any given person. My guess is that the center of population will not move more than a few miles from its current location. However, it would be significantly harder to predict the location of any specific individual. His individual movements will have very little impact on the total.
However, even here, we have to ask what a person can do with this knowledge. He can know that a particular desire, if universal, would tend to fulfill other desires. However, he still cannot do anything but act on his own desires (given his beliefs).
Each individual has reason to promote in others desire-fulfilling desires and to inhibit in others desire-thwarting desires. This is motivated in part by the recognition that the person with desire-fulfilling desires will tend to fulfill the agent's own desires and those of the people he cares about, while the person with desire-thwarting desires will tend to thwart other desires, including those of the agent. A person's interest in the fulfillment of his own desires is, in itself, motivation enough to promote desire-fulfilling desires and inhibit desire-thwarting desires.
That knowledge is not perfect and precise, but it does not need to be. A driver who is trying to keep his car on the road does not have to know the precise effect of every turn of the wheel. A child riding a bike does not need to know the precise scalar value of every force acting on him while he pedals his way home from school. Neither does the ethicist need to know the precise value of every moral calculation. He only needs to judge, "We need to head in that direction." As in, "We could use more respect for truth and for due process whereby guilt is proved before a person is pronounced guilty."
This method contains a feedback mechanism that I have discussed in the past. Person A uses social conditioning to create desire-fulfilling desires in Person B. Person B, also seeking to fulfill his own desires, uses social conditioning to create desire-fulfilling desires in Person A. But Person B now has desire-fulfilling desires, so in conditioning Person A he promotes desires that help fulfill those desire-fulfilling desires. Person A, in turn, is still promoting in Person B the desires that will help fulfill his own desire-fulfilling desires. Each round of conditioning reinforces the other.
One point worth noting here is that we do not need any specific motivation to promote desire-fulfilling desires. Insofar as a desire aims for fulfillment, it provides motivation to create in others “desire-fulfilling desires.” Our aversion to pain gives us motivation to create in others an aversion to causing pain. The desires we have that we must be alive to fulfill give us motivation to create in others an aversion to killing and a desire to protect each other from killers.
None of this is particularly difficult. There is room for error -- but there is room for error in all things. We can generally recognize those desires that tend to fulfill or thwart other desires. We can then put the tools of social conditioning to work promoting the most obvious desire-fulfilling desires and inhibiting the most obvious desire-thwarting desires.
Alonzo,
The distinction between permission and obligation depends on a clear distinction between action and inaction. The latter distinction is not always clear. One can imagine thought experiments which consider a possible duty to 'get out of the way'.
The doctor examples manage to make the prospect of saving the greater number of lives horrify us. How do they do that? I suspect it is because we are horrified by the prospect of not being able to trust doctors. And indeed, were doctors actually to behave like this, most people would avoid them at all costs, forgoing most opportunities for treatment. This would be more desire-thwarting overall.
But anyway, you didn't answer my question regarding "just war". In a democracy, this seems to be a critical question of everyday morality.
Suppose we are offered a hostile ultimatum - few deaths, but submission to a dictatorship. The alternative is war. Accepting the ultimatum is arguably non-action. Does that make it preferable?
Joe Otten
You need to explain to me how a distinction between permission and obligation depends on a distinction between action and inaction.
An action may be permitted or obligatory. Refraining from action may be permitted or obligatory. One distinction seems to cut across the other, not run parallel to it.
On another of your issues, I agree that the argument against "killing patients" is specifically tied to the fear of being killed. If we had an institution that allowed doctors to kill patients at will and without consent, then people would be afraid to go to the hospital, and the whole institution of medicine would suffer.
As for the issue of a "just war", I do not see the issue as having as much to do with the number of lives lost.
If you were home alone with your child, and 10 or even 100 thugs came by with an intent to kill you and your child, you would be permitted to kill all 100 of them. This is because the moral prohibition is on killing those who are not violent aggressors, and the thugs are violent aggressors. Yet, a cop's permission to capture violent aggressors does not give him permission to deliberately kill even one innocent person in the pursuit of those duties. (Innocent people may die as an unfortunate side effect of his actions -- as when hostages are rescued or a high-speed chase ends in an accident -- but innocent, uninvolved parties cannot be targeted for death.)
Whenever you have a just war, you have violent aggressors. When it comes to stopping violent aggressors, there is much more at stake than the number of lives lost in this particular action. There is also the benefit of teaching a lesson against becoming a violent aggressor by making sure that violent aggressors do not profit from their actions. This, I would argue, is the actual justification for a just war.
Alonzo,
I guess you're right about the distinctions. I can't remember why I thought that. I was probably thinking that if action is supererogatory then inaction is permissible, and maybe vice versa... and slipped up.
Of course I agree that the number of deaths is not the only consideration. But the question is, sometimes, whether all aggressive behaviour is worth punishing when innocent lives will be lost along the way, and when it is not widely agreed whose responsibility it is to punish aggressive behaviour.