Monday, October 31, 2016

I Was Wrong: On the Aversion to Harming the Innocent

301 days until the start of class. That’s 43 weeks.

4 days until I introduce myself to the Philosophy department.

I have been busy rewriting my paper defending a moral aversion theory of punishment – turning it into something that reads more like an academic paper than a letter to the author. I still like the idea of distributing it among the attorneys I know to see if any of them have an interest in the philosophy of punishment. That is the audience I am keeping in my head as I work through the current rewrite.

Another potential use is that graduate students in the PhD program must turn in a qualifying paper – up to 8000 words. The current paper is just shy of 7000 with one section left to write – so I think it will work.

Now, I have to confess something.

I was wrong.

In posting my initial thoughts on this paper, I wrote that the aversion to punishing the innocent was a special application of the aversion to harming the innocent. However, on further thought, this is not the case. It cannot be the case because there is no “aversion to harming the innocent”.

We routinely perform actions in our lives that are harmful to others – in the sense that they set back others' interests. In some cases, we inflict significant harms on others without giving the fact another thought.

A primary example is that of competing with others for a job. We apply for a job, we get it, and another person applying for the same job has his interests set back significantly. The harm to his financial well-being, as well as to his relationships and potential for growth, could be significant. In fact, the person whom we beat out for the job may well be willing to endure a physical assault that he would recover from in a week or two if it meant that he also got the job. Beating him up is illegal, but winning the job from him is not. More to the point, beating him out for the job isn't even wrong – though it does cause him some harm.

Other examples of wrongless harmdoing include defeating another person in a game – particularly a game that has a large cash award. It includes opening up a business that will compete against a local business. The new competitor can be as destructive to the existing business as an arsonist's fire. Yet, not only is this legal, it faces no moral objections at all.

Otto – a company that develops self-driving truck technology – recently tested a driverless 18-wheeler that delivered a load of beer in Colorado. As this technology develops, one of its effects will be to put a number of truck drivers out of work. This is a harm inflicted on innocent people – but it is carried out without moral objection. Yet taking $100 out of their wallets would result in a charge of theft that could carry significant jail time.

We do have good reasons to allow people to harm the innocent in some ways but not others. Lying, vandalism, rape, and murder seldom produce overall good consequences. Innovation as well as competition in sports and business tend to produce significant benefits. Consequently, we have reasons to cause people to form aversions to the first type of activity, but not the second.

It is also relevant to note that we do not, in fact, promote an aversion to harming the innocent. We promote aversions to certain types of actions that tend to be harmful. We promote aversions to lying, breaking promises, vandalism, theft, assault, rape, and murder, to name a few. The considerations above suggest that an aversion to punishing the innocent belongs with this set.

For these reasons, I must take back the claim that the aversion to punishing the innocent is a special case of the aversion to harming the innocent. That wasn't accurate. Instead, the aversion to punishing the innocent sits on its own, alongside the aversions to lying, vandalism, and the others.

Friday, October 28, 2016

Rules Against vs. Aversions To Harming the Innocent

304 days until classes start. 7 days until I introduce myself to the department.

Yes, I get nervous.

I had a realization this morning. I have an association with a large number of lawyers. A number of those lawyers will probably have an interest in a philosophical argument concerning the justification of punishment, and even have some familiarity with the literature. What I realized is that I can present the arguments I am writing up in response to Boonin to them for comments and criticisms.

Anyway, to continue with my response to Boonin's book.

In Part I, I presented a moral aversion theory of punishment.

In Part II, I looked at Boonin's definition of punishment and showed how moral aversion theory was a theory of punishment on his definition.

In Part III, I presented the consequentialist theory of punishment and some of the major problems with it - particularly the problem that it sometimes justifies punishing the innocent when it can do the most good.

In Part IV, I discussed the details of a moral aversion to harming the innocent.

Now here, in Part V, I defend this moral aversion theory from objections that Boonin raises against rule-utilitarian theories, which he (incorrectly) asserts are just as applicable to motive-utilitarian theories.

Rules Against vs. Aversions To Harming the Innocent

Boonin brings up several objections to a rule-utilitarian defense of punishment that he claims would be just as applicable to motive-utilitarian theories. Moral aversion theory – where aversions, rather than rules, are justified by their utility – would be an example of a motive-utilitarian theory.

One of these objections says that utilitarianism sometimes justifies punishing the innocent. He illustrates this point by asking us to imagine setting up a secret committee charged with determining when it would be useful to frame and punish an innocent person. Upon deciding that, perhaps due to a series of crimes, it would be useful to frame a person for those crimes and stage a very public trial and punishment, it would set events in motion to bring about that end. It would find somebody to frame, convict that person in a very public trial, punish that person, and thereby create sufficient deterrence to reduce the overall number of crimes. The social benefits of following such a rule would outweigh the social costs.

This suggests that the rule utilitarian's best rule is not, “Do not punish the innocent.” It is, “Do not punish the innocent unless a secret committee determines beyond a reasonable doubt that punishing them will improve the overall public good.”

We may have to flesh out these rules with a few more details to make them work, but it seems likely that there is some set of rules that would sometimes justify punishing the innocent for utilitarian reasons. Whatever those rules happen to be, rule-utilitarianism would then be charged with justifying something (framing an innocent person for a crime and punishing him) that is, itself, objectionable.

This is one of the objections to rule utilitarianism that Boonin asserts is just as applicable to any type of motive-utilitarian theory. Insofar as punishing the innocent is sometimes useful, there must be some set of motives that would have us punish the innocent person in those circumstances where it is useful. A flat prohibition on punishing the innocent would not likely be the best option.

However, motives and rules have some significant differences – particularly when the motives we are talking about are moral aversions.

First, motives are persistent over time. You can easily create a rule to not eat fish except on the day of the full moon. However, it is much more difficult (if it is even possible at all) to not LIKE fish except on the day of the full moon. You can have a rule that expires at midnight on December 31, but it would be nearly impossible to have an aversion that expires at midnight on December 31. Similarly, our aversion to harming the innocent will not easily allow for exceptions where we can simply "shut it off because, in this instance, punishing the innocent will produce good consequences". The aversion will persist through those instances, providing a reason not to harm the innocent - and to not harm them BECAUSE they are innocent.

Second, we can easily create complex rules where we cannot easily create complex motives. While it may be possible that a person could be afraid of spiders except in her Aunt Jane's house or except for Stanley's pet spider Jake, it is very difficult to intentionally engineer aversions with these exceptions. Where the ease of making a rule may make it worthwhile to do so, creating an aversion to match the rule may require far more work than it is worth. Furthermore, it seems safe to say that our most complex rules can be far more complex than our most complex socially engineered aversions.

Third, moral aversions are taught in a society to all individuals. One of the defining characteristics of moral principles is that they are universal – they apply to all people. That same defining characteristic applies to moral aversions. Insofar as they are moral, they are aversions that everybody should be caused to have. One can have a secret rule creating a secret committee that determines when innocent people may be framed for a crime and punished for the public good, but this would require that everybody lack the aversion to punishing the innocent when they are on such a committee.

Fourth, creating a universal moral aversion requires that it be a type of aversion that it is possible to teach universally (or very nearly so). It would have to be something that could be taught to a variety of people with a variety of experiences and a variety of intellectual capabilities. It will be far easier to teach an aversion to punishing the innocent than to teach an aversion to punishing the innocent except when one thinks it will promote the public good.

Fifth, rules can be broken, while aversions can be overridden only by a stronger motive or combination of motives. Boonin talks about a rule in baseball that a runner must stay in the baselines when going from one base to another in order to score a run. He provides an example where the runner breaks the rule when he sees a child choking on a hot dog and he alone seems to know the Heimlich maneuver. However, the runner can only break the rule intentionally if he has a motive for doing so. We then need to look at that motive - at its expected consequences, and whether or not people generally have reason to promote or inhibit it. The player can step outside of a set of rules, but he can never step outside of a set of motives.

We need to examine the aversion to punishing the innocent in light of these differences.

It is a simple aversion to teach, and it can be taught publicly and universally.

Once taught, it will persist in the agent's motivational set even as other concerns weigh against it. It will continue to motivate the judge who has an opportunity to punish an innocent person for the public good, and the legislator considering establishing a committee to identify opportunities to punish the innocent for the public good. Each will have an aversion to punishing the innocent precisely because it is punishing the innocent, such that the public good would have to be very great indeed for the agent to even consider such an act.

Furthermore, the judge and the legislator will know that people generally have many and strong reasons to condemn, and even to punish, those who harm the innocent, as a way of promoting this aversion. Indeed, the judge and the legislator are among those who have many and strong reasons to promote that aversion – to condemn, and even call for the punishment of, those who would punish the innocent. This would mean self-condemnation if they were to engage in such an act.

This aversion to punishing the innocent may allow anybody to punish the innocent if the punishment is small - and we do, indeed, allow it when it involves displays of anger and condemnation in words and tone alone. However, it would motivate agents to take steps - and to require that others take steps - to make sure that the individual is guilty if the harm is great. It would motivate agents to require the use of a set of practices that would include presenting evidence for and against to an impartial jury. It would motivate an aversion to allowing each individual to decide on and carry out punishment because we know from experience that a lot of people make mistakes in this area. Furthermore, in the same way that we have no reason to trust their judgment regarding the execution of private justice, they have no reason to trust ours.

An aversion to harming the innocent even provides motivation to punish the guilty – since that is the method by which we activate the reward centers in the brains of the person being punished specifically, and of people generally, to create the aversions that protect the innocent from harm.

Can we not do better by promoting an aversion to doing harm, rather than an aversion to harming the innocent?

The question arises here as to how we would do that. If we are denied the use of the tool of either creating a state of affairs to which agents are averse, or denying what they desire, as a way to activate the reward centers of the brain and create the aversion, then how is the aversion to be created?

I have mentioned that we can create these aversions with stories and parables. However, it may well be the case that at least the risk of real harm is necessary to fully trigger the desired effect. One has to think that people can actually be harmed for performing such an act. Furthermore, they would have to be stories in which people are intentionally harmed because they intentionally harmed others, and it would be very difficult to create a coherent story that met that criterion.

In short, motives are quite different from rules. Those differences protect a motive-utilitarian defense of punishment from arguments that would be effective against a rule-utilitarian theory. Moral aversion theory explains and justifies a simple, public, universal aversion to harming the innocent. It explains and justifies institutions such as the jury trial and practices such as the presumption of innocence. It is consistent with the fact that, if there is enough at stake, it may be legitimate to punish the innocent (e.g., to save humanity from extinction). At the same time, it still asserts that agents should always be averse to harming the innocent because they are innocent, even when harming them can promote some other good.

Boonin provides some other arguments against rule and motive utilitarian theories that are applicable to this moral aversion theory. In my next post, I wish to examine some of those other objections.

Wednesday, October 26, 2016

A Moral Aversion to Harming the Innocent

306 days until the first day of classes.

Motivated by comments made by David Jocquemotte, I have made a couple of attempts to get my previous post, Sam Harris and Peter Singer's External Moral Reasons, out into the world a bit - so that those authors and their followers can at least take a look at and assess the objection.

In other news, I learned in an email from Dr. Boonin that he has not been following the literature on theories of punishment since he published his book. Which is too bad. It means he would likely not have an interest in reading my response either. But, I will finish the argument.

Which I should get to presently.

Let's see now . . . where did I leave off?

In Part I, I presented a moral aversion theory of punishment.

In Part II, I looked at Boonin's definition of punishment and showed how moral aversion theory was a theory of punishment on his definition.

In Part III, I presented the consequentialist theory of punishment and some of the major problems with it - particularly the problem that it sometimes justifies punishing the innocent when it can do the most good.

Now, here, in Part IV, I will look at how moral aversion theory begins to tackle the problem of punishing the innocent.

Punishing the Innocent

Standard utilitarian arguments have a problem in that they justify punishing the innocent. They look only at the effects of punishment - not at the punishment itself (except at the harm contained within it). The guilt of the accused has relevance in the justification of punishment only insofar as it has relevance in determining the effects of punishment. The objection, “But he is not guilty!” can be set aside with, “So what?”

Moral aversion theory touches on this problem first by noting that the way to generate a moral aversion is to punish and condemn people for the types of actions that one wants people to have an aversion to performing. To create an aversion to lying, one condemns liars. To create an aversion to theft, one condemns and punishes theft. The punishment must be about that which one wants people to form an aversion to. It makes little sense to hold an innocent person's hand in a fire until everyone else in the room acquires an aversion to romance novels.

However, one can still obtain even the benefits of promoting moral aversions by punishing someone whom people believe committed the type of action that one wants to promote an aversion to.

On the other hand, if one is going to invent a fiction to promote a moral aversion then one might as well make it a fiction through and through. A parable, novel, or book in which the villain is punished - or even suffers misfortune - because that villain committed a particular type of action would have the same effect without harming an innocent person. One could respond that specific harm may be particularly useful in bringing about specific deterrence, but if the accused is not guilty, there is no need for specific deterrence. 

This is going to bring up a question, “Why punish at all? Why not just use stories?” This is an important question, and I will return to it later. 

For now, I want to turn to a more direct answer to the problem of punishing the innocent. 

The critic of utilitarianism tells us a story about punishing the innocent. We find this intuitively wrong. What is going on when we find this to be intuitively wrong?

I suggest that this moral intuition is, in fact, a moral aversion expressing itself in our feelings about a state in which an innocent person is harmed. Moral aversion theory then takes this moral aversion - this intuition – and asks what it means to have such an aversion and whether we should keep it.

Technically, I would argue that what we have is a moral aversion to harming the innocent and, since to punish the innocent is to harm that person, this implies a moral aversion to punishing the innocent. 

What does it mean to have a moral aversion to harming the innocent? 

Having a moral aversion means that we are going to see the fact that a person is innocent as a reason not to punish that person. This reason is an end-reason. That is to say, our reluctance to punish the innocent person is not grounded on knowing that doing so will thwart some other desire (e.g., maximizing human flourishing). It means avoiding a state of punishing the innocent simply because it is a case of punishing the innocent. In the same way, somebody with an aversion to eating broccoli avoids broccoli for its own sake, not because eating broccoli will thwart some other desire or interest. It means avoiding a state of eating broccoli because it is a state of eating broccoli.

Having a moral aversion to punishing the innocent also means that the agent has a reason to avoid punishing the innocent even if doing so would otherwise create a better world. A world that is better when we ignore the fact that it contains an instance of harming the innocent becomes worse when we add that fact to the mix. It means that the agent who is considering the option of punishing the innocent has a reason to look for another alternative that does not contain an instance of punishing the innocent - and to take it - even if the world in which the innocent person is punished would, when we ignore that fact, be a better world. It means accepting an inferior outcome when the otherwise superior outcome requires harming the innocent.

However, it also means that if the other concerns are weighty enough, they may outweigh the aversion to punishing the innocent. For example, if the survival of humanity sits on the scale opposite that of punishing the innocent, then the agent may overcome the aversion and punish the innocent. However, that will not remove the foul taste that comes from harming the innocent to realize this important end. The health benefits of broccoli might give us a reason to eat it in spite of its taste, but they will not make broccoli taste good. The benefits of saving humanity from destruction will not make the fact that one had to punish an innocent person taste good either.

An aversion to harming the innocent is consistent with having no aversion to harming the guilty. However, the aversion would generate a motive to make sure that the person being punished is not innocent. This would motivate such institutions as trial by jury and a presumption of innocence that can be overcome only by proof of guilt beyond a reasonable doubt.

What does it mean to keep the aversion? 

It means if we catch somebody intentionally harming the innocent, we are going to be disposed to condemn and possibly punish that person. We are going to punish that person BECAUSE that person harmed the innocent. We are going to do so while asserting that harming the innocent is to be avoided for its own sake – even when it is a means to achieving some other end. 

Now, if we justify creating and promoting this aversion based on its consequences, we are going to have to ask if some other set of motives may produce better consequences. If harming the innocent is sometimes useful, then perhaps we should not be generating an aversion to harming the innocent in all cases. Where punishing the innocent is useful, perhaps we should have no aversion to punishing the innocent in those cases. 

Boonin brings up these types of considerations in his objections to rule-utilitarian theories of punishment. The rule utilitarian argues that a rule to punish the guilty but not the innocent produces the best consequences, so that is why we should adopt this rule. Any specific case of punishment is then justified in virtue of conforming to this set of rules. Boonin argues that there are three ways in which the rule utilitarian defense can fail. It can fail in virtue of the fact that it is not the case that the best rules always prohibit punishing the innocent. It can fail in virtue of the fact that the rule utilitarian defense still implies the uncomfortable conclusion that if such a rule maximized utility, then punishing the innocent would be justified. Finally, the rule utilitarian defense needs to explain why it is not permissible to violate the rules when violating them would produce the best consequences. Boonin argues that a moral-aversion theory would have to handle the same set of objections.

I am going to look at these issues in the next post.

Tuesday, October 25, 2016

Sam Harris and Peter Singer's External Moral Reasons

A member of the studio audience pointed me to a podcast of a discussion between Sam Harris and Peter Singer, where they spent the first few minutes discussing meta-ethics. The relevant part of the discussion takes place in the first 30 minutes or so of Episode 48 of Waking Up with Sam Harris.

Both Harris and Singer reach a quick agreement that moral value exists in states of affairs that somehow generate reasons for everybody to act so as to realize such a state. Somehow, when matter arranges itself in particular ways, these reasons emerge and they are automatically applicable to every sentient creature, regardless of their individual interests or concerns.

Explaining what these external reasons are, how they emerge, how they interact with every sentient creature, and how we know they exist is a problem. It is such a problem, in fact, that the most reasonable position to hold is that external reasons - like God - do not exist. They are pure fiction.

In fact, there is a close relationship between building morality on a foundation of external reasons and building morality on a foundation of divine command. God is, after all, the ultimate external reason. The atheist who believes in external reasons is an atheist who believes in God without the attributes of personality - at least insofar as the relationship between the object of that belief and morality goes. The commandments of external reasons - the commandments that external reasons theorists preach about - have the same status as divine commandments.

Let me make good on this claim that Harris and Singer are postulating the existence of external reasons.

Harris and Singer ask us to imagine two possible worlds. In the first world, every conscious creature is enduring as much suffering as possible for as long as possible. The other world contains less suffering. Harris tells us that it is a simple fact that the second world is better than the first world. We are then told that this implies that everybody has a reason - that there exists some sort of reason - to bring about the second world rather than the first world.

The fact that this is an external reason is grounded in the fact that this reason applies to everybody, regardless of their individual concerns.

Let us imagine a slightly different scenario: a universe in which every being but one is suffering as much and as long as possible, and one being is content and happy. Compare this to an alternative universe in which all beings but one are content and happy, and one being - the same being that is content in the first universe - is suffering as much and as long as possible. We can agree that it is true that the second world is better than the first world. However, it does not follow that the one creature who is contented in the first world has a reason to choose the second universe over the first. If he has a reason, what is it? Where did it come from? How did he get it? If, when you point out to him that the rest of the world is suffering as much and as long as possible, he shrugs his shoulders and says, "So what? What is that to me?", what is your answer?

There is no answer. External reasons do not exist. Internal reasons are the only kinds of reasons that exist.

Bernard Williams defended a concept of internal reasons (all reasons that exist) that I find quite useful.

A has a reason to φ iff A has some desire the satisfaction of which will be served by his φ-ing. (Williams, B., 1979. “Internal and External Reasons,” reprinted in Moral Luck, Cambridge: Cambridge University Press, 1981, 101–13.)

My pain creates in me a reason to act in ways that would reduce or eliminate that pain, but it does not create a reason for anybody else to act so as to reduce my pain. They only have a reason to act if they have a desire that would be served by removing my pain.

In other words, suffering provides internal reasons for the being that suffers. However, for everybody else, reasons to end his suffering would count as external reasons - they are, after all, external to those who are not suffering.

This is not a defense of egoism (though many egoists mistakenly believe that it is). This is because others can have a desire that nobody be in pain, or there may be specific others who have an aversion to my being in pain. Thus, they may have non-selfish reasons to act to bring about an end to my pain. Non-selfish desires exist. The egoist equivocates between the claim that a person's reasons to act are found in that person's own desires and aversions (which is true) and the claim that a person's desires and aversions are always self-interested (which is false).

Desires and aversions create reasons to act for the people who have them. No desire or aversion creates a reason for anybody else to act. Nothing other than desires and aversions create reasons to act.

It follows that, in a universe in which every being suffers as much as possible for as long as possible (an example that Sam Harris uses), every being has a reason to remove itself, and those others that it cares about, from that situation. It is built into the definition of suffering that it is a state that the sufferer has an aversion to being in - and thereby has a reason to avoid. If the sufferer shrugs its shoulders and says, "This isn't bad. I kinda like this," then it is not suffering.

However, it is not the case that any individual has a reason to remove any other individual from that state. Being A has a reason to remove being A from that state, but has no reason to remove being B UNLESS being A has a desire that would be fulfilled by removing being B. If, for example, A is in love with B and is perhaps suffering BECAUSE B is suffering, then A would have a reason to remove B from that state. However, in the absence of such a desire, an agent has no reason to act.

Here, a lot of atheists make another mistake. They hold that if we cannot be realists about external reasons, then we cannot be realists about morality - that external-reasons realism and moral realism walk hand in hand, sharing the same fate.

This is not true.

Internal reasons are real. That is to say, desires and aversions are real and help to explain observations we make in the real world (e.g., the behavior of intentional agents). An objective morality built on internal reasons would be real.

In fact, I am a moral realist. There are moral facts. However, no moral fact depends on the existence of external reasons - just as no moral fact depends on the existence of God.

There are cases in which it is objectively true that A has a desire that P. There are cases in which it is objectively true that φ-ing will fulfill this desire that P. Consequently, there are cases in which it is objectively true that A has a reason to φ. This means that there are cases in which it is objectively true that A ought to φ.

There are different meanings of "ought". Here, I am not talking about a moral ought. I am talking about a prima-facie practical ought. This is an ought that exists but can be overridden by counter-weighing reasons not to φ. If this were the only reason that existed - if there were no counter-weighing reasons not to φ - then it would also be true that the agent ought to φ all things considered. Why not? The only kind of reason that could be given not to φ would, itself, have to come from another desire served by not φ-ing.

Now, if A is in a state where A is suffering as much and for as long as possible, then A has a reason to remove itself from that state. If A can remove itself from that state by φ-ing, then A has a reason to φ.

Now, let φ-ing be an act that makes a change in agent B. For example, let us assume that A is suffering because being B is hitting him with a stick. Let us further assume that A can cause in B an aversion to hitting A with a stick. A now has a reason to φ - to cause B to have an aversion to hitting A with a stick. As soon as B has an aversion to hitting A with a stick, B has a reason to stop hitting A with a stick. Both this reason, and the reason that A acted on to bring about this change in B, are internal reasons.
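To make the structure of this account easier to see, here is a minimal sketch in Python of the Williams-style schema: an agent has a reason to perform an act if and only if the act serves one of that agent's own desires. It is a toy model, not a claim about psychology, and every name in it (Agent, has_reason_to, the scenario strings) is invented for illustration:

```python
# A toy model of internal reasons: an agent has a reason to perform an
# act iff the act would serve at least one of the agent's own desires.
# All names here (Agent, has_reason_to, the strings) are illustrative.

class Agent:
    def __init__(self, name, desires):
        self.name = name
        self.desires = set(desires)  # states of affairs the agent wants realized

def has_reason_to(agent, act, effects):
    """True iff performing `act` would realize a state the agent desires."""
    return bool(effects.get(act, set()) & agent.desires)

# The stick scenario: B is hitting A, and A wants the hitting to stop.
effects = {
    "cause B to be averse to hitting A": {"B stops hitting A"},
    "stop hitting A": {"B stops hitting A"},
}

A = Agent("A", desires={"B stops hitting A"})
B = Agent("B", desires=set())  # B has, as yet, no relevant desire

print(has_reason_to(A, "cause B to be averse to hitting A", effects))  # True
print(has_reason_to(B, "stop hitting A", effects))                     # False

# A acts on that reason; B now has an aversion to hitting A, modeled
# here as a desire for the state in which the hitting has stopped.
B.desires.add("B stops hitting A")
print(has_reason_to(B, "stop hitting A", effects))                     # True
```

What the sketch makes vivid is that B's reason comes into existence only when B's desires change. Nothing about A's suffering, by itself, gives B a reason to act.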

These are all knowable facts in a universe where these reasons exist. Nothing here is imaginary or just a matter of opinion.

We can talk about these same types of facts being true in whole communities. It is not at all difficult to see that people generally have many and strong reasons to promote aversions to lying, thieving, raping, murdering, breaking promises, assault, and a list of similar act-types. Another thing that people generally have many and strong reasons to promote - one that Harris seems particularly interested in - is an aversion to responding to mere words with violence, a moral aversion that can be described as "the right to freedom of speech". Women – and anybody who cares about them – have reason to cause people to have an aversion to denying women the opportunity to live rich and fulfilling lives. The reasons for establishing these states are all internal reasons – no external reason exists.

Saying that people generally have many and strong reasons to bring about these aversions does not imply that each individual has a reason to bring about these aversions. This is true in the same way that the fact that the average height of a group of people in a room is 5'8" does not imply that every individual is 5'8". However, it is objectively true nonetheless. One cannot disprove it by pointing to a person who is 5'4" - and one cannot disprove the fact that people generally have many and strong reasons to promote an aversion to theft by pointing out that Frank does not have a reason to promote an aversion to theft. Though, if Frank has a reason to be secure in his property, he certainly does have a reason to promote an aversion to theft.

Importantly, we have a way to cause people to have desires and aversions. The reward system of the brain processes rewards and punishments to change what individuals like and dislike - it alters their preferences. Praise functions as a type of reward to produce desires, while condemnation acts as a type of punishment to produce aversions. There is a reason why rewards (such as praise) and punishments (such as condemnation) are built into the heart of morality. These are the tools we use to mold desires - to create such things as the aversion to breaking promises and the desire to repay debts.

Consequently, the claim that people generally have many and strong reasons to φ, where φ = condemn and punish thieves, is an objective fact. The same applies to praising acts of charity and self-sacrifice, as well as condemning lying, rape, murder, assault, slavery, genocide, and the like.

All of these facts are built entirely on internal reasons. We do not need to postulate any external reasons. Nor do we need to postulate the existence of any gods.

If you want to have moral claims that are true in the real world, then you are going to have to have a morality that is grounded on internal reasons alone. Internal reasons are the only kinds of reasons that exist. Building morality on external reasons - as Harris and Singer have done - is as flawed as building morality on the existence of a god.

Punishment: Deterrence vs. Moral Aversion

Classes start in 307 days.

I am still working on Dr. Boonin's book, The Problem of Punishment.

In previous posts, I presented what I am now calling a moral aversion theory of punishment. The purpose of punishment is to create moral aversions.

I then explained how this theory fits within Dr. Boonin's own definition of punishment.

In Chapter 2 of his book, Boonin looks at utilitarian theories of punishment and aims to show that they fail to justify punishment. He continues to provide excellent arguments that are dramatically improving my understanding of the current state of the debate, not only in the philosophy of punishment specifically, but in moral theory generally.

Here, I am looking at some of the claims that Boonin makes regarding utilitarian arguments for punishment and his objections to them with an eye on what moral aversion theory adds to that debate.


Part III: Consequentialist Theories of Punishment.

Moral aversion theory has an immediate implication for the costs and benefits of punishment. It points to a new set of benefits – the establishment of a moral aversion and its good consequences.

Utilitarian defenders of punishment have traditionally looked at three major categories of consequences. There are the consequences to the person being punished - which are generally negative and provide a reason not to punish. These are matched against specific deterrence (preventing the person being punished from committing future harms) and general deterrence (preventing people other than the agent from committing similar harms). Where the benefits of specific and general deterrence are greater than the costs of the harm inflicted, punishment is considered justified.

Boonin provides a number of arguments against this formula. In many cases of punishment, the person punished remains able to commit future harms - on other prisoners at least. More importantly, he argues that this formula also sometimes justifies punishing the innocent. A particularly objectionable form of punishing the innocent is punishing the children of the guilty, since it may provide stronger deterrence than punishing the guilty person himself.

Moral aversion theory offers another benefit to punishment. This benefit functions much like general deterrence, but is different in some significant ways.

On standard deterrence theory, let us assume that an act is punished by a fine of $1000, and that the agent holds that his chances of being caught are 5%. This means that the deterrence value of this law - the agent's expected cost - is $50 (5% of $1000). If he can gain something worth $60, then the fine provides inadequate deterrence and the agent would – and perhaps even rationally should – perform the illegal action.
 
Assume now that an agent acquires a moral aversion such that, “I wouldn’t do that even if you paid me $1000.” Let us keep the assumption that the agent believes that she has a 5% chance of getting caught. This further fact is irrelevant. The agent who is so averse to performing an act that she would not do so for $1000 is not going to care whether or not she gets caught.

In fact, in cases where the chance of getting caught drops to zero, we lose the deterrence value of punishment entirely. The only things we have to keep the agent honest are her moral aversions – the fact that she has acquired such a dislike for performing acts of that type that she will not perform one even if secrecy can be guaranteed.
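The arithmetic of the two models can be laid side by side. The following sketch (a toy comparison in Python; the function names are invented, and the figures are just the ones from the example above, not anything from Boonin) shows how the standard deterrence calculation is sensitive to the chance of getting caught while the aversion calculation is not:

```python
# A toy comparison of standard deterrence with a moral aversion,
# using the figures from the example above. Names are illustrative only.

def deterred_by_expected_cost(gain, fine, p_caught):
    """Standard model: refrain iff the expected fine exceeds the gain."""
    return p_caught * fine > gain

def deterred_by_aversion(gain, aversion_strength):
    """Aversion model: refrain iff the act's unpleasantness to the agent
    exceeds the gain - the chance of getting caught never enters in."""
    return aversion_strength > gain

# A $1000 fine, a 5% chance of being caught, and $60 to be gained.
print(deterred_by_expected_cost(gain=60, fine=1000, p_caught=0.05))  # False ($50 < $60)
print(deterred_by_expected_cost(gain=60, fine=1000, p_caught=0.0))   # False (no deterrence at all)

# "I wouldn't do that even if you paid me $1000."
print(deterred_by_aversion(gain=60, aversion_strength=1000))         # True, at any chance of capture
```

On the standard model, the agent's honesty evaporates as the chance of capture falls; on the aversion model, it is untouched.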

When we add the value of moral aversions to the value of specific and general deterrence, we have even stronger reasons to engage in punishment.

However, this, in itself, is not going to address problems such as punishing the innocent. Secretly punishing the innocent would be one of the ways that we could create these moral aversions - since punishment works not only on those who are punished but on those who discover it. It may even be the case that punishing the children of the guilty has such an impact on the reward centers of the brain that this, too, would provide a reason for these types of vicarious punishment.

So, we still have a problem with a consequentialist justification for punishment - a problem that I will take up in my next post.

Monday, October 24, 2016

Boonin's Definition of Punishment

308 days until my first class.

In our top story today, Atheistically Speaking has posted the podcast interview in which I comment on how implicit biases are impacting the election: Atheist Ethics and Implicit Biases with Alonzo Fyfe. I hate listening to myself, so you can tell me if it is any good.

11 days until I show up at the University of Colorado Department of Philosophy to introduce myself.

I am still working on my response to Dr. Boonin's book The Problem of Punishment. It is an extremely good book - not only because of its arguments concerning punishment, but as an excellent summary of the basics of moral theory. He has a short section on motive utilitarianism that I have just gotten to . . . I will have a lot to say on that issue.

Meanwhile, I am still writing my response. Attached is a draft of Part II of that response.

Part I on The Theory of Punishment can be found here.

Part II: The Definition of Punishment

To make sure that this discussion remains relevant to the topic at hand, I would like to look at it under the light of the definitions of punishment that you provided in your book.

You identified a number of criteria, at least for the type of punishment you sought to examine in your book, and I would like to provide an account of how the elements of the theory presented here can be understood in those terms. Punishment, in this theory, involves creating states to which agents are averse, or removing states that an agent desires, to trigger the reward system and thereby modify an agent’s desires and aversions - specifically, to promote aversions to certain act types such as vandalism, lying, and breaking promises.

Harm

You specify that one of the requirements of punishment is that it involves harming the person being punished.

On the theory being put forward here, punishing a person involves activating the reward centers of the brain by either creating a state of affairs to which the agent is averse (e.g., causing that person pain or some form of psychological discomfort) or removing a state of affairs that the agent desires (e.g., taking away money with a fine, taking away freedom with imprisonment). Both of these types of actions count as harming an individual. 

This is to be contrasted with a reward, which is creating a state of affairs that the agent desires (e.g., a cash award or public acclaim) or removing a state of affairs to which the agent is averse (e.g., removing the soldier from K.P. duty). 

One could object that, in shouting at somebody or in berating them, one is not actually harming them. Joel Feinberg (Harm to Others: The Moral Limits of the Criminal Law, Oxford University Press, 1984) asserted that to count as harm, the action or state must be one that thwarts a “strong and stable” desire. What thwarts a weaker or transitory desire only “hurts” an individual. 

This distinction may prove relevant. However, to introduce it here raises some questions. If there is a point at which “hurt” becomes “harm”, we are going to have to ask where that point can be found, and whether all state punishment falls only on the far side of that line. On the other hand, if the objections to punishment apply to all forms of punishment, then the fact that this rules out yelling at another person or condemning/criticizing her is something we need to look at.

In other words, if it turns out that creating states to which an agent is averse in order to mold the interests of others is sometimes permissible, and state punishment is prohibited only because of its intensity or category, we are still going to need to answer some questions. What is the threshold or the category of punishment that is prohibited? Why that particular threshold or that particular category? If we have no answers to these questions, then we may have to ask whether our arguments regarding punishment imply a moral prohibition against ever intentionally activating the reward centers of the brain by creating a state of affairs to which an agent has an aversion or that lacks something the agent desires.

Your definition mentions “harm”. The moral aversion theory of punishment being presented here mentions states of affairs that contain elements to which an agent has an aversion, or the removal of states which the agent desires. I will have more to say on the relationship between these concepts later.

Intention

Your second criterion for punishment is that the harm is done intentionally. Accidentally causing harm - as when we accidentally step on another person's foot - does not count.

An accident such as this could activate the victim's reward center and trigger an aversion to crowds or to others walking too close. This is punishment in the psychological sense, but is not punishment in the sense that we are concerned with here. 

The moral aversion theory I am proposing says that the punisher triggers the reward system intentionally. The punisher may not know that this is what he is doing. However, it is also the case that a person who takes a drink of water may not know that she drinks H2O. In the case of harm, the agent knows that he intentionally inflicts harm – and that is sufficient. 

Retribution 

A third criterion that you proposed is that punishment involves retribution, in a sense. "[T]o be a punishment, an act must involve intentionally harming someone because he previously did a prohibited act." 

The moral aversion theory says that agents trigger the reward system because the person being punished performed an action that the punisher wants to make the object of an aversion. To promote an aversion to lying, we punish those who lie. To promote an aversion to vandalism, we punish vandals. To promote an aversion to breaking promises, we punish promise-breakers. We punish promise breakers because they are promise breakers, vandals because they are vandals, and liars because they are liars. 

On this matter, I want to pull one item off this list and set it aside for separate consideration – punishing somebody “because he broke the law.” This creates problematic cases. What about a law that prohibits helping runaway slaves, or that requires people to help in rounding up Jews? Or even a law prohibiting homosexual acts? Is there an obligation to refrain from certain acts just because they are illegal? There is reason to suggest that the law cannot legitimately punish people who cannot be justifiably punished for other reasons. The judicial system simply puts a wrapping around that punishment, establishing rules and procedures but not justification. I do not know if I can actually defend "punishing somebody because he broke the law". But I may be able to defend punishing vandals, thieves, rapists, and murderers because they performed an act of vandalism, theft, rape, or murder.

Reprobation 

Another criterion of punishment you mention states that punishment must contain an element of admonition or condemnation of the person punished. “[T]o count as a punishment for an offense, the act must express official disapproval of the offender.” 

Moral aversion theory says that punishers trigger the reward system to create an aversion to act-types such as lying, breaking promises, assault, and vandalism. They identify these things as bad, and as something that the agent (all agents) should dislike.

Charging somebody a fee for a marriage license doesn’t say, “getting married is something you should not be wanting to do.” Fining a person for littering, however, does carry that kind of message. 

Authorization

The final criterion you mentioned says that, to count as punishment, the act must be performed by an agent who has the authority to punish.

This may be a useful criterion to identify the subject of this discussion. However, it does introduce a potential problem when it comes to justification. 

Let us assume that we can identify an act which has all of the elements of punishment except that it is not carried out by somebody authorized to do so, and that this act is considered permissible. Perhaps you will want to say that this is not punishment. Still, the fact that it has all of the elements of punishment except the "authorized agent" element, and is permissible, will invite us to ask what it is about introducing the "authorized agent" element that could make the act impermissible.

You looked at examples in which it was considered immoral to harm an agent except when one was an authorized agent seeking to punish a wrongdoer. You asked why it was permissible to treat the wrongdoer differently. This approach suggests that the wrongdoer is not being treated differently. Rather, the punishment is justified even in the absence of an authorized agent. The authorized agent is simply a part of the wrapping that we put around certain examples of punishment that is ultimately already justified.

Summary

Punishment, within the moral aversion theory, is acting intentionally to realize states to which the agent being punished is averse, or to remove states that the agent desires, because the agent performed an act type which people have many and strong reasons to promote an aversion to performing, as a way of promoting aversions to those act types. It need not be carried out by people in authority. However, because it involves intentionally causing harm, there are reasons to regulate and restrict its application - to establish rules to help ensure that it serves its purpose well.

Friday, October 21, 2016

A Theory of Punishment

311 days until classes start.

I have officially started my project to have a paper written for Dr. Boonin on Punishment by the time I visit the philosophy department on November 4.

It is practice for writing papers as a graduate student. Dr. Boonin wrote a book on The Problem of Punishment which discusses a number of issues that I am interested in investigating. I am writing some of my ideas up as comments on the claims he made in that book.

I will let you have a peek at what I have gotten so far, and you can tell me if I am making any gross (or not so gross) errors - if you please.


Dr. Boonin 

I have read your book on punishment and find it to be a most excellent resource on the arguments that have been used with respect to this subject. It is quite well organized - even encyclopedic - as well as being exceptionally well argued.

In light of this, I would like to know your opinion regarding an argument for punishment that I did not see in your book - which I would like to present to you here. 

The view that I would like to present has consequentialist elements. However, I believe you would find that it fits more comfortably under the category of a "moral education" theory, though it differs significantly from the moral education theory that you discussed in your book. The theory you discussed approached "moral education" as a project of upgrading the beliefs of the person being punished. The version of the "moral education" theory that I wish to present focuses instead on molding desires - not only the desires of the person punished, but the desires of people generally across a population.

A relevant difference between the two types of moral education can be found in the different ways in which beliefs and desires are changed. To change an agent's beliefs, we ideally use evidence and reason. However, evidence and reason are not the appropriate tools for changing desires. You cannot reason a person out of a love for chocolate milkshakes or into a fondness for baseball. There are activities that look like reasoning. However, I will argue that, insofar as their impact is on desires rather than beliefs, something other than reasoning is going on.

The primary tools for changing desires are rewards and punishments. Here, I am referring to punishments in a biological sense – not in the moral sense. If an action results in an electric shock, for example, this shock is considered "punishment" insofar as it is something the creature has a reason to avoid, even though it carries with it no moral implications.

Praise and condemnation function as reward and punishment in this regard. Praise is something that people tend to experience as pleasant and seek for its own sake, so it functions as a reward. Condemnation, on the other hand, is something that people tend to be averse to experiencing and avoid for its own sake - and thus counts as a punishment in this sense.

We can "persuade" somebody to like baseball in this sense - perhaps by pointing out elements in the game that the other person likes but did not know about, but also by praising the love of baseball while pointing out these features, and allowing this praise to work on the reward system of the agent.

The same applies to rewarding/praising such things as honesty, charity, and responsibility while punishing/condemning such things as dishonesty, theft, cruelty, and carelessness. These mold the desires not only of the agent being rewarded or punished, but the desires of people generally. This is the form of "moral education" that I wish to discuss.

The Reward System

Understanding this theory of moral education benefits from knowing at least some of the basics of the reward system in the brain.

The reward system primarily (though not exclusively) connects three regions of the brain: the ventral tegmental area (VTA), which is responsible for regulating the neurotransmitter dopamine; the nucleus accumbens, which regulates wanting and pleasure; and the frontal cortex, where an agent plans intentional actions.

The frontal cortex is responsible for conforming behavior to social norms. This is illustrated in the case of Phineas Gage – a railroad worker who survived having a tamping rod blasted into his skull, entering the jaw and exiting the top of his skull near the forehead. The accident inflicted significant damage to the frontal cortex. As a result, Gage, who had previously been known as a responsible and conscientious individual, became disposed to offensive and irresponsible behavior.

The hypothesis under consideration suggests that reward and punishment are used to influence behavior by impacting the reward center of the brain and, thereby, determining the rules that govern an agent's intentional action. It does not do this by altering beliefs or any other cognitive state. Instead, it does this by altering an agent's likes and dislikes. It creates in agents a desire to help those in need, or an aversion to taking the property of others without their consent.
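To make the hypothesis definite, here is a deliberately crude sketch of the kind of process being described - a toy update rule in Python, with every number and name invented purely for illustration. Nothing in it is offered as actual neuroscience:

```python
# A deliberately crude sketch of the hypothesized mechanism: rewards and
# punishments shift how much an agent likes an act-type, and observers
# (via empathy) receive an attenuated version of the same shift.
# The update rule and all of its numbers are invented for illustration.

def update_liking(liking, outcome, rate=0.5):
    """Shift an agent's liking for an act-type toward the outcome's value.
    outcome > 0 is a reward (e.g., praise); outcome < 0 a punishment."""
    return liking + rate * (outcome - liking)

agent_liking = 0.4      # the punished agent mildly likes lying
observer_liking = 0.4   # so does an onlooker

for _ in range(3):      # repeated condemnation of lying (outcome = -1)
    agent_liking = update_liking(agent_liking, outcome=-1.0)
    # the observer's reward system responds too, but less strongly
    observer_liking = update_liking(observer_liking, outcome=-1.0, rate=0.25)

print(agent_liking, observer_liking)
# Both values end up negative - an aversion - though the direct
# recipient's aversion forms faster than the observer's.
```

The attenuated nudge to the observer anticipates the point below about witnesses, parables, and stories.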

Please note that I would love to have the opportunity to investigate these claims further. I do not have the backing for them that I would like. It is a part of my reason for wishing to attend graduate school that I may have an opportunity to look into these considerations further. In the meantime, it is, at best, a hypothesis. However, perhaps I can at least support the suggestion that it is worthy of additional investigation.

Reward and punishment – including praise and condemnation – are not limited in their effects to the person rewarded or punished. Those who witness rewards or punishments experience an effect very similar to that of the person rewarded or punished. Mirror neurons fire in the brain of a person who observes somebody suffer an injury, and the signal follows almost exactly the same neural pathways as it does in the person harmed. If one person experiences a blow to the hand, the observer will cringe and clutch his own hand protectively, as if it had been struck.

Similarly, empathy allows each of us to feel what others feel (at least in a normally functioning brain).

In fact, an agent does not even need to witness reward or punishment in order to experience some impact on her reward system. It is enough to hear about a case in which an agent had performed some deed and obtained a reward, or committed some act and been punished for it. The imagining is sufficient. As such, even a parable or a story is sufficient to teach a moral lesson – to alter the likes and dislikes of those who hear or read it, at least to some extent.

In all of these cases, rewards and punishment (including praise and condemnation) are used – not to alter an agent's beliefs about what is right and wrong – but to mold an agent's affective states. For example, by rewarding and praising honesty – and by punishing and condemning dishonesty – we cause agents to have a desire to engage in honest behavior and an aversion to acting dishonestly. These desires and aversions promote honesty – even in cases where dishonesty might otherwise have been to an agent's advantage. The agent, the observer, and the individual reading or listening to a story with a moral lesson all come to value honesty for its own sake, and not merely as a means to achieving other ends.

The learning of these social norms - the acquiring of these desires and aversions - is the type of moral education that I would like to examine.

Thursday, October 20, 2016

Free Will and the Praise and Condemnation of Moral Projects

312 days until classes start.

On November 4, I shall go to the University to attend a "Center Talk" (put on at the Center for Values and Public Policy). At that time, I would like to present Dr. Boonin with a paper on punishment. Boonin wrote the book, The Problem of Punishment. So far, in looking at the book, it seems that he did not consider the option that condemnation and punishment serves the function of molding desires, not only in the person condemned or punished but in the society as a whole. Perhaps I can convince him to consider this possibility.

In other news - in my recent studies, I have encountered a similar argument regarding free will and moral responsibility in two different and disconnected places.

Before I discuss them, I want to point out that the argument has important implications. It has to do with what we are responsible for – with when we are blameworthy. The argument that I will be considering stands against a claim that lets us off the moral hook far too easily, and allows those who hold it to shrug their shoulders at a significant group of wrongs.

I want to stress this importance because the argument being considered, and the response to it, may seem dry, academic, and of no real significance. That appearance is deceiving.

The claim being addressed states that an agent is not to be held morally responsible unless, at the moment of action, she could have done otherwise. According to this claim, if I tell a lie and, at the moment in which I tell it, the determined forces of nature make it the case that I could not have told the truth, then there is no legitimate reason to condemn me for that action. I did nothing wrong – at least in the sense of "wrong" that implies "deserving punishment or condemnation". I could not have done otherwise.

One of the sources where I encountered this issue was Jules Holroyd, "Responsibility for Implicit Bias", JOURNAL OF SOCIAL PHILOSOPHY, Vol. 43, No. 3 (Fall 2012), 274–306.

Implicit biases work in the background. We certainly have no capacity, at the moment of action, to turn them off. We may not even know that they are there - which eliminates any ability to consciously override them. By the application of the principle that we are not morally responsible when we cannot choose a different action at the moment of action, we are not responsible for actions motivated by our implicit biases.

Holroyd rejects the idea that moral responsibility depends on having some type of immediate control over our actions, pointing out that we have the capacity to take on projects such as gaining or losing weight, learning a foreign language, or learning to play the piano. These are not moral projects, yet they are projects for which it makes sense to say that the agent is responsible. We can praise or condemn the agent who succeeds or fails in these projects – even if it is not moral praise or condemnation.

There are similar long-term projects where the responsibility has a moral component. A physician has a moral responsibility to acquire a great deal of knowledge and understanding of medicine. An engineer must acquire an understanding of the relevant physical laws and their application. Neither the physician nor the engineer can suddenly choose to have the knowledge necessary for their profession. Acquiring it takes time and effort. Yet the physician and the engineer not only have an obligation to acquire that knowledge, but to keep up with new findings as they come out. This is a part of their moral responsibility.

More generally, we may say that people have a moral responsibility to put themselves in particular states – states that some may say have intrinsic merit, or which are socially useful. Similarly, we may say that moral obligation extends to avoiding being in states that motivate an agent to act in ways that are harmful or unjust. Having an implicit bias is such a state. Getting oneself out of such a state may not be doable in an instant, but that does not prevent doing so from being a moral responsibility.

Henry Sidgwick makes the same point in his discussion of free will. He also claims that it makes sense to morally evaluate a current action even if it is based on acquired traits. Similarly, we regularly hold that moral obligations may be attached to current actions that establish the traits that will govern our action in the future.

[E]ven as regards our own actions, however 'free' we feel ourselves at any moment, however unconstrained by present motives and circumstances and unfettered by the result of what we have previously been and felt, our volitional choice may appear: still, when it is once well past, and we survey it in the series of our actions, its relations of causation and resemblance to other parts of our life appear, and we naturally explain it as an effect of our nature, education, and circumstances. Nay we even apply the same conceptions to our future action, and the more, in proportion as our moral sentiments are developed: for with our sense of duty generally increases our sense of the duty of moral culture, and our desire of self-improvement: and the possibility of moral self-culture depends on the assumption that by a present volition we can determine to some extent our actions in the more or less remote future. (Henry Sidgwick, METHODS OF ETHICS, Book I, Chapter V, Section 2)

In short, "free will", insofar as it is necessary for moral responsibility, does not require that an agent be capable of doing something else at the very instant the action was done. It applies as well to the historic choice of actions that established the traits that determine present and future action. Somebody making a moral evaluation can legitimately acknowledge that the current action flowed from established traits of character, yet still condemn the agent (for example) on the grounds that "You failed to cultivate the traits of character you should have."

Desirism, with its concern for cultivating good desires and aversions, is quite comfortable with all of this - though it can actually take the argument one step further.

At this point, somebody may object that the motivation to cultivate a good character must itself come from somewhere. We must either assert that the agent had the free capacity to choose to acquire a good character, or hold that he first needed to acquire a character that would motivate him to become a person of good character. Rejecting free will, we are left with the second option. Yet the second option only pushes the problem back one more step – we must now assume a prior character that would motivate the agent to acquire the character that would, in turn, motivate him to become a person of good character. This is an infinite regress.

Desirism breaks this infinite regress by introducing reward, praise, condemnation, and punishment as the tools for molding character. These tools need not be set in motion by an act of will. They come from the reasons that agents have to promote certain desires and aversions in others.

Wednesday, October 19, 2016

The Pragmatic Error of Undefinable Terms

What definition can we give of 'ought', 'right', and other terms expressing the same fundamental notion? To this I should answer that the notion which these terms have in common is too elementary to admit of any formal definition. (Henry Sidgwick, Methods of Ethics, Book I, Chapter III, Section 3)

I object.

Language is a tool - an invention that we may design and build as suits our purpose. If a term does not admit of a formal definition, then this implies nothing more than that our language is poorly designed and constructed, and that some modifications are in order. It is within our power to introduce a new set of formal definitions - those that we think will suit our purpose - and to see how far those definitions can take us.

In many cases, going to the effort of introducing precise definitions for vague and ambiguous terms is not worthwhile. For example, I would not suggest that we standardize and specify a definition for the word “game” – because . . . why? It would take a great deal of work to standardize its use in the language as a whole, and there is no corresponding benefit to compensate for that work.

However, morality is an important subject. Lives hang in the balance. For practical reasons, where we do not have formal definitions of moral terms, we should adopt some.

It is important to note that adopting a set of formal definitions need not alter any of the conclusions that we adopt using those terms. We are choosing a language. A proposition that is true in one set of definitions would be just as true in another set, in the same way that a proposition that is true in English would be just as true in French.

But not all languages are equal. If they were, it would not be possible to improve a language – only to change it. And we do improve languages – such as when we introduce new terms that allow us to communicate efficiently about things.

In ETHICS: INVENTING RIGHT AND WRONG, J.L. Mackie brought up the fact that we can choose our definitions. If a definition we are using is flawed in some way, we can discard it and bring forth a new definition that suits our purposes.

Specifically, Mackie argued that moral terms contained, as part of their meaning, a claim that its objects of evaluation contained an element of intrinsic prescriptivity - an "ought-to-be-doneness" or "ought-not-to-be-doneness" built into them that provides the reason for doing or forbearing from that action.

These properties do not exist. Therefore, Mackie argued, we should remove this from the meanings of our moral terms, and continue to use those terms with only the remaining parts.

He explained this move by noting that this is exactly what scientists have done with the meaning of "atom". It used to mean "indivisible particle". When scientists discovered that the smallest bits of an element had parts, they dropped "indivisible" from the meaning of the term "atom". They continued to use the term to refer to the smallest parts of an element.

The names of the elements are another set of terms that scientists have redefined in the light of new knowledge. "Gold", "sulphur", and "carbon" all got new definitions based on the number of protons in their atomic nuclei. Water became "H2O" and table salt became "NaCl".

Indeed, the practice of redefining terms for pragmatic reasons is rampant in science. In 2006, Pluto ceased to be a planet because astronomers wanted to classify like things with like. The discovery of Pluto-like objects in the Kuiper Belt meant removing Pluto from the "planet" family and putting it in the "Kuiper Belt object" family.

If one had done a conceptual analysis of "planet", it would have shown that Pluto certainly was a planet – and it would have remained one if conceptual analysis had the deciding vote. Quite obviously, competent English speakers were calling Pluto a "planet" up until 2006. Students told to name the nine known planets would be marked wrong if they did not include Pluto on their list.

However, conceptual analysis did not have the final vote on this issue. The International Astronomical Union had the final vote, and it took that vote on August 24, 2006. That is how the issue was settled – by a vote of individuals considering the practical implications of the various options available.

Perhaps moral philosophers could profit from adopting the same sort of system. The profession could name an international body responsible for the definitions of terms, which could then establish a set of standards for those terms. As those definitions came into conflict with reality or became impractical, the body could convene to vote on modifying them in the light of new information. In the meantime, philosophers would have a standard set of definitions that they could use in their discussions.

There would still be room for debate and discussion – in the same way that astronomers debated and discussed the definition of the term "planet". People would support their candidate definitions – and object to conflicting proposals – on such grounds as whether they described differences that we could identify in the world, the degree to which they were consistent with common use or, instead, were likely to cause confusion and error, or even whether (as was argued in the case of Pluto) a change would upset schoolchildren.

Even without such a committee, the main point is that when a philosopher reaches a point where a term seems to "admit of no formal definition", we need not accept that verdict. The best option at that point may well be to simply stipulate a formal definition, and see how far one can take it.

If I could, some of the definitions that I would propose are these (a rough formalization of a few of them follows the list):

"Ought" (when applied to intentional action) = "Is such as to fulfill the desires in question."

"Practical Ought" = "The desires in question are those that the agent has or will have."

"Moral Ought" = "The desires in question are those that the agent should have, which are those that people generally practical-ought to promote or inhibit using the social tools of reward, praise, condemnation, and punishment."

"Has a Reason" = "An agent has a reason to perform act A if and only if doing A will serve one or more of the agent's desires." (A modified version of Bernard Williams' proposal.)

"There exists a reason" = "There exists a reason for an agent to do A if and only if there exists one or more desires that would be served by the agent's doing A".

"Obligation" = "An act is obligatory if and only if a person with good desires and lacking bad desires (see 'moral ought') would perform that action."

"Prohibition" = "An act is morally prohibited if and only if a person with good desires and lacking bad desires (see 'moral ought') would not perform that action."

"Non-Obligatory Permission" = "An act is permitted but not required if an agent with good desires or lacking bad desires might or might not perform the action - depending on other interests."

The ultimate point is that the failure of these definitions to conform to certain linguistic intuitions would not be an objection against them. These definitions may not conform to our sloppy and confused understanding of these terms - but we have no obligation to keep those sloppy and confused definitions. The point is to replace them with something that is useful.

Monday, October 17, 2016

Desirism: Talking Points

I had an opportunity over the weekend to address the question, "How would you describe and defend Desirism in quick and simple terms that would fit into the opening segment of a podcast?"

I began with the assumption that I was given an opening either by being asked about desirism directly, or asked about whether values are objective, whether there is an actual right or wrong, or whether I could defend a moral claim without reference to any type of deity. All of these can be turned into a version of the question, "Are moral values objective?"

I then wrote down some notes:

They start with, "That's a hard question to answer..."

I think a lot of people are caught in a false dichotomy. No matter how I answer the question, a lot of people are going to read things into the answer that aren't there.

If I say that I believe in objective moral value, a lot of people are going to assume that this means that I believe that when matter is organized in particular ways - such as can be described as an "act of charity" for example - that some type of special value property emerges - a type of "goodness" that humans can sense and, when we sense it, we are motivated to create more of it.

I do believe that moral values are objective - but I do not believe any of that stuff.

And if I say that I reject the idea of objective value, a lot of people are going to assume that this means that I hold that values are merely a matter of opinion. To say that slavery or genocide is wrong is simply to say that I don't like it and other people don't like it - but it is not really wrong. On this view, we could make slavery or genocide perfectly good - even admirable - institutions just by liking them.

I do believe that value depends on desire, but it is not as simple as saying that liking something makes it morally good.

I hold that values are real. However, they exist in the form of relationships between states of affairs and desires. But not simple relationships - not in the sense that just liking something makes it morally good. The relationships are more complex - and they go into answering the question of what we ought to desire, not what we desire in fact.

You cannot explain events in the real world without postulating beliefs and desires. Desires are real. The relationships between desires and states of affairs are real. And the complex relationships that go into asking and answering the question of "what ought we to desire" are real.

I have mentioned this story a few times. When I was 13 years old, I think, I put my hand on a hot metal plate. It wasn't glowing hot, and I did not think it was hot, but it was. Shortly thereafter, I had blisters growing on the palm of my hand - second degree burns.

That HURT!

Pain is real. And the awfulness of pain is real. Anybody who claims that the awfulness of pain is not a part of the real world - does not represent an objective fact - is simply wrong.

One could neither explain nor predict the events that occurred after I put my hand on that plate unless one included in that explanation the awfulness of pain.

Our own experiences of pain give us a real-world reason to arrange our environment in such a way as to reduce the chance that we will be put into a situation of experiencing pain. I have a reason to avoid my pain. You have a reason to avoid your pain. I would bet that almost all of your listeners have a reason to avoid their pain. These reasons exist as objective facts.

One of the ways in which I can reduce the chance of experiencing pain is by motivating others to avoid doing things that would result in my being in pain. You have reason to motivate others to avoid causing you pain, and the same is true of your listeners with respect to their pain.

There are three ways in which we can get people to avoid acting in ways that might cause us pain. The first two are commonly discussed. The third, I argue, does not get the attention it deserves.

(1) Deterrence

We can offer people rewards if they refrain from doing things that might cause pain, or we can threaten them with retaliation if they do things that might cause pain. Either way, we act on their existing desires - promising to fulfill an existing desire if they cooperate, or to thwart an existing desire (fulfill an existing aversion) if they should behave in ways that cause pain.

There are two main problems with this method.

First, we can only punish those whom we catch, so we provide no incentive to those who can cause pain without getting caught.

Second, we cannot always punish people even if we catch them. They may be too powerful, or they may have established systems and institutions that put them beyond the reach of those who would punish them.

These people have no incentive to avoid causing pain.

(2) Divine Retribution

Would it not be great if there were an all-knowing divine punisher who knew of every case when a person did things that caused pain and was powerful enough to punish them no matter what?

This would get us around the two problems we encountered above.

However, this method has problems of its own.

The first is that no such entity exists. The fact that it would be great if something existed does not make it real. It would be great if there were a fountain of youth that would return us - physically - to when we were 25 years old or so. That it would be great does not prove the existence of such a fountain.

The second is that we still have the problem of determining what to punish and what not to punish. Religions are invented by humans with limited intelligence and ulterior motives. We have suffered greatly from religions telling us to do things we ought not to do, telling us not to do things we really should be doing, and giving us permission to do things we ought not to do. They then take these fallible human inventions and claim that they are the word of an all-knowing, perfectly benevolent deity. We end up with fundamentally flawed moral systems literally carved in stone.

(3) Molding Desires

Here's the option that does not get the attention it deserves.

One way we can alter the behavior of people is by altering their desires - altering their likes and dislikes.

We can avoid pain by getting people to like to do things that tend not to cause pain to others - that even tend to prevent things that cause pain, and to dislike doing things that tend to cause pain.

How do we do that?

Rewards and punishments - including praise and condemnation - act on the reward centers of the brain and alter the likes and dislikes of individuals.

Consider, for example, the taste of coffee or beer. Coffee and beer taste terrible. Yet each contains a drug that acts on the reward centers of the brain. One of the effects of these drugs is that they alter our reaction to the taste of coffee and beer. After a while, they start to taste good (at least for those people who drink coffee and beer - or so I have been told, since I am not a fan of either). I am always told that these are "acquired tastes", and we know how they are acquired.

Morality is an acquired taste.

We reward people who are honest and condemn those who are dishonest as a way of promoting a social "acquired taste" for honesty. We praise those who pay their debts and condemn/punish those who do not pay their debts as a way of developing an "acquired taste" for paying debts.

Once a person has acquired a taste for honesty, for repaying debts, for keeping promises, for acting charitably, and the like, then we can trust that they will act morally even under circumstances where they will not get caught.

We do not need to keep watching over the shoulder of somebody who likes coffee to make sure that he drinks coffee. If he likes it, he will choose to drink it on his own, whenever he has an opportunity to do so and other circumstances permit it. Likewise, we do not need to constantly look over the shoulder of a person with an "acquired taste" for honesty - he will be honest whenever he has an opportunity to be (except, perhaps, when he thinks that honesty may do great harm).

A person in power - who has acquired a taste for honesty, for charity, for keeping promises, for repaying debts - has a reason to go on being honest, charitable, and trustworthy in the same way that he has a reason to continue to drink coffee or beer or to do any of the other things he has acquired an interest in.

The one difference, of course, between coffee and honesty is that people generally have no reason to reward or praise the person who likes coffee or to condemn or punish the person who dislikes coffee. However, people generally have many and strong reasons to praise and reward those who are honest, and condemn and punish those who are dishonest. This is why drinking coffee is a matter of personal preference, and honesty is a moral virtue.

This is where morality comes from - without God. It comes from the many and strong reasons that people have to promote an acquired taste for certain types of activities - or an acquired distaste for activities that are generally harmful to others. We promote these acquired tastes using the tools of reward, praise, punishment, and condemnation. This is why these tools - praise and reward on the one hand, and punishment and condemnation on the other - are so central to the institution of morality. Morality is concerned with molding desires, and these are the tools for molding desires.

Comments on Implicit Biases and the 2016 Election

315 days until the start of classes.

Yesterday, I was interviewed by Atheistically Speaking on my views regarding the 2016 election as expressed in A Moral Philosopher's View of the 2016 Presidential Election.

I am a very harsh critic of myself, and I seldom give an interview, or even write a post, that I find less than cringeworthy. This is a large reason why I do a fantastically poor job of self-promotion. I need to be more like Donald Trump and become so convinced of my absolute brilliance that I can more easily step out and enlighten the world.

Anyway, as is always the case in things like this, when it is done, I think of things I should have said, should not have said, should have said differently, and could not have said because of limitations of time.

Clinton's Honesty

At the top of the list, I wish I could have said more in defense of my claim that a belief in Clinton's dishonesty and corruption is so contrary to reason that we must look for something other than "the available evidence" to explain the source of this belief.

I suspect that there will be many who hold that this claim is so absurd as to eliminate all my credibility. I can also imagine some reacting in anger at being accused of holding beliefs that they did not draw from evidence.

I did not give enough attention to "fallible but fast" sources of belief. A lot of people claim that Clinton is dishonest and corrupt - and "what a lot of people believe" is one of those fallible but fast heuristics that we draw upon. Yet one should not give a lot of weight to the results of such heuristics - because there is that "fallible" part.

As I mentioned, Clinton comes out as one of the ten most honest politicians according to Politifact (reference). One does not get a score like that by accident.

Her Foundation has a AAAA rating and gives no money to the Clintons.

We have her tax records over the past 30 plus years and we know where her money comes from. Most of it comes from books and speaking engagements.

As far as the speaking fees go, to understand that business it is useful to think of acting. You want to become an actor. You get an agent. The agent tries to book you at the highest possible fee. (They live off of a percentage.) If you can get a following, your agent can get more money. A few top actors can get $20 million per movie, and the actor STILL has a choice as to which scripts to accept.

In the same way that some top actors can get $20 million or more per movie, some top in-demand speakers can get $250,000 or more per appearance.

To think that this money had bribe potential, one has to think that the speaker needs the money and has no other way to get it. In the case of Clinton, these assumptions are false. Even at this rate, she can only accept a fraction of the speaking opportunities available to her.

In addition, we now have a huge number of emails - collected by Russia (see "Why Experts Are Sure Russia Hacked the DNC Emails") in an attempt to disrupt the American election and manipulate its voters (according to US intelligence sources, who express confidence in this result). These emails were not filtered by lawyers before being released. Yet, still, we are not seeing evidence of anything other than an uncorrupted political candidate working to win an election.

So, yes, I think that one can well substantiate the claim that the belief that Clinton is dishonest or corrupt cannot be built on available evidence. It has to have its foundation in "something else".

The Ethics of Implicit Bias

I stated that implicit biases are common and gave a recent story that exposed one of my own biases.

I am worried that my comments may be taken as excusing such behavior, as saying, "It's okay. Everybody does it."

The fact is, these biases hurt people. They cause people to be treated unjustly and, in the cases of some unarmed black men mentioned in recent news reports, implicit biases get innocent people killed.

It is NOT okay.

Implicit biases are learned. In fact, some people have not learned them and show no signs of implicit bias when tested.

Given the injustices that spring from implicit biases, there is a moral obligation to try to unlearn them. As "mental habits", they cannot simply be turned off. However, as with any habit, we can put effort into unlearning a bad habit (training ourselves to notice when the bad habit is manifesting itself and forcing ourselves to stop) and replacing it with better habits.

In the meantime, insofar as one has implicit biases, and insofar as they result in unjust action, a person with implicit biases should avoid putting himself in a position where those biases pose a threat to others. As an extreme example, cops should be tested for implicit biases and, where they fail those tests, be removed from positions where those biases may bring harm to innocent people - until, through training, they can show either that they have removed those implicit biases or that those biases no longer put other people at risk.

The same argument applies to people in a position where they hire and fire or otherwise evaluate others.

How we are going to handle implicit biases among those in charge of voting is certainly a difficult challenge.

Since biases are learned, one of the ways in which we can deal with implicit biases is to take steps to make sure that we do not teach them to the next generation. That does not solve the current problem, but it does help to reduce the amount of future injustice.

Third Parties

I argued that - in our political system - voting or supporting a third party in a close election counts as giving a political advantage to the major party that least represents one's views.

This situation was contrasted with voting for a third party in a race that is not close - where one's vote is not going to determine the outcome anyway.

I did not think to mention at the time that I have discussed this situation in my blog. What I argue is that, where one party has a lock on the current offices, the right to representation implies a right to join that party and to exercise one's power in influencing whom that party selects as its candidate. The only other option is, effectively, to decide to have no voice in selecting the winning candidate and to render oneself politically impotent.

In other words, even in a district where one party will clearly win the election, in our electoral system, one should not support a third party candidate. One should not even support a second party candidate if that party is too small to win an election in that district. One should join the only governing party in that district and help in the selection of the person who will actually be representing that district in government.

Conclusion

I will link to the podcast when it is delivered. I am hoping that I can append these remarks as the first comment to that podcast episode when it is posted.

Saturday, October 15, 2016

Taxing the Rich

317 days until classes start.

The Spring 2017 classes were posted today. I made a tentative schedule just to see what my days will be like when I return to school.

They will be very busy. Though I will not be taking classes until the fall, I used this information to create a hypothetical schedule. Up and out of bed at 3:45 to visit the gym, then work and classes, with spare time on the bus spent commuting, reading, and writing. My days will be booked until 9:00 PM. There will be no real time for writing until the weekends, so that is when I will have to get my blog postings done.

I have finished William James' Pragmatism, and tried to get started again on Henry Sidgwick's Methods of Ethics.

However, I have been asked to consent to be interviewed for the podcast Atheistically Speaking. They wanted to talk about my posting on A Moral Philosopher's View of the 2016 Presidential Election.

I have been told that, "We wanted to discuss the idea of morality as it relates to the election and why people seem to be unable to be objective when assessing morality when it comes to political candidates."

Well, given the request for the interview, I have been going over my arguments and my evidence to make sure that I can back up what I said.

Meanwhile, from the London School of Economics, I listened to a lecture on Everyday Sexism and heard about what a lot of women (and girls) have to put up with. I would have thought that men would have learned better by now. Why is it so difficult to learn to treat women with decent respect? Behavior like Trump's is far too common. Anyway, the lecture included some points that will be relevant to discussing attitudes that may be responsible for people seeing Clinton as "untrustworthy" and "dishonest" contrary to all available evidence.

In other news, I also listened to a podcast from the London School of Economics on "Taxing the Rich: a history of fiscal fairness in the United States and Europe".

This proved interesting because it looked at the types of arguments that people have accepted as good reasons to tax the rich over the past 150 years or so. The researchers took speeches and other texts from the period, coded them, and then looked for key words and phrases that could be matched up with changes in the tax code - to determine which arguments might have been responsible for those changes. (A toy illustration of this kind of text coding follows.)
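
To make the method concrete, here is a minimal sketch of what coding speeches for argument types might look like. This is my own toy illustration - the argument categories, phrase lists, and sample speech are invented for the example, and the actual study's coding scheme was certainly far more careful:

```python
# Toy sketch: coding texts for fairness-related tax arguments.
# The argument categories and phrase lists below are hypothetical.
from collections import Counter

ARGUMENT_PHRASES = {
    "conscription_of_wealth": ["conscript capital", "conscription of wealth", "equal sacrifice"],
    "flat_rate_fairness": ["same percentage", "flat rate", "equal treatment"],
}

def code_text(text: str) -> Counter:
    """Count how often each argument category's phrases appear in a text."""
    text = text.lower()
    return Counter({
        category: sum(text.count(phrase) for phrase in phrases)
        for category, phrases in ARGUMENT_PHRASES.items()
    })

# Hypothetical 1918-style speech fragment:
speech = "We may conscript capital just as we conscript men - equal sacrifice for all."
print(code_text(speech))
# Counter({'conscription_of_wealth': 2, 'flat_rate_fairness': 0})
```

Counts like these, tallied per year, could then be set beside top tax rates to see which argument types rise and fall with the rates - which, as I understand it, is the spirit of what the researchers did.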

The higher tax rates that we saw a few decades ago for the very rich turned out to be grounded in a type of fairness argument focused on war. Effectively, the argument that worked stated that, in the same way that it was permissible to conscript labor (soldiers) to fight a war through a draft, it was also legitimate to conscript capital (money) to fight the war and, in doing so, to tax those who had the most money. The ranks of the soldiers, who were putting their lives at risk, were made up of young people from poor and middle-class families. It was considered unfair that wealthy older people got to stay home with their feet up, having their property protected, without making a comparable sacrifice. Consequently, their wealth was taxed.

Since the 1970s, without a war argument to support taxing the wealthy, tax rates on the wealthy have gone down. Most people, according to the research discussed in this presentation, favor a "fairness" argument in determining what a just tax rate is. And what people seem to consider fair is a tax rate where all individuals pay approximately the same percentage - a "flat rate" tax.

The authors looked at a number of different arguments offered for and against higher tax rates, as well as other possible causes. For example, one of the hypotheses they examined was whether the recent lowering of tax rates can be explained by the "capture" of the political system by those with a great deal of wealth. On this hypothesis, the cause of the change was not the fact that a flat tax rate "seems fair" to the bulk of the population, but that wealthier people were able to use their wealth to capture the political apparatus and get tax rates changed in their favor.

The "capture" argument, they said, did not adequately explain the relevant data. The tax rates went down across countries regardless of any individual differences in terms of capture by the rich. Whatever the cause of these reductions in tax rates were, they were constant across countries. And what was constant across countries was the fact that they were no longer paying off a war. Consequently, the "conscript the wealth" argument had simply become less effective.

One argument that Professor David Stasavage suggested would be effective in getting the wealthy to pay more taxes is to point out that the very wealthy actually pay a lower percentage of their income than those who make less. Tax write-offs, loopholes, and the ability to hire expert tax accountants mean that the very wealthy pay a lower percentage on their millions of dollars than the rest of us pay on our thousands of dollars. Consequently, a fairness argument should be useful for making the case that the very wealthy ought at least to pay the same percentage as middle-income earners. However, getting the wealthy to pay a higher percentage would be an uphill struggle, given what most people see as "fair".

Stasavage did not, at least in this presentation, consider the question of why people feel that a flat-rate tax is fair. I would like to see some investigation into the hypothesis that they perceive it as fair because political factions who favor the wealthy have been able to spend a great deal of money arguing that it would be unfair for the wealthy to pay a higher percentage. In other words, the wealthy have been able to use their excess wealth to get people as a whole to adopt an attitude that favors them. If this is the case, then one possible road to take would be to get people to see a progressive tax as being more fair.

Which it is. The wealthy person for whom taxes might take $5 million out of $10 million is left with $5 million, which can purchase a great deal. This person does not suffer in the slightest in terms of health care, securing his retirement, taking care of his children, or worrying about what might happen if he loses his job or suffers some other setback. However, the person who is knocked back from $50,000 to $40,000 (a 20% reduction rather than a 50% reduction) still loses a great deal more in terms of potential risks to his well-being and livelihood, as well as to those of his family. I think it is reasonable to hold that it is better to take $5 million from one person with $10 million than to take $10,000 each from 500 people with $50,000 each. We have more and stronger reason to take more money from those who have a great deal of wealth than from those who are just getting by.
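
To make the arithmetic in that comparison explicit (using only the numbers already given above):

```latex
% Both schemes raise the same revenue:
\[
\$5{,}000{,}000 \;=\; 500 \times \$10{,}000
\]
% But the rates, and the burdens, differ sharply:
\[
\text{the wealthy person: } \frac{\$5{,}000{,}000}{\$10{,}000{,}000} = 50\%,
\qquad
\text{each middle-income person: } \frac{\$10{,}000}{\$50{,}000} = 20\%
\]
```

The same $5 million, in other words, can be raised either way; the claim is that taking it from the one person with $10 million - even at a much higher percentage - sets back far fewer interests than taking it in $10,000 slices from 500 households.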