Friday, August 31, 2018

History 001: Augustine: On Christian Doctrine

I probably would not have taken a history of philosophy class if it were not a core requirement for my degree.

However, I do see the value in it.

There is, of course, the classic claim that we may be able to "mine" the ancient material for insights that may be useful today. That could be true. It is also possible to mine other contemporary cultures or sources such as works of fiction or nature itself.

However, it has a more direct benefit. It provides information on how people think - how the human brain works. It is, in this sense, actual data.

It is all too easy to think that the ways in which our brains work today - the ways we put things together and conceive of them - are "the way" the human brain thinks of things. An examination of other cultures - past and present - tells us that this is not the case. It provides data that any theory of how the brain works must take into consideration.

And a moral theory is a theory about how the brain works. At least, I hold that no worthwhile moral theory can be divorced from this subject.

Our first reading from the course is St. Augustine, "On Christian Doctrine." Or, at least, the first two parts of it.

The first interesting thing to note is that this book reads like a modern "Introduction to Philosophy" textbook with smatterings of theology put in for flavoring. One can ignore the theological smatterings and see a teacher guiding his students through preliminary lessons in the philosophy of language (signs), logic (premises, conclusions, and the relationships between them), and ethics (the means/ends distinction).

Of course, I am interested in the ethics . . . which shows up in Augustine in a discussion on the distinction between means and ends.

Augustine first divides the world into things and signs.

Signs, he says, are those things that signify other things. Smoke signifies fire, in that if one sees smoke one can infer that there is a fire. Words are signs made out of things that are seen (read) and heard.

But we are going to talk about things.

Things, according to Augustine, are further divided up into "things that are meant to be enjoyed" and "things that are meant to be used".

This is the classic means-ends distinction, which is very important in desirism. When talking about a desire that P, P is that which is enjoyed, and anything that can contribute to bringing about or maintaining P has use value.

Another difference is that, in desirism, P has value for its own sake, not for "being enjoyed". In fact, to describe something as something "to be enjoyed" seems to describe it as having value as a means to bringing about an experience of enjoyment, not truly as an end.

Plus, this "meant to be" phrase has some ambiguity. Exactly how is something "meant to be" used or enjoyed? This must be taken seriously, and not just as a loose way of speaking about means and ends.

So if we wish to enjoy things that are meant to be used, we are impeding our own progress, and sometimes are also deflected from our course, because we are thereby delayed in obtaining what we should be enjoying, or turned back from it altogether, blocked by our love for inferior things.

We can make sense of this, in part, in recognizing that something can have a use value that somebody is not aware of. That is to say, it may be the case that Q is useful in bringing about P, even though the agent with the desire that P does not know this and, as a result, is hindered in his ability to bring about P. So, all seems fine so far.

Now, about this "enjoyment".

The first note that I want to make is that Augustine seems to make a claim very much like one later found in John Stuart Mill. This concerns the relationship between things having value for their own sake and happiness being the only end. Mill resolves this conflict by saying that these things that are valued for their own sake are "a part of happiness". Augustine states that enjoyment consists in valuing something for its own sake. "Enjoyment, after all, consists in clinging to something lovingly for its own sake."

And, second, while we can express valuing something for its own sake in terms of a desire that P, Augustine also states that there are some things that deserve to be valued and some that do not. He uses this distinction to make a related distinction between real happiness and what we might call fake happiness. Real happiness is the valuing of something - a desire that P - where that something (P) is something that one ought to be enjoying. Valuing as an end that which should serve only as a means is a perversion. Enjoying the journey, and thus failing to reach a destination efficiently, is wrong.

Of course, the only thing that ought to be enjoyed is God. Only the enjoyment of God can bring real happiness. I suppose that God is the one and only "P" for a legitimate desire that P. It is the one legitimate end - all else being just means.

Now, what about all of these other things people desire? What about . . . other people? Are other people just to be used?

Well, Augustine tells us that God alone is to be enjoyed. This suggests that everything that is not God - including people - is there only to be used.

We have been commanded, after all, to love one another; but the question is whether human beings are to be loved by human beings for their own sake, or for the sake of something else. If it is for their own sake, then they are things for us to enjoy; if for the sake of something else, they are for us to use. Now it seems to me that they are to be loved for the sake of something else.

However, we also must note Augustine's claim that there is a proper use and an improper use to be made of all things, and that an improper use will hinder us in obtaining what we ultimately desire - what deserves to be desired.

So, even though other people are to be used, we still must answer the question, "what is the proper way to use other people?"

There is no special reason to focus on how we ought to properly use "other people". If the only proper end is God, then this claim that all other things are to be used applies to oneself as well.

So if you ought not to love yourself for your own sake, but for the sake of the one to whom your love is most rightly directed as its end, other people must not take offense if you also love them for God’s sake and not their own.

The place to start is to look at how we ought to properly use ourselves. According to Christian doctrine, we should love others as ourselves. So, we should use others as we would use ourselves in obtaining that which has ultimate value.

So, even though other people are to be used, the proper use of other people is not to cause them harm, to enslave them, or to visit misery and suffering upon them. The proper way to use them is the same as the proper way to use oneself.

Though, actually, even this is not entirely accurate. One is not to use others as one would use oneself, but to use others as one should use oneself - because there are also proper and improper ways to use oneself.

The proper way to love oneself is in a way that is conducive to devoting all of one's thoughts and one's whole life to God. So, the proper way to love others is in a way that is conducive to their devoting the whole of their thoughts and their lives to God.

At this point, we come to a fork in the road. Though this is a matter of controversy, I hold that no god exists. The two prongs of this fork, then, are to (1) continue the analysis of Augustine's view with the love of God as the sole proper end, or (2) look for a substitute for God that does exist and continue the analysis in light of that substitute.

I wish to pursue option (2).

Augustine provides an alternative in that he has divided things into things to be used and things to be enjoyed. We may see how far we can go taking enjoyment, rather than God, as the proper end. Enjoyment itself may be equated with something like pleasure and the absence of pain, happiness, or eudaimonia - in any case, it seems to be something real, and something that can at least plausibly serve as a proper end.

If we make this adjustment, then the proper way to use oneself - and to use others - is in a way that is conducive to enjoyment. The proper way to use others is in a way conducive to their enjoyment. Though this does raise a question. Should it be that my own enjoyment is my proper end, which means that I should use others in a way that is conducive to my own enjoyment? Or is "enjoyment" a proper end, which means that I should use myself and others in ways that are conducive to enjoyment over all?

We could have asked this question about love of God. Should it be the case that I use myself and others in a way conducive to my own love of God? Or should I use myself and others in a way conducive to the love of God generally? Augustine would have used the commandment to love others as oneself to block the first option and aim for the second. Unfortunately, when we got rid of God to take this fork, we also got rid of the foundation for Augustine's argument for treating other people's enjoyment as equivalent to our own. That, now, will also require a different justification.

There is still another move available - a principle of equality.

So now, as there are four kinds of things to be loved: one which is above us, the second which we are ourselves, the third which is on a level with us, the fourth which is beneath us, about the second and the fourth there was no need to give any commandments.

To use others as a means to our own enjoyment would, then, amount to treating that which is equal to us as if it were below us. That may be thought to be objectionable, and in fact it is. If we look at these enjoyments objectively, there is no reason why the enjoyment of one person should have more value than the enjoyment of another. There is no reason why another person should be sacrificed for one's own enjoyment, any more than one's own enjoyment should be sacrificed for another's. This, then, gives us a reason to use self and others for the sake of all enjoyment equally, and not just for one's own enjoyment.

Yet, this still leaves open the question of what to do about special relationships - the relationships that one has with friends and family. Augustine has an answer for this:

All people are to be loved equally; but since you cannot be of service to everyone, you have to take greater care of those who are more closely joined to you by a turn, so to say, of fortune’s wheel, whether by occasion of place or time, or any other such circumstance.

This, then, leads to the fact that there is nobody closer to me than me. This permission to give priority to those who are near to me would seem, ultimately, to imply that I may make myself the first among equals. Augustine says nothing about this. There is nothing on the surface that suggests whether this is right or wrong - but it is an implication of his account as we have followed it so far.

Alternatively, Augustine provides us with an argument that, even if we are to use somebody else, and we are not to use them for the sake of a God that does not exist, using them properly involves using them for the sake of the good.

We may be able to draw this from what Augustine says about the way in which God uses us, and apply it to a theory about how we ought to use each other. God does use us, according to Augustine, for only God is the proper object of love for its own sake, and all other things are to be loved as means. However, God uses us in service to His goodness. To use us in service to His goodness is to benefit us. And so it may be said (though clearly this falls short of an implication) that for us to use another person in Augustine's sense is to use them in the service of goodness as well, which we do by benefiting them.

The only change we are making here is that something is used not for the sake of His goodness but for the sake of goodness itself. To use somebody for the sake of goodness is to benefit that person.

Even without this, we still have the point, mentioned earlier, that to use another person properly is to use them in service to the good, and to use another person in service to the good is to benefit them. So, even though other people are not ends, to behave towards them in service to the proper end is to benefit them.

At this point, I would raise the objection that there is no intrinsic value - no "proper end" that has this property in virtue of its intrinsic properties. There is nothing that is "good in itself"; things are good only insofar as they can serve a desire. But, then, the purpose of this exercise is not to derive something true in ethics. It is to understand how other people might view things - putting their positions in the best possible light. And this is a fairly strong view of ethics.

Thursday, August 30, 2018

Testimony 011: Fricker on Trustworthiness

Gad, I hate it when I write a wonderful post and then lose it due to my own carelessness.

I always try to recreate the beautiful text, stunning illustrations, and air-tight arguments that I know were in the original version of the post. But, alas, I cannot do so. It is a frustration to even try. The right and proper thing to do is to start over and to hope that I can create something half as good as I remember. It is a shame, indeed, that the world may have lost such brilliant prose as was contained in that original posting. It certainly is a great loss to humanity. However, there is no god, and the fates are unkind.

The topic of discussion is Elizabeth Fricker's concept of "Trustworthiness" and its role in whether or not one is justified to believe that P on the basis that somebody says that P.

So, for illustrative purposes, let us assume that I have told you that I have read the following article:

Fricker, Elizabeth (1994). "Against Gullibility," in B.K. Matilal and A. Chakrabarti (eds.), Knowing from Words, pp. 125-161, Kluwer Academic Publishers.

Are you justified in having a belief that I have read this paper on the basis of my telling you?

I sincerely doubt that you have anything else to base that belief on. I don't think that you are looking over my shoulder, or recording the keystrokes on my computer.

*waves to the FBI agents*

Consequently, if you do now believe that I have read this paper, then you believe it solely on the basis that I have told it to you. Are you justified in having that belief?

So, according to Fricker, you have knowledge of the following:

(1) Agent testifies that P

That is to say, you look at this blog post and you can see that I have said that I have read Fricker's paper.

And we want to get to:

(C) P

That is, your justified belief that I have read Fricker's paper.

For Fricker, this requires what we might call a bridge thesis - another claim of the form:

(2) If Agent testifies that P, then P.
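The inference from (1) and (2) to (C) is just modus ponens. A minimal sketch in Lean (the proposition names are my own, not Fricker's: `Testifies` stands in for "Agent testifies that P"):

```lean
-- Hypothetical propositions: `Testifies` for "Agent testifies that P",
-- and `P` for the content of the testimony.
example (Testifies P : Prop)
    (h1 : Testifies)        -- (1) Agent testifies that P
    (h2 : Testifies → P) :  -- (2) the bridge thesis
    P :=                    -- (C) P
  h2 h1
```

The philosophical work, of course, is entirely in whether the hearer is entitled to the bridge premise (2).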

Specifically, Fricker phrases it as follows:

Trus1: If S were to assert that P on O, then it would be the case that P

Recall, the thesis that I am interested in is whether epistemic justification is a form of moral justification; whether H is epistemically justified in believing that P on the basis of S's testimony can be expressed in terms of H being morally justified in believing that P on the basis of S's testimony.

And, of course, when we throw desirism into the mix, H is morally justified in believing that P on the basis of S's testimony if and only if a person with good desires and lacking bad desires would have believed P on the basis of S's testimony in those circumstances.

On this measure, it is interesting to note that what Fricker is arguing about is what may well be considered a moral principle:

The thesis under debate is:

[Presumptive Right] Thesis: On any occasion of testimony, the hearer has the epistemic right to assume, without evidence, that the speaker is trustworthy, i.e., that what she says will be true, unless there are special circumstances which defeat the presumption. (Thus, she has the epistemic right to believe the speaker's assertion, unless such defeating conditions obtain.)

Note that this follows the same model as the presumption of innocence in criminal trials. The accused is presumed innocent unless proven guilty. The testifier is presumed to be telling the truth unless proven otherwise. A difference between the two is that defeating the former requires proof beyond a reasonable doubt, while defeating the latter requires only some evidence. However, the structure of the two principles is the same. And, whereas the first is a moral principle, the second may be as well.

Now, it is important to note that Fricker is arguing against this thesis. Yet, the fact that she argues against the thesis does not show that it is not a moral principle. Moral philosophers argue for and against the existence of particular moral principles all the time.

Tuesday, August 28, 2018

A Synthesis of Right and Left: Markets and Wealth Redistribution

I'm putting a copy of something I posted on social media here for safe keeping.

Recently, I sought to introduce some samples of conservative thinking to a community that would rather insult and denigrate than listen.

Let me tell you what I really think.

Here is an issue where traditional Republicans and Democrats each have a portion of the truth, and each mix it with a measure of fiction or fantasy. This is not a compromise "centrist" position. It may, if adopted, represent a compromise between traditional Republicans and Democrats, but it is no compromise with the truth.

From the right:

Price is the best mechanism for efficiently allocating goods and services. It works a whole lot better than thousands of words of regulations written by arrogant people who think that they have the wisdom to control complex systems.

Institutions of control and regulation inevitably attract those who will seek to use that power for their own benefit. Plus, this is easy to do - since the average voter cannot even be bothered to know their representative's name, let alone the rules embedded in a complex regulatory framework. Bureaucracies are slow to respond to change, and often stifle change since change disrupts the system that the regulatory framework has established. Not only do they fail to innovate, they often prohibit innovation as a way of protecting the power and authority of those who have wormed their way into the regulatory system.

Price, on the other hand, has the power to respond instantly to change, and to respond in ways that conform to people's interests. If a particular commodity becomes scarce, its price goes up, telling people to immediately give up their more frivolous uses of that commodity so as to preserve it for more important uses. Price not only encourages innovation, but encourages it where it is needed most, since that is the innovation that will bring the greatest reward. It will seek replacements for, or additional ways to obtain, those things that (currently) have the most highly valued uses.

From the left:

The power of price to efficiently allocate resources is inversely proportional to the magnitude of wealth inequality in a society. Wealth inequality is what allows those with a lot of money to "bid" goods and services away from those who need them to sustain a minimum quality of life, so that the wealthy can use them for trivial purposes - entertainment, or simply waste, when it is too much effort to conserve.

Imagine that food becomes scarce. Under a price system, the price goes up. This immediately signals people to quit using food for trivial purposes and to conserve it for its more valued uses. However, where there is wealth inequality, the poor are forced to give up food even for its most valued use (sustaining life) well before the wealthy have to consider giving up its most trivial uses (decoration, avoidable waste).

Synthesis

So, the wise position is price allocation with wealth redistribution. This position rejects the practice of setting up massive bureaucracies that, at worst, attract malevolent people who will seek to use their power to their own ends and, at best, put decision making in the hands of arrogant bureaucrats who think they have the capacity to control what they (as humans) lack the ability to understand. Instead, it lets the market do what the market does best.

However, we combine this with a system of wealth redistribution - high taxes on the wealthy and some way to give it to those in need.

Some on the conservative side say that this will ruin innovation and result in there being no wealth to redistribute - because why would somebody do anything that makes money when that money is simply taken and handed to somebody else?

Well, we know that there are a lot of innovators who like to accumulate wealth precisely because they can do some good with it. They put their wealth to good use - helping those who need it most. Warren Buffett and Bill and Melinda Gates provide good examples.

If such a system removes the incentives to innovate from those who only want to create a corporate fiefdom where they can act as corporate dukes and duchesses ruling over peasant serfs and servants - that is not such a bad thing. In fact, I see it as a plus.

Leave the innovating to those with a desire to do good.

One possible example would be an estate tax that leaves a person a moderate amount to take care of their own concerns (say, $10 million) and requires that the rest go either (1) to a charity that serves those in the greatest need, or (2) to the government as an estate tax.

I would rather that people selected option (1). Personally, if I had $100 billion, I would give it to the Bill and Melinda Gates Foundation long before I would approve of the government "taxing" it away so that legislators can hand it out to those who give them the most campaign contributions.
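As a rough sketch of the arithmetic behind this proposal (the $10 million exemption is the figure suggested above; the function name and interface are my own invention):

```python
def estate_split(estate, exemption=10_000_000):
    """Split an estate under the proposed scheme: the heirs keep up to
    the exemption; everything above it is redistributed, going either
    to charity or to the government as an estate tax."""
    kept = min(estate, exemption)
    redistributed = max(estate - exemption, 0)
    return kept, redistributed

# A $100 billion estate: the heirs keep $10 million, and the
# remaining $99.99 billion is redistributed.
print(estate_split(100_000_000_000))
```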

And that is what I really think.

Epistemic Responsibility 010: Gullibility (Elizabeth Fricker)

School has started, and an initial essay on testimony is perfect for the theme of this series.

Fricker, Elizabeth (1994). "Against Gullibility," in B.K. Matilal and A. Chakrabarti (eds.), Knowing from Words, pp. 125-161, Kluwer Academic Publishers.

This article is difficult to understand. Fricker loaded the front 65% of the article with preparation work that I found difficult to follow without a hint as to what she was preparing for. It was like looking at the blueprints for a foundation without any idea of what was going to be built on it.

What she ultimately built on her foundation is the idea that listeners had an obligation to assess the trustworthiness of testifiers. More specifically, if one hears testimony and acquires a belief as a result, this belief is not justified (that is, it fails to count as knowledge) unless the listener has done due diligence in determining whether or not the speaker can be trusted.

Recall that the thesis that I am currently finding attractive says that the justification with respect to a justified belief is a moral justification. An agent's belief that p is epistemically justified if and only if it is morally justified.

I will grant at the start that I am proposing this only tentatively at the moment. I suspect there may be some serious problems with it. However, I have not found them yet.

Put in terms of desirism, this principle would state that a belief is justified if and only if a person with good desires and lacking bad desires would have acquired that belief.

Fricker's account fits this model. A person with good desires and lacking bad desires would not accept testimony blindly. Moral responsibility requires that she take a look at the testimony and assess its likely quality.

In earlier postings in this series, I looked at the case of a person who glances at a clock and determines that the time is 11:56. As it turns out, the clock is stopped. However, our agent happened to look at the clock at 11:56 and, on the basis of the clock's testimony, adopted the belief that the time was 11:56. This is a true belief. It is justified by looking at the clock. Therefore, according to the standard view, the agent knows that it is 11:56, even though he acquired the belief by looking at a stopped clock.

I objected to that account on the grounds that the belief was not, in fact, justified. I illustrated this by increasing what was at stake. In my case, the agent was required to perform an action at 12:04 to avoid dire consequences. Such an agent, I argued, had an obligation to determine whether the timepiece was reliable, if he had an opportunity to do so. These standards turned out to be exactly the standards that Fricker defends in her article. They amount to an obligation to look for defeaters - evidence that the testifier (in this case, the clock) cannot be trusted. An agent who does not do this cannot be said to have a justified belief.

It is certainly the case that his claim, "I knew that it was 11:56" would not hold up in a court of law against a charge of negligence. And, yet, if he truly did know, then it should be able to hold up in a court of law.

So, I think I can rely on Fricker's article as providing support to my thesis. Though, now that I have gotten to the end and I know what she was arguing for, I will need to return to the beginning to give an intelligent assessment to the foundation on which she built this conclusion.

Sunday, August 26, 2018

Epistemic Responsibility 009: The Beliefs of Animals

I have renamed this series "Epistemic Responsibility" because that is what I wish to write about.

It is about what it takes for a person to be a morally responsible agent with respect to his or her beliefs. It is about when beliefs are justified in the sense that a person cannot be morally blamed for having that belief, and when a belief is unjustified and moral condemnation is appropriate.

Note that, in the case of belief, I am justifying condemnation, not punishment. There are reasons not to allow punishment for having unjustified beliefs. This ties in to issues such as freedom of speech and freedom of the press. "Unjustified belief" itself ought not to be a crime. However, epistemic recklessness that results in harm to others can be legitimately punished. The person who recklessly points a gun at a person and pulls the trigger, recklessly believing that the gun is not loaded, is guilty of endangerment and may be punished.

So, in reading the Stanford Encyclopedia of Philosophy entry on epistemology, I see this:

Why think that justification is external? To begin with, externalists about justification would point to the fact that animals and small children have knowledge and thus have justified beliefs. But their beliefs can't be justified in the way evidentialists conceive of justification. Therefore, we must conclude that the justification their beliefs enjoy is external: resulting not from the possession of evidence but from origination in reliable processes.

Gad, I do hate philosophical jargon. Let's put this into English.

This is an argument for the thesis that two people can have the same evidence available, yet one person can be justified and the other not. It is a threat to the idea that a justified belief is a morally responsible belief, because a person cannot be morally responsible for something they cannot be aware of. The reason it is a threat is that animals and small children can have justified beliefs, yet they lack moral responsibility. They simply do not have the capacities to form morally responsible beliefs. Therefore, a justified belief cannot be a morally responsible belief.

That's it. I'm done. Time to go home.

Well, actually, I need to remember why I chose to go down this road (justified belief = morally responsible belief) in the first place. It is because of an assumption that, in the English of the common person (not the jargon of philosophers), "justification" is a term of praise (where "unjustified" is a term of condemnation) that we employ to produce good dispositions and habits. In the case of belief, the concept of "justification" aims to produce good doxastic habits. That's philosophereese for "good habits with respect to belief".

It seems that, if I am going to make sense of this part of our world, I am going to have to say that the beliefs of animals and small children are justified, even though animals and small children lack the capacity to act as morally responsible agents.

Let's look on the "moral responsibility" side of this problem.

In morality, we divide things into "permissible" and "impermissible". This is the distinction that I am probably going to want to map to the distinction between "justified" and "unjustified".

Actually, I think this may be clearer if I use a closely related set of terms, "just" and "unjust".

"Believing that p" is not an act. It is a state of affairs. Having a belief that p is like having a scar on one's thumb. It is a state one is in, not something that one is doing. Actions (doings) are "justified" or "unjustified", whereas states of affairs are "just" or "unjust". Though, from here, we can say that a just state is one that can be justified, and an unjust state is one that cannot be justified. If we apply this to the state of having a belief, we can talk about beliefs that can and cannot be justified, just as we talk about other states that can and cannot be justified.

This alleged link between belief justification and moral justification is making more sense all the time.

But . . . the animals and small children! We can't forget the animals and small children!

The problem presented to us is that we cannot say that the beliefs of animals and small children are justified because they are not moral agents and cannot be morally responsible.

However, we can say that they are not morally unjustified. People generally do not have much of a reason to get animals and small children out of the habit of trusting their senses when they use their senses to form beliefs. Even if they were full-blown moral agents, we would not need to punish or condemn them for forming beliefs out of their sensory experiences. So, animals and small children are, in fact, forming these beliefs in a way that full-blown morally responsible agents would form them. In this sense, we are justified in saying that they are justified.

So, now, we do have a way of saying that the beliefs of animals and small children can be justified without breaking the link between epistemic justification and moral justification. A belief is justified if it is formed in ways that would not be condemned if a moral agent would have used that method.


As a brief aside, not relevant to the argument above, I deny the proposition that "animals and small children are not moral agents". We praise and condemn animals and small children. We do so because our praise and condemnation act on their reward centers to form new interests and modify existing interests for the better. We reward animals and children for good behavior and punish them for bad behavior. The range of activities we reward and punish them for, and the form and magnitude of the rewards and punishments, may differ, but they exist. Because they exist, the claim that we do not treat animals and small children as moral agents is false.

Saturday, August 25, 2018

Epistemic Responsibility 008: Evidentialism, Reliabilism, Internalism, Externalism

I am going to bore you with some words that aim to describe some of the basic distinctions in epistemology.

My writing coach told me to start off with a sentence that grabs the reader's attention, hooks them, makes them ache to see what comes next rather than turn away to play a computer game or seek a root canal at the local dentist's office. This is the best I can do.

Anyways, if you want to speak epistemologisteese, you will need to know about two sets of distinctions.

Internalism/Externalism

According to the Stanford Encyclopedia of Philosophy entry on epistemology:

In contemporary epistemology, there has been an extensive debate on whether justification is internal or external. Internalists claim that it is internal; externalists deny it.

That's helpful, don't you think?

Don't worry about this. We're going to get here through the other distinction.

Evidentialism/Reliabilism

Here, the Stanford Encyclopedia of Philosophy provides a much better explanation - one that I do not think that I can top.

Their explanation concerns two people, Tim and Tim*.

Tim is a real person in the real world who is sitting in a coffee house with his favorite coffee, sitting back and reading his favorite philosophy blog on his tablet. Well, he's not a real real person. He's a fictitious real person. But, in the fictitious world, he's a real person.

Tim* is a brain in a vat. Tim* lives in the multiverse as Tim's counterpart in a parallel universe. Tim* has been vatted. That is, a secret cabal kidnapped Tim*, removed his brain, put it in brain-sustaining fluid, and hooked it up to a computer that would feed it neural impulses that would be processed as sights, sounds, and even perceived movements.

While Tim's brain is going through whatever states it would go through as Tim reads his favorite philosophy blog while sipping coffee in a coffee shop, Tim*'s brain is going through the same states, making Tim* believe that he is reading his favorite philosophy blog while sipping the same type of coffee in the same type of coffee shop.

Now, let's look at these distinctions.

Is it true that if Tim's belief that he is reading his favorite philosophy blog is justified, then Tim*'s belief that he is reading his favorite philosophy blog is justified? Both people have exactly the same evidence available to them. There is no test that Tim can perform to determine that he is not a brain in a vat, nor is there an experiment that Tim* can perform to determine that he is. They both see, hear, feel, and remember the same things. Their brains are identical. So, how can one have a justified belief and the other not?

Spoiler alert: I think they can't.

That makes me an internalist/evidentialist. All of the evidence that one needs to determine whether a belief is justified is inside the brain. That's internalism. Get it? Clever, huh? Everything that is going on in Tim's brain is matched in Tim*'s brain. They both have the same evidence available. If one agent's beliefs are justified, so are the other's.

So, thinking that Tim's beliefs are justified if and only if Tim*'s beliefs are justified gives you an example of internalism (everything required for justification is inside the brain) and evidentialism (if the evidence is the same, the justification is the same).

Remember, a justified belief can be false.

The other half of these distinctions is reliabilism/externalism.

Let's take reliabilism first. Reliabilism says that a belief is justified if it is brought about by a reliable source - a source that is likely to produce true beliefs. When it comes to reliability, Tim and Tim* face two different situations. Tim's experiences are reliable. He sees himself in a coffee shop because he is in a coffee shop. He tastes the sweetness of the coffee because he is drinking heavily sweetened coffee. Tim*, on the other hand, has no reliable source for his beliefs. Almost all of his sensations give him false beliefs. Because of this, Tim can have justified beliefs, and Tim* cannot. The reason that Tim* is not getting justified beliefs is that some aspect of justification lies beyond anything for which he has evidence. He can't possibly know whether he is a brain in a vat, but that determines whether his beliefs are justified or unjustified anyway.

So, the idea that Tim's beliefs can be justified while Tim*'s beliefs are unjustified gives you an example of reliabilism (the justification of a belief depends on its being grounded on something that is reliably true) and externalism (whether something is reliably true can be external to anything the agent can become aware of).

So, think of coffee-shop Tim and vatted Tim*. If Tim's beliefs are justified if and only if Tim*'s beliefs are justified, chances are you are an internalist and an evidentialist. Everything required for justification is going on inside the brain, and identical evidence means identical justification. If Tim's beliefs can be justified when Tim*'s beliefs are not, this provides an example of externalism and reliabilism. The justification for a belief can depend on something external to what the agent can become aware of, where the reliability of that which provides evidence for a belief counts as one of those possible external factors.
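The Tim/Tim* contrast can be put in the form of a toy model. This is my own illustration, not anything from the Stanford Encyclopedia of Philosophy; the predicates and the data are made up purely to show where the two verdicts diverge.

```python
# Toy model of the evidentialist/internalist vs. reliabilist/externalist
# verdicts. Two agents have identical internal evidence but differ in an
# external fact (whether their belief-forming process is reliable).

def evidentialist_justified(agent):
    """Evidentialism/internalism: justification depends only on the
    evidence available inside the agent's head."""
    return agent["evidence"] == "seems to be reading a blog in a coffee shop"

def reliabilist_justified(agent):
    """Reliabilism/externalism: justification also depends on whether
    the belief-forming process reliably tracks the truth."""
    return evidentialist_justified(agent) and agent["process_reliable"]

tim = {"evidence": "seems to be reading a blog in a coffee shop",
       "process_reliable": True}        # really is in a coffee shop
tim_star = {"evidence": "seems to be reading a blog in a coffee shop",
            "process_reliable": False}  # brain in a vat

# Evidentialism: identical evidence, identical verdict.
assert evidentialist_justified(tim) == evidentialist_justified(tim_star)

# Reliabilism: the external fact splits the verdicts.
assert reliabilist_justified(tim) and not reliabilist_justified(tim_star)
```

The point of the model is just that the two theories disagree only over whether the external `process_reliable` fact is allowed to enter into the justification verdict at all.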

So, if you are trying to understand things in epistemology, this should help.

Epistemic Responsibility 007: Deontic Justification of Belief

I suppose it is tempting, if the only tool you have is a hammer, to treat everything as if it were a nail.

Abraham Maslow, The Psychology of Science, 1966, page 15

So, here I sit with this theory of morality called "desirism", and suddenly I want to use it, not only for morality, but for epistemology.

I can hear the voices in my head saying, "Keep it up, Alonzo. Soon you'll be insisting that it cures male pattern baldness and guarantees a good harvest."

Look, it is not as far-fetched as all that.

First, we need an ethics of belief. We have many and strong reasons to promote epistemic habits that will bring about true beliefs. That will allow us to avoid the harms that come from people making stupid but avoidable mistakes - such as doubting the existence of human-caused climate change based on the nonsense put out by those who profit from activities that cause climate change. Without an ethics of belief, we are vulnerable to charlatans enriching themselves with fake cures to non-existent diseases, to calling a 15-month investigation that brought over 30 indictments, 5 guilty pleas, and 1 conviction a "witch hunt," and to meeting a 15-month investigation into the use of a private email server that brought no indictments with chants of "lock her up."

And it makes perfectly good sense to say that a moral system like desirism might have implications for an ethics of belief - an ethics of activities that can either bring about or avoid these absurd states of the world.

Second, I didn't invent the English language. I didn't get a late night phone call from somebody saying, "Hey, I got this wonderful idea. We will use the same word, "justify", to refer both to good and bad actions and to good and bad beliefs." Nor do I think that it is absurd to think that, maybe, this happened for a reason. Perhaps it isn't actually a coincidence that people migrated toward using the same term in both cases. Perhaps, instead, people not only use the same term in both cases but use it with nearly the same meaning and, more importantly, the same function. That function would be to use the embedded sense of praise and condemnation to mold character traits in order to better bring about good actions in the one case, and good beliefs in the other.

Indeed, it might even not be a coincidence that, as our community migrates further and further away from an ethics of belief - further and further away from a system whereby people are actually praised for having justified beliefs and condemned for unjustified beliefs - we end up with something like Donald Trump and his followers, and multi-billion dollar corporations manipulating the public into deals like, "You give us $100 billion, and we will poison your water supply and your air, destroy your property, and impoverish your children and grandchildren. What do you say? Do we have a deal? No, don't go talk to your wife about it. What are you, a child needing permission? Be a man. Sign right here. Your grandchildren will love you for it."

As it turns out, the Stanford Encyclopedia of Philosophy entry on epistemology talks about deontological justification and whether it is suitable as a form of epistemic justification. It mentions two objections.

Objection 1:

First, it has been argued that DJ presupposes that we can have a sufficiently high degree of control over our beliefs. But beliefs are akin not to actions but rather things such as digestive processes, sneezes, or involuntary blinkings of the eye.

This is not right.

Sneezes and blinkings are still doings (even if they are not actions). They involve something happening. In contrast, having a belief that p is not a doing, it is a state of affairs. It is a fact about how the world is structured. Having a belief is like having a scar on one's thumb or having a blood pressure of 102/58.

The only type of control we need over our beliefs, in this sense, is that they be the results of actions and habits that, in turn, we can influence through praise and condemnation. Having a belief that p is, indeed, a state that we can influence by influencing the dispositions and habits that lead to belief formation through praise and condemnation.

To call a belief "unjustified" is to say that the agent did not use those techniques that an epistemically responsible person would have used. A belief is "justified" if it is a belief that an epistemically responsible person would have acquired in those circumstances.

There is no problem of "control over our beliefs" that rules out a deontological version of epistemological justification. (That is to say, we have exactly the type of control we need to hold people morally responsible for their beliefs.)

Objection 2:

According to the second objection to DJ, deontological justification does not tend to ‘epistemize’ true beliefs: it does not tend to make them non-accidentally true. This claim is typically supported by describing cases involving either a benighted, culturally isolated society or subjects who are cognitively deficient.

Here, I need to see an actual case, and the Stanford Encyclopedia of Philosophy does not even describe one.

I do not see how the "isolated culture" version of this objection would go at all. After all, deontological justification is not culturally determined. In the same way that slavery can be wrong even though an isolated culture does not recognize its wrongness, a habit of belief formation can be bad even though a culture does not realize its badness.

For the cognitively defective individual, I imagine a case such as one where a person has a brain defect that gives him a belief that p, where p happens to be true. He cannot be condemned for having the belief so it is justified. It is true. And it is a belief. Therefore, it counts as knowledge, even though all he has is a brain defect. This can't really be knowledge.

Well, the belief can't actually be a state that is a matter of epistemic or deontological assessment at all. This would be like making a deontological assessment of water being made up of H2O. Nobody made it the case - it just happened. Because it just happened, it is not subject to deontological assessment as something that was justifiably - or unjustifiably - brought about.

We actually have something similar on the desire side. The aversion to pain is amoral. It provides a person with a reason for action, but this reason is, itself, neither justified nor unjustified. It merely exists. To the degree that we can influence it, it makes sense to ask whether we ought to. However, the state of experiencing a pain that one is averse to is subject to moral judgment only insofar as we can develop different habits and dispositions.

So, I do not see how the two objections to a deontological theory of epistemic justification (a duty-based account of belief) that appear in the Stanford Encyclopedia of Philosophy have merit.

Friday, August 24, 2018

Epistemic Responsibility 006: Accidental Knowledge

The following case may seem to cast doubt on the thesis that whether a belief is justified or unjustified is determined by whether the agent is to be credited or blamed for having that belief.

Suppose that the clock on campus (which keeps accurate time and is well maintained) stopped working at 11:56pm last night, and has yet to be repaired. On my way to my noon class, exactly twelve hours later, I glance at the clock and form the belief that the time is 11:56. My belief is true, of course, since the time is indeed 11:56. And my belief is justified, as I have no reason to doubt that the clock is working, and I cannot be blamed for basing beliefs about the time on what the clock says. Nonetheless, it seems evident that I do not know that the time is 11:56. After all, if I had walked past the clock a bit earlier or a bit later, I would have ended up with a false belief rather than a true one.

This example is taken from the Internet Encyclopedia of Philosophy entry on Epistemology

In standard epistemology, this type of case is used to question the idea that justified, true belief counts as knowledge. The agent, looking at the stopped clock, is said to have a justified, true belief that it is 11:56. However, he does not know that it is 11:56. Thus, justified, true belief cannot count as knowledge.

But, is the belief "justified"?

The individual's belief may be justified for practical, everyday purposes (to make sure that he makes a noon appointment, for example). However, if we take the moral sense of justification seriously, his belief is not justified. That is to say, one of the criteria of knowledge has not been met, and this is why the agent does not know that the time is 11:56.

To see this, let us imagine that our agent (call him Marty) needed to drive a DeLorean equipped with a "flux capacitor" past the clock tower when lightning struck at precisely 12:04 in order to travel in time back to the future. In setting up for this crucial engagement, Marty should recognize that he had an obligation to have a reliable timepiece available that would tell him exactly when to press the gas to get him to the right location next to the courthouse at 12:04 (travelling at 88 mph). Looking at the clock once at 11:56 and judging it reliable would not have been good enough. That is to say, the belief that the clock was a reliable indicator of the time (and, thus, the belief that it was 11:56) had not been justified in the "praise and condemnation" sense of justification. Marty does not "know" that it is 11:56 precisely because the justification condition has not been met.

If we assume that Marty could not check whether the clock was accurate, one could say (in virtue of the principle that "ought" implies "can") that, in looking at the clock and judging that the time was 11:56, he had met his obligation and that his belief was justified in the moral culpability sense.

However, even if Marty is released from that obligation on the basis of "ought implies can," the fact that he should have checked the clock if a means had been available is still sufficient to show that he does not know that the clock is reliable.

Moreover, looking at the clock once at 11:56 and judging that it will serve as a reliable instrument for telling him when it is 12:00 (plus or minus 10 seconds) does not justify his belief that the clock is a reliable indicator of when 12:00 arrives, even if it would not absolve him of culpability. He would still be held morally culpable for a failure to press the gas on time. Furthermore, the reason for his moral culpability is his unjustified belief that the clock was reliable.

The implication here is that the agent, in glancing up at the clock and forming the belief that it was 11:56, did not have a justified belief that it was 11:56. However, he did not need a justified belief. We are permitted to be a bit reckless when there is relatively little at stake. The standards go up as the risks go up. The low risk in this case permitted standards that fell below those of justification. The standards applicable to preventing a nuclear warhead from going off are much higher. "To know" is a pretty high standard - a standard that requires a level of justification that would be required in a state of high risk.

But am I not now saying that I do not know as much as I think I do? It seems very few of my beliefs come up to that level of justification.

Though, actually, if "justified" and "unjustified" track praise and condemnation, it is useful to note that the standard is variable. The conditions for justifiably believing that flipping a switch is safe differ between a switch in my own bedroom and a switch on a console in a nuclear power facility that I know nothing about. There is a sense in which, in my day-to-day life, I "know well enough" that it is 11:56 by looking at a clock on campus, but do not "know well enough" for higher stakes.

Here, now, we are not talking about anything of substance. Now, we are recognizing that our sense of the term "to know" is really not all that fixed. It is fluid - precise enough for everyday use but simply not built to standards of precision. There is a "to know" associated with mundane levels of risk and a "to know" associated with higher levels of risk. Plus there is a "to know" that is fluid, that evaluates whether the agent's epistemic behavior is appropriate for the current level of risk, whatever that may be, rising or falling as the situation demands.

So, does the agent in the campus clock case "know" that it is 11:56? It depends on which sense of "know" we are appealing to at the moment.
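The idea above - that "knowing well enough" is a standard that rises and falls with the stakes - can be put as a toy model. This is my own illustration with made-up effort levels and scenario names, not anything from the Internet Encyclopedia of Philosophy.

```python
# Toy model: the justification standard for "knowing well enough" as a
# threshold on epistemic diligence that varies with what is at risk.
# The numbers and scenario labels are invented for illustration.

def required_effort(stakes):
    """Return the level of epistemic diligence a responsible agent
    would exercise, given what is at risk."""
    thresholds = {
        "making a noon class": 1,                # one glance at a clock
        "timing a 12:04 lightning strike": 3,    # independently verify the clock
    }
    return thresholds[stakes]

def knows_well_enough(effort_spent, stakes):
    """An agent 'knows well enough' when the diligence actually spent
    meets or exceeds what the stakes demand."""
    return effort_spent >= required_effort(stakes)

# One glance at the campus clock suffices for everyday purposes...
assert knows_well_enough(1, "making a noon class")
# ...but the very same glance falls short when the risk is high.
assert not knows_well_enough(1, "timing a 12:04 lightning strike")
```

The same evidence-gathering behavior passes one threshold and fails the other, which is all the "fluid" sense of "to know" amounts to on this account.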

Epistemic Responsibility 005: Morality, Justification, and Belief

Hullo reader.

What can I do for you today?

Over the summer my attitude towards this blog has been drifting a bit to one which addressed the (potential) reader rather than some topic or concern.

One of the reasons for this is my experiences back in college.

Many academic philosophers (and many topics of study in academic philosophy) appear to be locked in an isolated chamber where they talk to themselves in a private language that is only loosely related to reality.

You, on the other hand, are living in reality.

I think that reality is important.

This sense has grown stronger as I prepare for the semester. The epistemology course in particular appears to be one where philosophers have lost touch and are off in their private world.

I really don’t want to follow them there - I am not interested.

Here’s an example.

In real people language, "justified" and "unjustified" contain an element of praise and condemnation. People use the term "justified" as a term of praise - in order to encourage the behaviors and interests that brought about that which is justified. Similarly, the term "unjustified" refers to that which was brought about by behaviors and interests people have reason to discourage (or aversions people have reason to encourage).

I do not think these elements of praise and condemnation disappear when we get into the subject of belief. I think that beliefs can be brought about by good behaviors and dispositions and warrant the praise of being called a "justified" belief, and they can be brought about as a result of behaviors and interests that justify using the term "unjustified" as a term of condemnation.

It would be a significant loss to give this up. There certainly are epistemic behaviors and interests people generally have many and strong reasons to promote universally - and those that they have reason to condemn or inhibit. So, we have a use of a term that plays this role. If we do not use the terms "justified" and "unjustified" to play this role, what other term should we use in its place? And how do we go about promoting its common use so that we can harvest these benefits?

The easiest course of action is to continue to use the terms "justified" and "unjustified" in their traditional role as terms of praise and condemnation.

This is what I want to encourage people to do here.

There is still going to be a strong relationship to being justified and being true. After all, this is one of the interests that a properly motivated person would have - an interest in true beliefs. This would manifest itself as a search for signs of unreliability or for potential objections - for things that suggest that the belief is false so that the responsible agent can correct it. Indeed, the fact that a belief is false is a prima facie sign that the person who believes it is irresponsible, just as the fact that a person was killed is a prima facie sign that the person who brought about the death was reckless. Perhaps this assumption can be ruled out by further evidence, but it is legitimate to say, "Things look bad for you. What do you have to say for yourself?"

At the same time, there is no necessary connection. A person can be on their best epistemic behavior and still embrace a false belief simply because the best available evidence that a responsible person would have sought happens to support that false belief. Still, we use these terms to promote that which is good, and avoid that which is bad. In the realm of belief, the term "justified" refers mostly to beliefs that come from good belief-forming habits and dispositions. Beliefs that come from bad epistemic habits and dispositions are unjustified.

Again, on this model, the situation with respect to justified and unjustified belief would be the same as the situation with respect to justified and unjustified action. Indeed, a person with a poorly founded belief can have others legitimately call it "unjustified" in both the epistemic and the moral sense at once.

This thought needs more development, but it is what seems right at the moment.

Still, this is my message for today. "Justified belief" and "unjustified belief" are statements of praise and condemnation that aim at developing and promoting good epistemic habits.

Anybody see a problem with this?

Thursday, August 23, 2018

Epistemic Responsibility 004: What Is Audi's Theory of Knowledge?

Audi, Robert (2006), "Testimony, Credulity, and Veracity", in Lackey, Jennifer, and Ernest Sosa (eds.), The Epistemology of Testimony. Oxford: Oxford University Press.

What I learned today: There is a theory of epistemology called "reliabilism" which holds that the justification criterion of knowledge depends on the belief coming from a source that is reliable. I had known of this in the past. However, because my study of epistemology has been mostly to skim the surface on my way to studying ethics, I have not looked at it in detail. In this posting I discuss my struggles trying to understand Audi's article, but I eventually get to the realization that Audi is one of these reliabilists. One of the problems I am going to have with reliabilist epistemology is trying to square it with agent culpability. I discuss those problems towards the end.


I actually had to go to a source outside of Robert Audi's article to try to figure out what he was saying. Audi is a crappy writer.

The introduction to this anthology says:

Audi then compares testimonial knowledge with testimonial justification, arguing that in order for a hearer to acquire testimonial knowledge that p, the speaker from whom it was acquired must also know that p.

That's better. I can understand that.

And, what I have to say to that is . . . Hogwash. Bull pucky. Doe snot.

No wonder it was so hard to understand. A horrible writer with a horrible idea.

Folks might not be accustomed to me being so hard on somebody. Though, being forced to waste valuable hours trying to decipher an article written by a person who thinks that each chapter should be its own paragraph and each paragraph its own sentence does not sit well.

Somebody should run him over with an Audi, if only for the irony of it all.

Fine! Okay. I'm calm.

Besides, some of the problems I had with the article were due to my own ignorance of epistemology. I can't blame Audi for that. I guess this is why I am trying to study this stuff.

I uncovered this deficiency of mine by uncovering an argument that led to one of two possible conclusions. Either Audi was saying something that could easily be proved to be a contradiction, or I did not know what I was talking about. It turned out to be the latter.

The easily proved contradiction goes as follows:

Knowledge is justified, true, non-Gettierized belief. Don't worry about the Gettier issue - it concerns a rare type of case where the reason that a belief is justified has nothing to do with (or is only accidentally related to) the reason that it is true. We do not need to worry about this special case, so set it aside. I only mention it so as to show off how smart I am.

Knowledge is justified true belief.

Audi states that a person can gain justified belief from a testifier whose belief is not justified.

Of course, if the testifier's belief is not justified, then the testifier's belief does not count as knowledge (since knowledge is justified true belief).

So, the testifier does not know that p.

However, the testifier can give the listener a justified belief that p that happens to be true.

But a justified belief that happens to be true is knowledge.

So, a person who does not know something can, through testimony, cause somebody else to have a justified true belief (knowledge).

Yet, Audi said that an agent cannot transfer knowledge to another person unless the agent knows that information himself.

Thus, we have an easily proved contradiction.
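The argument above can be sketched as a short check of the JTB analysis against the two claims I attributed to Audi. This is my own toy formalization, not anything from Audi's text; it simply makes the steps of the derivation explicit.

```python
# A toy formalization of the apparent contradiction, treating knowledge
# as justified true belief (JTB), with the Gettier complication set
# aside as in the text above.

def knows(justified, true, belief=True):
    """The JTB analysis: knowledge = justified, true belief."""
    return justified and true and belief

# Premise: the testifier's belief is true but unjustified...
testifier_knows = knows(justified=False, true=True)
assert not testifier_knows   # ...so, by JTB, the testifier lacks knowledge.

# Premise (attributed to Audi): the hearer can nonetheless acquire a
# justified belief from this testifier, and the belief is true. So, by
# JTB, the hearer knows.
hearer_knows = knows(justified=True, true=True)
assert hearer_knows

# But Audi also holds that testimonial knowledge requires the speaker
# to know that p: hearer_knows should imply testifier_knows. It doesn't.
assert hearer_knows and not testifier_knows   # the contradiction
```

The sketch shows that the contradiction only goes through if "justified" means the same thing in both premises, which is exactly where the reliabilist reading discussed below relieves the pressure.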

This, then, leads to two possible conclusions. Audi is an idiot without even a basic understanding of epistemology, or I am missing something.

I still have to rule out option 2 before I can declare that option 1 is the case. To do this, I looked up Epistemology in the Stanford Encyclopedia of Philosophy.

What I discover is a theory of knowledge called "reliabilism".

[Non-traditional theories of knowledge], on the other hand, conceives of the role of justification differently. Its job is to ensure that S's belief has a high objective probability of truth and therefore, if true, is not true merely because of luck. One prominent idea is that this is accomplished if, and only if, a belief originates in reliable cognitive processes or faculties. This view is known as reliabilism.

But, I am not certain yet whether Audi is a reliabilist. More work is required.

Moral Education

In a recent discussion, I was directed to look at Matt Dillahunty’s claims about morality.

I would like to explain quickly what Dillahunty gets right and wrong on morality.

Dillahunty tells a story of two children who misbehave. We may assume that they are identical twins. One is then threatened with punishment if he should misbehave in the future. The other is sat down while the parents explain the wrongness of his action and ask, "How would you like it if somebody did that to you?" in order to draw on the child's empathy so as to promote a sense of the wrongness of those types of actions.

He concludes that both children will be equally well behaved. However, the second child will be a morally better person because he acts for better reasons.

What is true: The second child will be a morally better person because he acts for better reasons.

What is false:

(1) The two children are both likely to behave the same.

Take these identical twins as adults and hire them at a bank. Put them in charge of transferring funds. Hook one up to a device that inflicts an extremely painful shock each time he transfers money into a wrong account. Explain to the other the hardships that may result from doing so. I would trust Person 1 to pay a great deal more attention to detail.

This is a minor point.

(2) The second child acquired his moral attitudes as a consequence of reason.

Dillahunty is ignoring the fact that this lecture is filled with praise for those who refrain from the type of behavior in question, and condemnation of those who do not. This praise and condemnation are not processed through the faculties of reason. They are processed through the reward system.

The factual statements contained in the lecture may accurately describe why people generally have reason to praise or condemn those who perform the act-type in question under those circumstances. However, the factual premises neither entail nor cause the change in sentiment. It is the praise and condemnation wrapped around the factual statements that have this effect.

The reward system takes rewards and punishments and uses them to encode new rules of behavior. Praise is a type of reward; condemnation is a type of punishment. In part, the second child behaves for the same reason the first child behaves (to avoid future punishment, in the form of condemnation). In part, the lecture, as condemnation, encodes a new behavioral rule - an aversion to doing that which resulted in condemnation.

Better Reasons

Of the two types of reasons, the internalized reason (the acquired desire or aversion) is better than the incentive/deterrence value of rewards/punishments.

Primarily, the incentive/deterrence reason vanishes when the agent's behavior is unobserved. He can neither be rewarded nor punished for that which is kept secret. However, the internalized norm - the desire or aversion acquired through activation of the reward system - remains in play even when the agent is unobserved. Because of this, we have reasons to prefer the second type of motive to the first. Which means we have reasons to praise some conduct and condemn other conduct. The internalized reason is, indeed, the better reason.

Dillahunty simply fails to understand how these better reasons are taught and learned.

Wednesday, August 22, 2018

Epistemic Responsibility 003: Reckless Belief and Access to Information

Audi, Robert (2006), "Testimony, Credulity, and Veracity", in Lackey, Jennifer, and Ernest Sosa (eds.), The Epistemology of Testimony. Oxford: Oxford University Press.

This examination into the principles of the legitimacy of believing something based on testimony is going to involve a lot of intuition-mining. We'll encounter a lot of stories and we will be asked to conclude whether the consequence is a justified belief based on testimony.

I dislike this method of philosophizing. We invent our language to use in regular day-to-day circumstances of normal life. Our concepts are not meant for the bizarre situation philosophers can imagine. I tend to support an instrumental view of language. Language is a tool. Its specific structure was selected substantially for its usefulness. Terms that are useful in the situations we regularly find ourselves in are not always useful in situations we never have to deal with.

Be that as it may, our first story concerns a teacher (Luke) who believes that evolution is false, yet teaches his students the facts of evolution - the fossil record, the mechanisms of natural selection. He tells them that humans began in Africa and migrated out.

Have the students learned - do they know - that humans began in Africa and migrated out?

A part of my own intuition is that, if two teachers (Luke, Biggs) give identical lectures about evolution, and no student has access to the fact that one person believes the information and the other does not, then both sets of students acquire the same amount of knowledge. One cannot hold a person responsible for information they have no access to.

As long as we are telling stories, let us look at the case of Judith. Judith is on trial for negligence. She pressed a button that caused a trolley to run away, where it would have killed five people if not for the fact that a utilitarian pushed a fat man off of a bridge and stopped the train, only to precipitate a national dispute that resulted in massive quantities of angst, stress, and disutility.

Judith reports that she pressed the button because her coworker, Phillipa, told her that pressing the button would dispense an ice cold soft drink, and Judith was hot and thirsty. In one case, Phillipa believed what she told Judith; in another case, she did not. At her trial, the question of whether Phillipa believed her statement would be held to be entirely irrelevant. It is not at all relevant to the credence that Judith gives to the belief that pressing the button would yield an ice cold soda.

So, here is one of my principles: There are no moral implications for the credence that a person assigns to a belief based on information she has no access to. If the difference between stories is based on some hidden fact, then the intuitions about what the agent knows or does not know should yield the same results in each case.

Audi does not actually render a verdict in this case. He goes on to other cases such as:

Luke teaches the theory of evolution, but would have taught a false theory if the school had told him to. In other words, he taught what the school told him to teach.

In discussing this example, Audi brings up the idea that Luke is not a reliable transmitter of information.

Again, the relevant point as far as I am concerned is whether, and to what degree, the students have access to this information. If the students get exactly the same information in exactly the same way in two different circumstances, then the two sets of students are justified in assigning the same credence to their lessons.

We are going to be spending a lot of time in these posts discussing testifier prescriptions and subject (listener/reader) prescriptions. The point I will be making is that there is no burden placed on the subject in virtue of information she has no access to. This type of information is irrelevant to subject assessment.

I am having a hard time figuring out what Audi wants to say about these cases. He identifies the questions, but does not seem to be giving answers. I am going to have to put some more care into reading these.

Epistemic Responsibility 002: Reductionism and Non-Reductionism

I should start off with a few words about a major distinction in epistemology - a distinction between reductionism and non-reductionism.

The original champion of reductionism is David Hume. Generally, I speak favorably of Hume, though in this matter his account might have a few problems.

Reductionism holds that justifiably believing what other people say can be reduced to other forms of justification such as induction or perception (or some combination of basic forms of learning). In deciding whether you can trust what I say, you must start by having some idea of whether I am a trustworthy sayer (sayerist?) - whether or not I take care to make sure that what I report is true. You need to determine if I have any idea of what I am talking about because, even if I am honest, if I honestly assert things that are not true, then you would be wise not to believe me.

Non-reductionism holds that justifiably believing what other people say is a basic form of knowledge acquisition that cannot be reduced to other types. Thomas Reid is the historical champion of this view. The favorite model for this view is the learning of children. When you were five years old, were you sitting there assessing whether or not your parents were reliable reporters of the truth or had a habit of generally knowing what they were talking about? Of course not. You assumed this. You were not accepting their testimony on the basis of complex inferences and deductions. Their word was, for you, a basic, fundamental source of information against which other sources could be tested.

The first article in the anthology I am covering is:

Audi, Robert (2006), "Testimony, Credulity, and Veracity", in Lackey, Jennifer, and Ernest Sosa (eds.), The Epistemology of Testimony. Oxford: Oxford University Press.

Audi presents a view that follows closely the views of Thomas Reid - the archetypal non-reductionist. And, after providing all sorts of arguments showing that Reid has the best theory, he pulls the rug out from under him and declares his allegiance to the reductionists.

Just to be clear, in talking about testimony, I say something, and you believe what I said because I said it. For example, I say something like, "Audi presents a view that follows closely the views of Thomas Reid." Or, better yet, consider my claim that this blog posting is based on Audi's article that appears in Lackey's and Sosa's anthology. I suspect that you are going to take my word for this - and assign this belief a high degree of credence, even though your only evidence for this proposition is the fact that I said it.

That's not to say that you trust me completely, as if this were some type of religious scripture. That type of trust would have certain advantages (for me), but I don't think it will (or should) happen. I am simply saying that you would mark the claim "true" if asked on a test, and expect to get credit for putting down the correct answer.

Audi also states:

It would be a mistake to think that some conscious activity of interpretation is generally required for testimony-based knowledge. Typically, we simply understand what is said and believe it.

This is a statement that I have some questions about.

When you take my statement and decide to agree or disagree with it, you have to understand what it says, right? If I had written the same sentence in a foreign language (e.g., Egyptian hieroglyphs), you would neither agree nor disagree. In order to interpret the squiggles that I put on your screen (or, if you printed this, your paper; or, if you are listening to this, the sounds emitted from your text to speech program), you have to have some sort of system built up for converting those squiggles and sounds into something having propositional content.

Understanding testimony is a whole lot different from opening your eyes, seeing a tree, and believing that there is a tree. Interpreting a sentence takes a level of mental activity that most animals - animals who can see trees and jump from branch to branch - cannot even begin to deal with.

As I see it, the set of skills involved in interpreting provide a great deal of room for moral evaluation that mere perception does not have. People can do this job of interpreting well or poorly - and some do it very poorly indeed. In fact, there seems to be a cultural tradition of assigning the worst possible interpretation of what others say so that one can deliver blistering and denigrating comments to them in the comments section of social media. This "moral quality of interpretation" is going to come up again . . . and again . . . and again . . . in this series.

Yet, we can ask whether this gives us a reason to deny Audi's claim. As you read this posting, I am assuming that you will have little trouble interpreting the words. Some trouble . . . particularly where I cut and paste and edit and leave sentence fragments behind. But you will process most sentences without conscious decision making. Indeed, the quality of my writing can be measured by my effectiveness at producing statements you don't have to ponder about overly long.

If you go back to the original definitions of "reductionism" versus "non-reductionism", the question is whether this task of interpretation means that testimony can be reduced to being inferential, or remains non-inferential.

Audi says, "No."

Actually, he says:

Even where one must think about what is said and laboriously interpret it, as with a complex message or an utterance by certain non-fluent non-native speakers, it does not follow that one's belief finally arising from accepting the message one discerns is inferential.

In the words of the dread pirate Barbarossa, "That means no."

(Yes, I know that I am mixing up my pirate movies. It's my blog.)

However, I am not seeing much of an argument in defense of this position. Perhaps later.

One thing we can say is that "testimonial knowledge is not inferential" and "testimonial knowledge is skill-based leaving room for moral credit and blame" are not, necessarily, incompatible. We will have to look at this later.

For my purposes, this interpretation provides a place for moral assessment. You can't blame a person for opening his eyes, seeing a tree, and believing that he sees a tree. You can blame a person for poorly interpreting a statement. This means that individuals have a moral obligation to (a good person would develop his skills so as to be able to) interpret statements well.

Epistemic Responsibility 001: Introduction

I am beginning a new series in the discussion of testimony - one of the classes that I am taking this semester.

My interest in this case has to do with . . . I need it to complete my course requirements for my degree.

Actually, my interest goes a bit beyond this. I have written a bit about epistemic negligence and the view that people can be morally culpable for what they believe or fail to believe. In other words, if you do not believe the right things for the right reasons, you are a contemptible creature not fit for human civilization. No, you do not have a moral permission to believe what you want. Do I have a moral permission to believe that you ought to be chopped into little pieces with a machete? Do I have a right to believe that I can drive home safely after my four quick beers?

You might try to answer, "Sure, as long as you don't act on those beliefs?"

Setting aside the fact that having a belief that one does not act on is a violation of the laws of physics, what if I believed that I may go ahead and act on these beliefs? Do I have a moral permission to do that? And if you answer that I have a moral permission but you have a moral permission to interfere violently with my acting on this belief, you know nothing about the meaning of the phrase "moral permission."

Reckless believing is as much a moral crime as reckless driving, or recklessly waving a gun around and shooting it randomly. You're going to kill somebody if you keep that up, and the people you put at risk of being killed have every right to violently interfere with your reckless behavior - in self defense, at least.

Given this attitude towards reckless belief, this course, I hope, will give me reason to look in more detail at what I might want to call reckless belief.

This is a course on epistemology (theory of knowledge), not moral philosophy (theory of right and wrong). So, there may be a worry that my interest in this general subject will not fit the specific subject matter. However, I hold that there is a strong connection between the different types of justification. After all, justification has to do with "reasons for . . . " doing something. Reasons for performing an act, reasons for praise and condemnation, and reasons for assigning credence to a belief. So, you can look to this series on testimony to find my thoughts on reckless belief.

I am thinking that "reasons for" holding a person in contempt for reckless believing is going to fit in quite well with this subject matter.

I have not actually seen a syllabus yet. However, I know what readings will be assigned, and one of them is an anthology of essays that will cover various theories in epistemology. This seems to be a good place to start.

Those readings are found in:

Lackey, Jennifer, and Ernest Sosa (eds.), 2006, The Epistemology of Testimony. Oxford: Oxford University Press.

Tuesday, August 21, 2018

Fall 2018

The new semester will be starting soon.

I am taking three courses . . . and, this time, I want to write down more comments on my courses so that I can have the notes available. I am assuming each course will require a paper, so I am going to need to consider paper topics.

Philosophy of the Middle Ages

This course concerns philosophy from about 700 AD to about 1200 AD (I assume). Philosophy during this period was interesting. Europe was in its dark ages following the fall of the Roman empire. However, a number of Arab scholars got hold of some ancient Greek texts and began translating them into Arabic and Persian. The Muslim religion at the time was extremely tolerant. Christians, Jews, and Muslims all formed a fairly stable society. This allowed for the creation of schools that took ancient Greek texts and translated them into Arabic. This was a springboard for several hundred years of dedicated philosophical thinking and writing.

With the Crusades of the 1100s, many of these texts made it back to Europe, with a rebirth of European (Latin) philosophy. I think it is quite important to have an understanding of ways of thinking other than one's own. I am looking forward to an opportunity to better understand these people.

Epistemology of Testimony

Let us assume, for the sake of argument, that I give you a description of philosophy during the Middle Ages. I describe it as a time when the Europeans were in a dark age as philosophy thrived in Arabic communities. Then, after a few hundred years, philosophy returned to Europe. To what degree are you justified in believing me? To what degree does reading and accepting the information I put on the page count as knowledge on your part? What are the standards for answering these questions? I think that I have a paper topic figured out for this class. I have seen an analogy used of a bucket brigade, where a bucket of water gets handed down the line from one person to another. The metaphor is that of a belief that gets handed from one person to another. The belief, with all of its justification gets handed from person to person.

However, there seems to be something missing from the discussion - at least from what I have read so far. In order to understand testimony, I have to interpret it. In order to interpret it, I need to follow the norms of interpretation. One of those norms is the Principle of Charity, which says that I am supposed to give that which I read and hear the best interpretation possible. The best interpretation possible is the interpretation that I have the most and strongest reason to accept as accurate. When that happens, what are the standards of knowledge for information that I, to at least some degree, helped to shape? Unlike a bucket, which gets handed from person to person as it is, in the case of passing a belief, the recipient gets to decide what is being handed to him, and THEN answer the question of whether or not to accept it.

Political Philosophy

This course mostly concerns international politics. This will give me an opportunity to address a question that I have wondered about. Two questions, actually. One question is the question of secession and unification. History is filled with peoples coming together into larger nations and breaking apart into individual nations. The coming together often - but not always - happens under the threat of violence (empire building). So does the dividing up. One of my interests is in the principles of uniting or dividing nations.

This, however, is a part of another question I have. In international law, there is a great deal of legislating without representation. The United States passes laws that impact people in other countries. However, those people in those other countries do not get a vote. This seems to go against the very principle of democracy. We can make the case that when one group of people makes decisions that impact another without giving that other a say in the making of those laws, this is a type of tyranny. The whole world is governed by a type of tyranny. This hardly seems correct.

Conclusion

So, this is where my thoughts will be going this semester. I will keep you informed as to my progress.

Monday, August 20, 2018

Proving a Moral Ought

In a long and profitable discussion, I came up with a way of expressing desirism that I hope others may find useful.

From three facts, I will derive a moral ought.

In doing so, I will derive “ought” from “is”. However, I will not appeal to free will, intrinsic values, categorical imperatives, impartial observers, social contracts, or any similar fictitious entity. Nor will I sneak values in the back door cloaked in a value-laden term such as “flourishing” or “well-being”.

Specifically, assume that there is (1) a community of intentional agents where (2) each has an aversion to its own pain, and (3) each has a “reward system”.

An intentional agent is one that acts on beliefs and desires (intentional states).

A reward system is a system that processes rewards and punishments, including praise and condemnation, and uses them to create new desires and aversions, and to modify (redirect, strengthen, weaken) existing desires and aversions.

For purposes of this story, we do not need to know how these beings acquired this aversion to their own pain. Perhaps the creatures first evolved an ability to sense things by touch. Then they evolved a disposition to respond to certain sensations with MAKE IT STOP and MAKE SURE THAT NEVER HAPPENS AGAIN.

Similarly, the story of the origin of the reward system is irrelevant. The "aversion to pain" alone would have only allowed the agents to use means-ends reasoning to avoid future pain. The reward system may have evolved as a more efficient alternative. Without it, the agent would have to constantly remember that bees are a potential source of pain and one can avoid pain by avoiding bees. The reward system instead takes experience with bees and gives the individual an aversion to bees themselves. The individual then simply avoids bees. Avoiding pain is an unintended (but welcome) side effect of an aversion to bees.
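To make that mechanism concrete, here is a toy sketch in Python. Everything here is my own invention for illustration - the class name, the learning rate, the 0.5 threshold - none of it comes from any formal theory; it is just meant to show how repeated painful experiences with a stimulus can leave behind a standing aversion to the stimulus itself, which the agent then acts on without any means-ends reasoning about pain.

```python
# Toy model (illustrative names and numbers throughout): painful
# encounters with a stimulus strengthen a standing aversion to that
# stimulus. Once the aversion exists, the agent avoids the stimulus
# directly - avoiding pain becomes a side effect, not the goal.

class RewardSystem:
    def __init__(self, learning_rate=0.5):
        self.learning_rate = learning_rate
        self.aversions = {}  # stimulus -> aversion strength in [0, 1]

    def experience(self, stimulus, pain):
        """A painful experience strengthens the aversion to the stimulus."""
        current = self.aversions.get(stimulus, 0.0)
        # Move the aversion toward 1.0 in proportion to the pain felt.
        self.aversions[stimulus] = current + self.learning_rate * pain * (1.0 - current)

    def avoids(self, stimulus, threshold=0.5):
        """Once the aversion is strong enough, the agent simply avoids
        the stimulus - no reasoning about future pain required."""
        return self.aversions.get(stimulus, 0.0) >= threshold


agent = RewardSystem()
print(agent.avoids("bees"))       # False: no aversion yet
agent.experience("bees", pain=0.8)
agent.experience("bees", pain=0.8)
print(agent.avoids("bees"))       # True: the agent now avoids bees as such
```

The point of the sketch is the last line: after learning, the agent's avoidance is keyed to "bees", not to "pain" - which is exactly the sense in which avoiding pain becomes an unintended but welcome side effect.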

However it came about, our intentional agents have an aversion to their own pain and a reward system.

In this initial situation, if an agent had to choose between (a) a scratch on its finger, and (b) the painless (for him and only him) destruction of the world, he has a reason to choose "not-a" (to avoid the scratch on his finger), but no reason to choose not-b (the painful-for-others destruction of the world). He would choose the destruction of the world over a scratch on his finger.

However, he also has a reason to cause others to have an aversion to causing pain to others. After all, “If I can cause everybody else to have an aversion to causing pain to others, then I can get them to avoid causing pain to me. My own aversion to my own pain gives me a reason to create this aversion in others.”

Our agent not only has a motive, he has a means. He can do this by praising and rewarding those who choose not to cause pain, and punishing and condemning those who cause pain.

These beings might have a primitive mind. "Condemning" might consist only in snarling, snapping, slapping, beating one's chest, baring one's teeth. "Praising" would involve the sharing of food, grooming, play, and sex. This disposition to respond to those others who cause pain (where those others also have a reward system) might be instinctual, maybe even genetic, given its usefulness in preventing others from causing pain.

Or the creature might be a bit more intellectually sophisticated, and actually recognize the relationship between snarling and snapping at those who cause pain and their disposition to avoid causing pain to others, and thus be able to intentionally praise and condemn so as to produce this aversion.

More sophisticated beings with language may invent other ways to express approval and disapproval - to praise and condemn - such as using the terms "good" and "bad", "right" and "wrong", "hero" and "asshole".

But, let's not get ahead of ourselves. At this point we have a collection of individuals with an aversion to their own pain and a reward system. We have a collection of individuals with a reason to cause in others an aversion to causing pain, and a means to do so - by praising those who avoid causing pain and condemning those who do not.

In our hypothetical community, with our initial three conditions in place, we can show that it is also true that (4) everybody has reasons to promote, universally (in all other members of the community), an aversion to causing pain. This is not a new assumption; it is an implication of the assumptions we have already made.

We can soften that condition a little. We can allow that, perhaps, some creatures have no aversion to pain and, thus, do not have a reason to promote in others an aversion to causing pain. Yet, with this weakening, it is still an objective fact of this community that people, generally, have reasons to promote, universally, an aversion to causing pain by means of praise and condemnation, and little reason not to. This is a fact.

So, they adopt the practice of praising those who avoid causing pain to others, and condemning those who do not. As a consequence, they are somewhat effective at promoting a nearly universal aversion to causing pain to others.

We can further allow that complications and complexities mean that different people acquire this new aversion in different strengths. Perhaps it does not work on everybody, and some still lack this aversion to causing pain to others. However, this way of using praise and condemnation is generally effective. We are, after all, dealing with matters of cause and effect - which can be quite complex. These are not matters of logical entailment.

An individual who acquires this aversion to causing pain to others will react differently when confronted with a situation where he must choose between suffering pain himself and causing pain to others. If confronted with a situation where he must choose between enduring a great deal of pain himself or inflicting a small annoyance on others, he may still choose the small annoyance to others. However, when the choice is between a scratch on his finger and extreme suffering for others, he now has more and stronger reason to choose to endure the scratch on his finger. His aversion to the scratch is weaker than his newly learned aversion to causing pain to others in these circumstances.

If he does choose the suffering of others, people generally have reason to respond with snarling and snapping or other expressions of disapproval. If he chooses the scratch on his finger, they have reasons to respond with smiles and the sharing of food and other expressions of approval.

Now, let's deal with this "is/ought" problem.

If I tell you that you ought to do X, and you ask, "Why?", the only legitimate answer that makes any sense is for me to tell you of a reason to do X. If I cannot tell you of a reason to do X - a reason that actually exists - then you can reject my claim that you ought to do X.

That is to say, "ought to do X" implies "there is a reason for you to do X".

"There is a reason for you to do X" is an "is" statement. Deriving "is" from "is" is not at all problematic. Consequently, deriving "ought" from "is" is not problematic either.
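Put schematically (this is my own notation, just a sketch of the inference), let R(x) stand for "there is a reason for you to do x":

```latex
% Sketch of the inference. R(x): "there is a reason for you to do x".
% Both the definition and the premise are "is" statements.
\[
  \text{``You ought to do } x\text{''} \;\equiv\; R(x)
  \qquad \text{(pro tanto ought, by definition)}
\]
\[
  R(x) \;\therefore\; \text{``you ought to do } x\text{''}
  \qquad \text{(``is'' to ``is''; no gap is crossed)}
\]
```

On this rendering, the conclusion follows from the premise by definition alone, so no inference ever moves from a purely descriptive premise to a conclusion of some wholly different kind.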

Let's take a look at what David Hume presents as his argument against deriving "ought" from "is":

"For as this ought, or ought not, expresses some new relation or affirmation, 'tis necessary that it should be observed and explained; and at the same time that a reason should be given, for what seems altogether inconceivable, how this new relation can be a deduction from others, which are entirely different from it."

Hume is mistaken; this relation is not entirely different. The "ought" relation is an "is" relation that links an action to reasons for action that exist.

"Ought" is an ambiguous term. I would propose that we have many different meanings of "ought". These different "ought" meanings refer to different relationships between actions and reasons for action that exist.

The three principal meanings of "ought" are the pro tanto ought, the practical ought, and the moral ought. The "ought" I am talking about in the discussion above, "having a reason", is the "pro tanto ought". A pro tanto "ought to do X" is a statement that there is at least one reason - perhaps defeatable, and perhaps not a very good reason, but a reason - to do X.

So, then, where do we get these "reasons to do X"?

Hume himself tells us where. On Hume's account, if there is a reason to do X, then there is a desire that would be served by doing X. Our agent's aversion to its own pain is his reason to avoid being in pain. This aversion to pain, where others have a reward system, is his reason to praise and condemn others so as to create in others an aversion to causing pain. Once a person has an aversion to causing pain, then they have a pro tanto reason to avoid causing pain. A person that has both a reason to avoid his own pain and a reason to avoid causing pain to others, forced to make a choice between enduring some pain himself or causing some pain to others, will need to look at whether and to what degree each aversion is served by the different options - weighing the service to each aversion against the service to the other, and judging which action best serves those aversions.
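The weighing described at the end of that paragraph can be sketched as a toy calculation. All of the names and numbers below are illustrative assumptions of mine (there is no canonical way to quantify aversions); the sketch only shows the structure of the Humean idea - pick the option that least frustrates one's aversions, weighted by their strengths.

```python
# Toy model (illustrative names and numbers) of weighing competing
# aversions: each option frustrates each aversion to some degree, and
# the agent picks the option with the least total weighted frustration.

def choose(options, aversions):
    """options: option name -> {aversion name: degree frustrated (0..1)}
    aversions: aversion name -> strength. Returns the least-bad option."""
    def total_frustration(costs):
        return sum(aversions.get(name, 0.0) * degree
                   for name, degree in costs.items())
    return min(options, key=lambda name: total_frustration(options[name]))


# An agent with a strong aversion to its own pain and a (learned,
# somewhat weaker) aversion to causing pain to others:
aversions = {"own_pain": 1.0, "causing_pain": 0.8}

# A scratch on one's own finger vs. extreme suffering for others:
options = {
    "endure_scratch": {"own_pain": 0.1},
    "hurt_others":    {"causing_pain": 0.9},
}
print(choose(options, aversions))  # endure_scratch

# Great pain for oneself vs. a small annoyance to another:
options = {
    "endure_great_pain": {"own_pain": 0.9},
    "annoy_other":       {"causing_pain": 0.1},
}
print(choose(options, aversions))  # annoy_other
```

Note that the two cases come out differently even though the agent's aversions are fixed - which matches the earlier observation that the agent with a learned aversion to causing pain will endure a scratch to spare others great suffering, but may still impose a small annoyance to spare himself great pain.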

We need to carefully distinguish between the concepts, "Agent has a reason to do X" and "There is a reason for Agent to do X". These two statements do not mean the same thing. Yet, moral philosophers often use them interchangeably, generating a great deal of confusion.

The two phrases are different. Consider the phrases "Agent has $10" and "There is $10." Certainly it is possible for it to be the case that "there is $10" is true while "Agent has $10" is false. This is the case if somebody other than Agent has $10.

Alph and Bet each have an aversion to their own pain. In our initial situation, before any praising or condemning has taken place, Alph has a reason not to cause pain to himself. Alph has no reason to avoid causing pain to Bet. However, there is a reason for Alph not to cause pain to Bet. Bet has that reason. While the reason that Bet alone has will not motivate Alph in any way to avoid causing Bet pain, it will motivate Bet to praise and condemn Alph, thereby creating in Alph a new reason - an aversion to causing pain to others. Once this new reason is created, then Alph will have a reason not to cause pain to Bet.

Now, let us go back to the definition of "ought". To say "You ought to do X" is to say "There is a reason for you to do X." (Recall that I am talking about the pro tanto ought at this point.) "There is a reason for you to do X" does not mean "You have a reason to do X" - so the "ought" might not be in the least bit motivating. That is, it may not motivate you to do X. If the phrase "there is a reason" is referring to reasons other people have, then it is referring to reasons to praise you if you do X, or condemn you if you do not do X.

Now, we have an "ought" that refers to what people have reason to praise and condemn.

In other words, while there is an "ought" that refers to what an agent has a reason to do, there is another "ought" that refers to what people generally have reason to praise or condemn an agent for doing (or not doing). Another way to say this is that we have an "ought" that refers, not to what an agent HAS a reason to do, but to what an agent SHOULD HAVE a reason to do.

Returning to our original situation, where agents have an aversion to their own pain and a reward system, an agent has no reason to refrain from causing pain to others. However, the agent ought to have such an aversion. By this, I mean that people generally ought to praise those who refrain from causing pain to others and condemn those who do not. Which means that people generally have reasons to praise those who refrain from causing pain and condemn those who do not. Which is true because people generally have desires that would be served by praising those who refrain from causing pain and condemning those who do not. Those desires are their own aversions to pain.

Now, finally, to say that it is wrong to cause pain to others is to say that a person who had the desires she ought to have (including an aversion to causing pain to others) would not have caused pain to others in those circumstances. It is wrong, and it is a type of wrong that people generally have reason to respond to with condemnation.

That's it then. From (1) a group of intentional agents who (2) each has an aversion to their own pain and (3) a reward system, we get "It is wrong to cause pain to others". There are reasons to praise those who refrain from causing pain, and to condemn (even to punish) those who do not. And there are few (if any) reasons not to do so.

There is no god, no intrinsic values, no categorical imperatives, no social contract, no free will, no impartial observers, no veil of ignorance. There is also no "sneaking values in the back door" by using value-laden terms such as "flourishing" or "well-being". There are simply relationships between states of affairs and desires, where desires provide reasons for action.

Friday, August 17, 2018

Moral Progress

Moral Progress (20180817)
Alonzo Fyfe


I have been asked to explain moral progress and moral disagreement.
I have written this presentation several times. However, I keep learning new things, so, rather than referring back to an original example, I keep hoping that each time I can do it better.

A Theory of Morality

I start with a physics model. I note how physicists, when they seek to explain physical forces, begin by assuming a simplified universe of frictionless surfaces, perfectly spherical bodies, massless strings, or a universe consisting of only two bodies. Once we understand the simple mechanics, we can add in complexities.

Simple Beginnings

My simple universe is a community of beings with only one desire - an aversion to their own pain. This aversion can be expressed as a propositional attitude - a “desire that I not be in pain” that assigns a negative value - a “to be preventedness” to any state of affairs where “I am in pain” is true. In having an aversion to their own pain, each person has a reason to avoid (to prevent or to stop) being in pain.

Some may hypothesize that everybody also has a reason to prevent or stop others from being in pain. On this hypothesis, there is something in the nature of “that entity is in pain” that generates a reason to prevent or stop it from being the case that “that entity is in pain”.

We are going to hypothesize that no such power or entity exists. Each entity has only one reason for action - the prevention of their own pain. This means that if a being encounters a situation where he must choose between (1) a mildly annoying scratch on his finger, and (2) excruciating pain for everybody else, his only reason for intentional action is to avoid the mild irritation on his finger.

However, we will postulate the existence of a reward system. This system processes rewards and punishments (including praise and condemnation) to generate new reasons for action or modify existing reasons. If, by pressing a button, one gets fed, one acquires an interest in pressing the button - not only so that one can get fed, but for its own sake. If, by pressing the button, one gets an electric shock, one acquires an aversion to pressing buttons. This is not just a means to avoid pressing buttons. One will tend to avoid pressing buttons even when one knows it will not produce a shock. One acquires an aversion to (and, thereby, a reason not to) press the button.

So, by praising and rewarding those who refrain from actions that cause pain to others, and punishing and condemning those who do cause pain to others, one can create in others an aversion to causing pain to others. In creating this aversion, others will come to have two reasons for intentional action: to prevent the realization of their own pain, and to avoid actions that bring it about that others are in pain.

In this way, others actually acquire a reason not to cause pain to others. However, it does not arise from the very nature of others being in pain. It arises as a result of rewards (such as praise) and punishments (such as condemnation) acting on their reward systems to create aversions to causing pain.

The effect is the same. As a result, agents come to have two reasons for action that sometimes conflict. Now, faced with a choice between a mild irritation on their finger and excruciating pain for others, the aversion to causing pain to others outweighs the aversion to one's own pain, and the agent chooses the mild irritation. However, if faced with a choice between excruciating pain for oneself and a mild irritation for another, the agent may still choose the mild irritation for the other.

Explanatory Power

We have now a situation where everybody has a reason not to cause pain to others. However, it does not require the existence of a mystical power of one person’s aversion to his own pain to automatically (magically) generate reasons for others. Furthermore, it explains and predicts the ubiquitous use of praise and condemnation (and other forms of rewards and punishments) in morality.

It explains why punishment and condemnation are the appropriate response to wrong acts. Obviously, punishment and condemnation cannot prevent the action that has already taken place. However, the act shines a spotlight on an area where it would be useful to apply additional reasons-generating punishment and condemnation. It is the person who performed the wrong act who is deserving of condemnation and punishment precisely because there is reason to promote an aversion to doing that type of act. The punishment must be “proportional” to the wrong since the strength of the aversions we have reason to create is proportional to the strength of the reasons we have to create that aversion. We have fewer and weaker reasons to promote aversions to trivial wrongs than we do to massive large-scale wrongs. The theft of $5 deserves less punishment and condemnation than the rape and murder of a child.

The “inherent power to create reasons in others by the very nature of a wrong” cannot account for these features. Instead, these become arbitrary, unexplained elements of the mysterious reasons-generating power. In the same way that this power mysteriously generates reasons for others, it mysteriously generates a rightness to responding to wrong behavior with condemnation and punishment, and it mysteriously dictates the appropriate level of condemnation and punishment, none of which can actually be explained.

Note that it is not a valid objection to this theory to state that it cannot account for moral behavior – for the fact that one person helps another at what would otherwise be a cost to himself, or that a person refrains from taking the property of another even when she can do so without getting caught. It explains this behavior as a consequence of desires and aversions acquired through social conditioning.

A person with a desire to help others has a reason that may well tilt the balance in favor of helping, when all other concerns would have said not to help. A person with a learned aversion to taking the property of others without consent is no more likely to take the property of others when she can get away with it than a person with an aversion to pain is likely to burn herself when she can get away with it.

More importantly, the theory being proposed provides a more sensible real-world account of what those reasons are and how they came to exist. By praising and rewarding those who help, a society creates in others a desire to help that, at times, will tilt the balance against all other concerns. By condemning and punishing those who take the property of others without consent, we create an aversion to taking the property of others without consent, which leaves our property secure even when others can take the property without getting caught.

This system also explains why the reasons different people have do not have the same strength – why some people will not help when others will, and why some people steal when other (better) people would not.

Part of the reason for this is that the basic biological foundation on which experience works is not the same for all of us, so identical environments will still not produce identical results. Yet even apart from this, we do not have identical experiences. One person may have taken property without consent as a child and gotten away with it, obtaining the benefits (which serve as a reward) without the punishments, and thus never developed an aversion to taking property without consent. Taking property may even have been essential to his survival. As an adult, he teaches his children how to take property successfully. This weakens the aversion to taking property in those children. Of course, those children may also experience the condemnation and punishment of others, and are certainly told of the harms of getting caught. Consequently, they may be torn – their feelings pulling them in two contradictory directions.

In the same way, this system explains the cultural variation in attitudes. Different people in different cultures experience different patterns of reward and condemnation, giving them different reasons for engaging in certain types of behavior. Those growing up white in a culture of black slavery or white privilege experience rewards and praise for racist attitudes, and learn an attitude of contempt towards blacks. Those growing up in a culture where homosexuality is condemned acquire negative attitudes towards homosexuals and homosexual acts. These are the consequences of rewards and punishments.

The Cost of Error

I have contrasted two theories of morality.

In one theory, the very nature of one person’s suffering generates in others a reason against causing suffering.

In the other theory, the very nature of one person’s suffering and the fact that he is surrounded by people whose brains contain a “reward system” implies that he has reason to use rewards and punishment (including praise and condemnation) to cause others to have aversions to causing suffering – aversions that give others reasons against causing suffering.

Both theories end up at the same point – a reason against causing suffering to others. There is nothing in this reason itself that carries a marker of its origins. All one knows is that one has a reason. It takes theoretical work to try to figure out where it came from.

This means that it is possible for a person who has a reason against causing suffering brought about by the second system to make a mistake and think that it came about through the first system. Such a person believes that the reason they have against causing suffering exists in virtue of being able to correctly perceive an intrinsic reasons-making property in other people’s suffering.

This can be a very costly error.

A person who believes that she is under the watchful eye of a guardian angel who will not allow her to suffer harm may take risks that she would not otherwise take. This could have unfortunate consequences as the act she thinks is safe turns out not to be safe at all.

The person who thinks that reasons come from the nature of things may fail to take steps to create those reasons in others – or to properly advocate that others in the community do the same thing. The result is a community where people are more prone to lie, steal, vandalize, engage in fraud, kill, and engage in prejudiced and discriminatory acts than they would be if the causes of reasons were properly understood. The advocate of natural reasons is more prone to let the chips fall where they may. After all, the reasons already exist. It is not as if he needs to put any work into creating them.

More importantly, a person with a learned desire that p or a desire that not-q, who takes his reasons to realize p or prevent q to exist in virtue of an intrinsic reasons-generating property, is not going to be open to evidence that his attitude should change. His aversion to homosexual acts is taken to be caused by an internal “not to be doneness” built into homosexual acts. Any argument to the effect that this attitude is harmful and should be changed will be met with the response, “But it is wrong by its nature, and your arguments that we would do better with a different attitude are irrelevant.”

The intrinsic reasons-generating property theory is inherently conservative and resistant to change. It takes the learned attitude of the agent and makes it an intrinsic property of the thing being evaluated. Ultimately, the agent’s own learned likes and dislikes are taken to be a marker of all that is good and right in the world. He ignores evidence concerning whether having such an attitude – having it himself and having it made universal throughout the community – is such a good idea, and insists instead that he is seeing intrinsic reasons-making properties that make such arguments irrelevant.

This is particularly dangerous when two people – or two groups – come into conflict. One of them has promoted in their community a desire that p, while the other has promoted a desire that not-p. Members of the first community have reasons to realize p, and members of the second community have reasons to prevent its realization. If each side takes its reasons to be generated by an intrinsic reason-generating power in p, they have a conflict, and there is no way to resolve this conflict short of war. If, instead, they realize that their reasons are grounded on learned preferences – taught through social customs of praise and condemnation – they can then ask, “Which learned preferences do we really have the most and strongest reasons to promote?” To answer this question, they can enter into debate and discussion.

There is a right answer. The fact that we are arguing about the relative merits of promoting particular sentiments universally (rather than about the intrinsic reasons-making properties of things) does not imply that the answer is merely a matter of individual taste or preference.

Moral Progress

But is there moral progress? Can one culture be better than another?

Let us return to the starting community populated by people with an aversion to their own pain and a reward system. These people have reasons to prefer a community populated by people with an aversion to their own pain, a reward system, and an aversion to causing pain to others. The latter community is clearly better than the former community.

This community might well (falsely) believe that if they promote an aversion to causing pain to others, an all-powerful god will punish them all with excruciating pain. Suffering from the effects of this belief, they actually condemn anybody who promotes an aversion to causing pain to others. Advocates of this vile philosophy that risks their god’s vengeance are burned alive as a warning to others. Regardless of what they believe, and regardless of the feelings that are generated as a result of this error, they still have more and stronger reasons to form a society that includes an aversion to causing pain to others. They are simply unaware of this moral fact.

Becoming aware of it, and then creating that universal aversion, represents moral progress.

The abolition of slavery, social equality for women, the combatting of child abuse, recognizing the moral permissibility of homosexual relationships, all represent moral progress. They all represent creating sentiments (like the aversion to causing pain in the hypothetical example) that people generally actually have reasons to promote universally.

Moral Argument

This also brings to the surface the possibility of error and moral debate. In the example given above, the people who (falsely) believe that a god will punish them with excruciating pain if they reward/praise behavior that avoids causing pain and punish/condemn acts that cause pain are barriers to moral progress. They are wrong. Regarding the reasons for promoting a universal aversion to causing pain to others, they are on the wrong side of the issue, and those who argue for creating such an aversion are on the right side. This goes hand-in-hand with the thesis that there is such a thing as moral progress, and the false believers are standing in the way of progress.

In some cases, it may be difficult to determine what attitudes we have reasons to promote. In the case of capital punishment, the “ultimate punishment” for the performance of certain types of crimes may be useful in promoting even stronger aversions to committing those types of crimes. This would give us reason to promote universal acceptance of – and even a desire for – this type of punishment. On the other hand, capital punishment is killing. Promoting a universal desire to kill may not be such a wise idea.

One might respond, “No, we are only cheering the killing of the guilty!” That might be the intent, but the cheering takes place in the real world and one has to look at the real-world effects. In a population of 300 million people, cheering the killing of the guilty will be internalized in some of the population as killing those who deserve it. They may classify as “deserves it” people who have committed the slightest wrong – who they perceive as having cut them off on the highway, looked at them without a proper acknowledgement of respect, or failed to hand over money the person thought he had a right to take. Attitudes about who deserves killing are going to differ. As a result, cheering killings may create more murders than it prevents.

We can have genuine disputes over issues such as government assistance to the poor, taxation, what to allow people to say and what to prohibit from being said, the treatment of animals, the treatment of future generations, the accumulation of wealth. We may well discover reasons for and reasons against promoting certain attitudes. We may find people claiming that reasons exist that do not, in fact, exist, such as the “vengeful god” referred to in the hypothetical example – the god that would inflict suffering on a community that promotes an aversion to causing suffering.

Disagreement does not imply that there is no right answer. Nor does it get in the way of people saying, “Here are good reasons to promote such an attitude” and backing it up with empirical evidence, and “Those are bad reasons for promoting such an attitude” and backing that up with empirical evidence as well.

Moral Persuasion

So, now, imagine that you have been captured and taken into the woods by somebody who plans to cook you slowly over an open fire and then eat you. You desire that this not be the case. You wish to try to prevent this from happening. What are your options?

According to the theory presented here, you have three options.

Option 1: You can try to point out to him that he already has a reason not to cook you and eat you that he might not be aware of. Perhaps you can convince him that you are riddled with parasites, and he certainly does not want to eat somebody riddled with parasites. Another option is to convince him that you have an all-knowing, all-powerful, invisible friend who will punish anybody who cooks and eats you, causing them endless suffering. This wouldn’t be true, but it might be effective.

Option 2: Change his desires. If you have a pill that causes people to have an aversion to killing and eating others, see if you can find a way to get him to eat it. Failing that, he has a reward system, and it may be that by praising the decision not to cook and eat you, and by condemning the decision to do so, you may effect a change in his desires and create within him an aversion to cooking and eating people. This takes a lot of work, though – a lot of time and a string of experiences – so it will likely not be effective if he plans on eating you for the evening meal.

Option 3: Prevent him from acting. Kill him, or escape, or . . . better yet . . . do both, in whichever order is most convenient (or possible) in the circumstances.

These are your three options. (1) Convince him that cooking and eating you will thwart a desire he already has, (2) create within him a new desire that would be thwarted by cooking and eating you, (3) make it impossible for him to cook and eat you.

You could try to tell him that, by its very nature, cooking and eating you has its own intrinsic reasons-creating property and that in virtue of this fact he already has a reason not to eat you that he might not be aware of. Personally, I would go with the story of the all-knowing, all-powerful invisible friend. It sounds more plausible.

There is another option, but it requires some advance planning. If successful, you will not even end up in this situation, so you will not need to ask yourself what to do if you were in such a situation. This is to get together with others in your community and convince them that you all should use your collective powers of reward and punishment, including praise and condemnation, to promote universally an aversion to cooking and eating people.

Given the massive complexities of human society, the massive complexities of the human brain (including other genetic and environmental influences), and the massive varieties of experiences a person may have, one will not eliminate the possibility of being killed and eaten. However, to the degree that one can promote such an aversion, to that degree one can at least reduce the chances.

Once again, I want to remind the reader that the question, “What desires do people generally have the most and strongest reasons to promote universally using these tools of reward (such as praise) and punishment (such as condemnation)?” has a right answer – at least in many cases. In our original simple universe, the value of promoting universally an aversion to causing pain was not a matter of opinion. In our society, the value of promoting aversions to lying, fraud, sophistry, theft, vandalism, assault, rape, murder, and the like is not a matter of opinion. There is an objective fact of the matter. People generally have many and strong reasons to promote, universally, these desires and aversions by praising those who act correctly and condemning/punishing those who act wrongly.

Conclusion

This, then, is a brief account of the position I have been defending.

There are no intrinsic reasons-creating properties in the natures of particular states of affairs. There is, instead, a set of desires and aversions and a learning system we evolved to have because it was useful in generating new desires and aversions. Once this reward system came into being, it was there for others to manipulate. All they needed to do was manipulate each other’s experiences – their rewards and punishments – to generate behavioral rules useful to those doing the manipulating. With whole communities manipulating everybody’s learned desires and aversions in ways that had, as their (unintended) effect, the well-being of the whole group, the institution of morality came into existence.

This system is so simple that animals could use it. If one animal performs an action that is perceived to threaten the interests of another animal – either its own interests or its interest in the well-being of others such as its children, mate, and friends – it has reason to respond with condemnation. This takes the form of snarling, growling, swiping with paws, and making other aggressive gestures. If they perform useful actions, others can reinforce those useful actions using rewards such as grooming, food, sex, or even positive gestures such as a smile or pleasant “approving” noises.

This does not need complex cognition. It needs nothing more than, “If it seems favorable, respond with rewards and praising noises. If it seems dangerous, respond with punishments and condemnation noises”. We become angry, shout, and insult those who perform actions that we perceive as a threat. That this seems so natural is no accident. It is because harm to us does not have any intrinsic reasons-making properties, but we can act on the reward systems of others to create those reasons nonetheless.

People generally have many and strong reasons to promote some desires and aversions universally. This applies to aversions to lying, theft, assault, rape, and murder. It applies to desires to keep promises, repay debts, and help those in desperate need. There are right answers to the questions about what sentiments to promote and what sentiments to avoid promoting.

From this, without any need for the existence of intrinsic reasons-making properties, we get objective morality.