Makarios asked a series of questions yesterday that I clearly could not answer in a comment, and cannot even answer completely in a post, but to which I would like to give a partial answer. For all practical purposes, Makarios asked, “How do you, Alonzo, know that your moral principles are correct?”
It is an important question for me. Every once in a while I discover a case of people actually applying the position that I defend. When I do, I get this sudden burst of anxiety and think, “Oh no! What if I am wrong?” I feel obligated to run through the arguments again to look for holes. Mostly, I go through the objections that others have raised and see if I have answers.
I do have reasons for thinking that I am not wrong, and it never hurts me to review them.
Another reason for this posting is that it is an important addendum to an earlier post where I criticized Richard Dawkins’ claims about morality. In “Richard Dawkins: Morality and the Selfish Gene” I complained that Dawkins may have made a case for genetic altruism, but he never did get around to talking about morality. In making this objection, I asked a set of questions that Dawkins’ theories do not even touch.
Can I answer those questions?
Success or failure here determines if one is working on a viable moral theory.
The Phenomena of Morality
A moral theory, like all theories, is to be judged according to its ability to account for the various elements of that which the theory is about. The best theory of star formation is the one that best explains the phenomena of star formation and the types of stars that result. Likewise, the best theory of morality is the one that best accounts for the components of morality.
‘Ought’ implies ‘can’. It makes no sense to say that a person ‘ought’ to do something that is impossible. For example, it is not the case that a person ‘ought’ to teleport a child out of a burning building unless it is within his powers to do so. Many theories hold that this requires a force of ‘free will’ with which humans have the power to suspend the laws of physics. This solution is highly suspect. What is this ‘free will’? How does it work? Desire utilitarianism, on the other hand, suggests that this implication captures the fact that morality is concerned with molding malleable desires – those that social forces can influence. It says that it makes no sense to apply these social forces where they can have no effect.
‘Facts’ and ‘values’. Philosophers have generally held that there is a distinction between facts and values. Scientists deal with facts, and values are . . . what? Accounting for values as entities that affect the real world but are not facts is problematic. A better distinction is to say that ‘values’ are claims about relationships between states of affairs and desires, while ‘facts’ are about everything else. The reason we cannot derive ‘ought’ from ‘is’ is that we cannot derive a conclusion about how a state of affairs relates to a set of desires unless our premises contain facts about those desires. Once we add desires to our premises, we can derive oughts.
‘Prohibition’, ‘Permission’, ‘Obligation’. There are three categories of morality as applied to action. All forms of act-utilitarianism say that there are only two: ‘obligation’ (that which maximizes utility) and ‘prohibition’ (everything else). All act-utilitarian theories fail here. Desire utilitarianism accounts for these three categories because they represent three different types of desires. There are ‘good’ desires that we have reason to promote everywhere. There are ‘bad’ desires that we have reason to inhibit everywhere. And there are ‘neutral’ desires that we have reason to promote for some people but not everybody. These ‘neutral’ desires – the desire to paint, to teach, to study astronomy, to design a building, to play football – are the source of our moral permissions.
‘Negligence’. Moral theories that base moral judgments on the intentions of agents cannot account for negligence. The negligent person does not intend to harm others. Yet, because of his inattention, he does so anyway. Desire utilitarianism defeats intention-based moral theories because it can account for negligence. The negligent person’s fault is that he lacks a good desire, or that good desire is not sufficiently strong. That good desire would have motivated him to take precautions to avoid causing harm to others.
The Bad Samaritan. The Bad Samaritan is the term used to represent the moral problem of the person who does the right action, but does it for a bad reason. He saves a drowning child because he wants to be seen as a hero (so he can win the next election). He turns in a notorious criminal for the sake of the reward. His actions are not wrong, but his desires do not allow us to classify him as a good person. Desire utilitarianism says that the right act is the act that a person with good desires would perform. It does not care about the agent’s actual reasons. It is quite possible for an agent to do what a person with good desires would do, only do so for bad reasons.
Weighing Rights. Rights are not absolute. They have weight. The right to freedom of the press ends where the state has a legitimate interest in protecting national security, and it does not extend to libel or slander. Desire utilitarianism takes a ‘right to X’ to mean ‘people generally should have a strong aversion to depriving others of X’, and in some instances one person’s rights are outweighed by duties to others. We must weigh the right to freedom of the press against the government’s obligation to provide for national security. The right to freedom of religion is not a right to force others to attend one’s church. Desires also have weight (or strength). Rights, understood as deprivations to which people generally should have a particularly strong aversion, are therefore compatible with the idea that rights have weight.
‘Mens rea’. In order to prove moral culpability, one must prove ‘mens rea’ (or ‘guilty mind’). Mens rea comes in four flavors: intentionally, knowingly, recklessly, and negligently. Desire utilitarianism holds that moral judgment rests on determining whether agents have good desires or bad desires. Bad desires, and the absence of good desires, are the ‘guilty mind’ that culpability is looking for. All four of these moral categories can be understood in terms of evidence for the presence of bad desires or the absence of good desires.
‘Excuse’. When a person does something that, at first, appears to be wrong, he can sometimes save himself if he can offer a legitimate excuse for his actions. A driver runs over a pedestrian, but defends herself by showing that the car had an unforeseeable mechanical failure, or that she ran over a terrorist who was about to detonate an explosive vest. There are several different types of excuse. What all of them have in common is that they break the implication from a prima-facie bad action to the agent’s desires. They prevent people from inferring that a person with good desires would not have done the same thing.
Moral Subjectivity and Objectivity. Values seem to be subjective. Desire utilitarianism can handle that. Values are relationships between states of affairs and desires. They cease to exist where desires cease to exist. Yet, at the same time, moral values seem to be objective. They do not depend on what the agent wants. Here, desire utilitarianism holds that the value of a desire depends on its tendency to fulfill or thwart other desires. It concerns whether people generally have reasons to promote a particular desire universally (a virtue) or to inhibit it universally (a vice). Moral virtue and vice depend on desires, but are substantially independent of the desires of the agent.
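As a rough illustration only - the representation below is mine and deliberately crude - the evaluation can be sketched like this: a desire is scored not by what the agent happens to want, but by the tendency of the states of affairs it tends to bring about to fulfill or thwart other desires.

```python
# Illustrative sketch only (the names are mine): the moral value of a desire
# is a matter of its tendency to fulfill or thwart other desires, not a
# matter of what the agent being evaluated happens to want.

Desires = dict[str, int]            # desired proposition -> strength of the desire
StateOfAffairs = dict[str, bool]    # proposition -> whether it is true in this state

def net_fulfillment(other_desires: Desires, state: StateOfAffairs) -> int:
    """Fulfilled desires count for their strength; thwarted desires count against it."""
    return sum(s if state.get(p, False) else -s for p, s in other_desires.items())

def value_of_desire(tends_to_bring_about: list[StateOfAffairs],
                    other_desires: Desires) -> int:
    """Score a desire by the states of affairs it tends to bring about,
    measured against all the other desires that exist."""
    return sum(net_fulfillment(other_desires, s) for s in tends_to_bring_about)

# An aversion to lying tends to bring about states in which other desires
# (for true beliefs) are fulfilled; a desire to take others' property tends
# to bring about states that thwart other desires. People generally have
# reasons to promote the former (a virtue) and to inhibit the latter (a vice).
others = {"my beliefs are true": 2, "I keep my property": 5}
print(value_of_desire([{"my beliefs are true": True,  "I keep my property": True}], others))   # 7
print(value_of_desire([{"my beliefs are true": True,  "I keep my property": False}], others))  # -3
```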
These are only some of the elements of morality that desire utilitarianism can account for. There are others that I have not mentioned. Other challenges for a moral theory include accounting for: (1) moral dilemmas (a rare state in which very strong desires one should have demand conflicting actions), (2) the moral education of children, (3) all value terms such as illness, injury, beautiful, useful, crisis, healthy, beneficial, fortunate, and the like (explaining not only what all such terms have in common that makes them value terms but also what makes each term different from the others), (4) supererogatory actions (above and beyond the call of duty), and (5) the relationship between value and reasons for action.
All of this is, admittedly, superficial. The book listed up there on the right side of this blog, “A better place,” examines each of these issues in far more detail. For my purposes here, and within the space allowed, I can offer only a brief outline.
External Connections
Another way to evaluate a theory is through the strength of its connections to other fields of study. A zoological claim has merit not only because it explains and predicts the behavior of an animal, but because the claim is consistent with what we know about chemistry, physics, climatology, geology, mathematics, logic, history, and the like.
One of the reasons I believe that the theory I use in these posts has merit is the strength of its connections to other fields of study.
Metaphysics. The theory makes use only of regular, everyday phenomena. It talks about desires as propositional attitudes – a desire that ‘P’ is a line of brain code that motivates an agent to make or keep the proposition ‘P’ true. It talks about states of affairs. And it talks about the relationships between them: A desire that P is fulfilled in S if and only if P is true in S. There is no ‘free will’ that allows us to suspend the laws of physics, no intrinsic value, no God, no ‘categorical imperatives’, no meeting behind a veil of ignorance, no ideal observer, no social contract, no ‘man qua man’. There are desires, states of affairs, and relationships between them.
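Purely as an illustration - the notation below is mine and nothing in the theory hangs on it - this sparse ontology can be written out in a few lines of Python: a state of affairs fixes which propositions are true, and a desire that P is fulfilled in a state of affairs if and only if P is true in it.

```python
# Illustrative sketch only - the names here are my own, not part of the theory.
# A state of affairs is represented as an assignment of truth values to
# propositions; a desire is identified by the proposition it aims at.

StateOfAffairs = dict[str, bool]   # proposition -> whether it is true in this state

def fulfilled(desired_proposition: str, state: StateOfAffairs) -> bool:
    """A desire that P is fulfilled in S if and only if P is true in S."""
    return state.get(desired_proposition, False)

# Example: a desire that 'the child is safe' is fulfilled in any state of
# affairs in which that proposition is true, and thwarted otherwise.
print(fulfilled("the child is safe", {"the child is safe": True}))   # True
print(fulfilled("the child is safe", {"the child is safe": False}))  # False
```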
Action Theory: This theory employs the most widely used theory of intentional action. It is a theory that explains intentional action as the product of beliefs (a belief that ‘P’ is the attitude that the proposition ‘P’ is true), and desires (a desire that P is a mental attitude that the proposition ‘P’ is to be made or kept true). This produces intention, which (in the absence of a physical defect or restraint) produces intentional action.
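Again, only as a rough sketch with invented names: desires come in strengths, and the agent performs the act whose outcome, as he believes it to be, would fulfill the more and the stronger of his desires. This is why false beliefs can lead an agent to act against the very desires he is seeking to fulfill.

```python
# Illustrative sketch only (my own names again): intentional action as the
# product of beliefs and desires. The agent chooses the act whose outcome,
# as he believes it to be, fulfills the more and the stronger of his desires.

StateOfAffairs = dict[str, bool]   # proposition -> whether it is true
Desires = dict[str, int]           # desired proposition -> strength of the desire

def fulfillment(desires: Desires, state: StateOfAffairs) -> int:
    """Total strength of the desires whose propositions are true in 'state'."""
    return sum(strength for prop, strength in desires.items() if state.get(prop, False))

def chosen_act(believed_outcomes: dict[str, StateOfAffairs], desires: Desires) -> str:
    """The act whose believed outcome best fulfills the agent's desires."""
    return max(believed_outcomes, key=lambda act: fulfillment(desires, believed_outcomes[act]))

# A thirsty agent who falsely believes the glass holds water will drink:
# the act fulfills his desires given his beliefs, though not in fact.
desires = {"my thirst is quenched": 3, "I stay alive": 10}
believed = {"drink":   {"my thirst is quenched": True,  "I stay alive": True},
            "abstain": {"my thirst is quenched": False, "I stay alive": True}}
print(chosen_act(believed, desires))  # "drink"
```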
Evolution. Evolution has certainly influenced our desires so that we tend to want those things that, in turn, tend to cause the replication of our genes. We tend to desire sex, the care of our children, the types of food that kept our biological ancestors alive, and a particular climate, and to have an aversion to pain where pain tends to be caused by that which threatens our reproduction. However, evolution also gave us a brain that is molded by interaction with our environment. We learn, and through learning we acquire beliefs and desires that do not come from our genes. Clearly, there is no gene for believing that today is Thursday or that Saturn has rings. There are also no genes determining some of my desires. Morality is concerned with those malleable desires. Not only is morality about malleable desires, but it is concerned with how those desires can be molded – particularly through praise, condemnation, reward, and punishment. These facts also fit into evolutionary theory.
Economics. Value, understood in terms of relationships between states of affairs and desires, is easily translated into economic concepts of goods (ends – states of affairs in which the propositions that are the objects of one’s desires are true) and price (the types of effort that go into creating those states of affairs).
Overall
So, why do I think that desire utilitarianism is a good theory? Because no other theory accomplishes so much (in terms of accounting for the elements of morality as well as connecting to the other branches of knowledge) with so little (using only desires, states of affairs, and relationships between them).
If there is another theory that does as well, then we should use it.
Either way, the test is: Which theory provides the most efficient account of that which we know as morality?
19 comments:
I very much like your theory, but my question at this point would be this:
Are you really accounting for these 'components' of morality, or are you defining them?
If everything flows from your definitions, then what you have is not so much a theory, but a way of interpreting moral statements.
Define 'ought' differently, and you would get a different moral theory, that would seem morally inferior - but how do I make that last judgement?
It seems to me that I am just looking for a good generalisation and rationalisation of the feeling, inculcated in childhood, that I shouldn't fight or steal. And thus, I would prefer desire utilitarianism over deontology for example.
But had I grown up in a society where fighting was considered good for the gene pool, and stealing considered to show initiative, then I would guess that egoism would be more appealing and I would plump for that.
Moral Subjectivity and Objectivity. Values seem to be subjective. Desire utilitarianism can handle that. Values are relationships between states of affairs and desires. They cease to exist where desires cease to exist. Yet, at the same time, moral values seem to be objective. They do not depend on what the agent wants. Here, desire utilitarianism holds that the value of a desire depends on its tendency to fulfill or thwart other desires. It concerns whether people generally have reasons to promote a particular desire universally (a virtue) or to inhibit it universally (a vice). Moral virtue and vice depend on desires, but are substantially independent of the desires of the agent.
You do realize that this is still subjective, right? I will always want to act in accordance with my desires, but there is no rational reason why I should account for other people's desires. The reason I do is because of my social and genetic inheritance, but a hypothetical rational agent would not be required to take into account other people's desires to be consistent.
Joe Otten
A theory always does both - it accounts for phenomena and provides a set of definitions.
Water = H2O not only accounts for the properties of water, it became the definition of 'water'.
F = m*a (force = mass * acceleration) not only accounts for force, but defines 'force'.
If you were looking for a way to rationalize some feeling that you have, then desire utilitarianism would not be useful. Desire utilitarianism bases morality on the feelings you should have, which are the feelings that people generally have reasons to cause you to have, which are not necessarily the feelings you actually do have.
The actual reasons to prefer desire utilitarianism over deontology have nothing to do with feelings. Mostly, deontology bases all moral evaluations on the concept of 'right action'. However, it cannot tie 'right action' into 'reasons for action' without postulating intrinsic values. Intrinsic values do not exist.
Even if deontology corresponded in some way with your feelings, this would still be an insurmountable problem.
Similarly, 'egoism' depends on an equivocation in terms. Specifically, it equivocates between the concept of 'desires of the self' (the desires that an agent has) and 'desires in the self' (desires that a person has in his or her own well-being).
If egoism were instead to use the concept of 'desires of the self' consistently, then the egoist would be a desire utilitarian. Namely, this is because 'desires of the self' can be 'desires in the well-being of others'. The egoist will discover that people generally have reasons to cause others to have desires that fulfill the desires of others, and can promote those desires through praise and condemnation.
If, instead, the egoist were to stick to the concept of 'desires in the well-being of the self', the egoist will need to define 'well-being'. He will not be able to do so without inventing some sort of intrinsic value property (as Ayn Rand does with her 'man qua man'). That would be a problem for the theory.
Simon
I assume that you realize that your statement "I will always want to act in accordance with my desires," is circular, since your desires are what you want.
Desire utilitarianism says that a person always acts so as to fulfill the more and the stronger of his desires given his beliefs. It also says that a person seeks to act so as to fulfill the more and stronger of his desires. However, false beliefs may prevent him from doing so.
So, for example, a thirsty person acts to fulfill his desires by drinking the contents of a glass that he believes holds water. Instead, it holds poison. He acted so as to fulfill his desires, given his beliefs. He sought to act so as to fulfill his desires. Yet, false beliefs prevented him from actually fulfilling his desires.
As for your statement, "there is no rational reason why I should account for other people's desires", I have to ask, "What does 'should' mean?" What exactly are you saying here?
You go on to say, "The reason I do is because of my social and genetic inheritance." However, please note that this is not a 'should' statement. This is a 'does' statement.
First you say 'should', then you say 'do', as if to say that 'what I should do' means 'what I will do'. Yet, clearly, they do not mean the same thing.
Desire utilitarianism agrees that what you will do at any given time is that which will fulfill the more and stronger of your desires, given your beliefs. However, it also says that what you seek to do is to fulfill your desires.
So, the question 'should I drink what is in the glass' does not mean 'will I drink what is in the glass' but, 'if my relevant beliefs were true, would I drink what was in the glass'.
This is one definition of 'should' - known as the 'should' of practical reason. There is another definition of 'should' - the 'should' of moral obligation.
Now, we've established that you seek to fulfill your desires. One way for you to fulfill your desires is to cause others to have desires that tend to fulfill your desires. It certainly would be more effective than causing others to have desires that tend to thwart your desires.
You should realize that those others also have reason to promote desires that fulfill other desires. This is a fact. You can't deny it. (Well, you can deny it, but you would be wrong.)
So, we have the concept of desires that people generally have many strong reasons to promote, and desires that people generally have many strong reasons to inhibit (the latter being desires that tend to thwart other desires). Furthermore, people generally have a set of tools for promoting the former type of desires and inhibiting the latter type of desires - these being praise, condemnation, reward, and punishment.
Now that we know that these things exist, let us give them names. Desires that people generally have reason to promote, I will call 'virtues', and desires that people generally have reason to inhibit, I will call 'vices'.
When I apply these names 'virtues' and 'vices' I am not adding anything. A virtue is still nothing more than a desire that people generally have reasons to promote through the tools of social conditioning. You can call them something else if you want. They will still be desires that people generally have reasons to promote through social conditioning.
Now, we have three categories of actions.
"The act that will fulfill the more and stronger of my desires, given my beliefs" - what the agent will do.
"The act that will fulfill the more and stronger of my desires" - what the agent 'should' do in the practical sense of 'should'.
And
"The act that will fulfill the desires of an agent who has the desires that people generally have reason to cause agents to have, using social tools such as praise, condemnation, reward, and punishment" - what the agent 'should' do in the moral sense.
Nothing you can say or do will prevent an act of this third type from being what it is. You can say, "I do not care" - and that may be true. Yet, it is still the case that others have reason to cause you to care, or at least (through threats of punishment or offers of reward) cause you to act as one who cares would act.
A 'hypothetical rational agent' has reason to cause others to take into account his desires. And a 'hypothetical rational agent' still lives in a society where other 'hypothetical rational agents' have reason to make him into somebody who takes into account their desires. So, a 'hypothetical rational agent' still lives in a community where people have reason to use praise, reward, condemnation, and punishment to promote those desires that fulfill the desires of others, and inhibit those desires that thwart the desires of others.
He also lives in a society where people have reason not to use these long and convoluted descriptions. So, just so that we can speak more efficiently, we will call the former 'virtues', the latter 'vices', acknowledge that people generally have reason to praise and reward virtue, while condemning and punishing vice, and use a definition of 'should' that means nothing more or less than 'what a virtuous person would do'.
Craig
Also, see The Hateful Craig Problem for a more thorough development of the line of reasoning used here.
I read your "Hateful Craig Problem", and I think I understand better what you're doing. However, when you say something is wrong to do, you're not really expressing any objective fact. You're not saying that you can establish a moral obligation that requires a person to not do that thing, only that people have reason to desire that you don't do such a thing and perhaps that they have reason to prevent you from doing it.
Well, what people have reason to do is relative. If the majority has reason to enslave a minority, then enslaving that minority is a virtue.
That is fine with me, to a point, because I don't believe it's possible to say much more. However, I've seen you lash out before at subjectivist ideas and at the suggestion that this is not objective. So long as you see that this really isn't objective, but rather relative, it's fine.
All your ethics seems to be reducible to descriptive statements. "Such and such is a virtue" is reducible to "such and such is what people have reason to desire that people do." This doesn't establish the prescriptive part, the part that morally obliges you to do this if you want to stay within reason.
A better look at morality by Richard Dawkins is the documentary "Nice Guys Finish First." I'm not sure if you've heard about that or not...
I gave my own view, with a very limited scope here:
http://www.joshuamcharles.com/blog/?p=804
First, I do not understand your phrase 'moral obligation that requires a person to not do that thing'.
Requires, in what sense?
If you are talking about a fact, independent of desire, where mere awareness of the fact compels action, then you are talking about intrinsic value. Intrinsic value does not exist. My response here is to say that morality is not about such entities.
The mistake here is in thinking that anybody needs to come up with such a thing. The universe is filled to the brim with different types of objective facts, and there is no justification for claiming, 'if morality is not an objective fact of type T, then it is not an objective fact.' Not if it can be an objective fact of type Q.
A different type of objective fact, one that takes care of all of the work, is the fact that people generally have reason to promote (through social forces) desires that tend to fulfill other desires. It is an objective fact. It does all of the necessary work.
Next, you say, "If the majority has reason to enslave a minority, then enslaving that minority is a virtue."
No. A virtue is a desire. Enslaving a minority is an action (or a policy). Desire utilitarianism looks at whether people generally have reason to encourage others to desire slaves (or slavery). However, if others desire slaves or slavery, this puts the agent himself at risk. His own liberty is hardly secure in such a society. It is better to cultivate in others an aversion to slavery.
Actually, when I lash out at 'subjectivist' ideas, what I lash out at is what I call 'common subjectivism' - the idea that moral values are based on the likes and dislikes of the agent (or assessor).
Finally, I deny that there is an exclusive distinction between descriptive and prescriptive statements. Value statements are both, at the same time, descriptive and prescriptive. "X is such as to fulfill these desires" both describes a relationship between X and those desires, and at the same time prescribes X for those who have the desires. Moral statements, since they evaluate desires based on their relationship to all other desires, describe desires as being such as to fulfill the desires of others, and prescribe those same desires for the 'others' whose desires would be fulfilled.
It’s generally better to avoid speculation when it’s not necessary, isn’t it? Especially speculation made by a third party which doesn’t have the benefit of direct observation? When confronted by an action that is good, that doesn’t injure anyone, that helps someone who finds it helpful, that doesn’t lead to unnecessary pain, suffering or death, that action is something that actually happens. It is concrete, in the sense that it is part of reality, it affects reality. It is the most important evidence in the situation but that doesn’t get to the much lesser question of motivation.
Asking why the action was done might get an answer from the person who did it. Unless there is compelling evidence that they are lying, what that person says is the testimony given by the only possible witness of whatever motivated the action; their answer is the best evidence. No one else, anywhere, has a direct experience of that motivation. That the person explaining their action might be lying or might be wrong doesn’t change that fact, it only means that less than complete reliability might be the best that can be hoped for. And even with direct evidence from one person who performs an act, that doesn’t tell you anything about another person’s motives, or the motives of the person in another act. There isn’t any reason to suspect that the motives would be the same in every instance even if the acts are similar.
In the absence of that information anything else will be speculative and not particularly valuable for finding out what motivated the action with any level of reliability. Trying to fit the facts into some kind of framework - theoretical, philosophical - is a really inadequate substitute and carries even greater potential for distortion than the direct testimony of the person who did the action. Abstract analysis carries a high potential for distortion; it quite often carries both a pre-fabricated structure and an agenda, which make distortion a much higher likelihood than with the direct testimony of the person performing the action. I fully believe that it is in dealing with things and states of being that can’t be observed that the temptation to fit the nebulous object into a desired form, and so to misrepresent it, becomes strongest. Some might not like that but that doesn’t make it any less true.
And none of this makes the good act any less good. Unless someone is trying to discredit it, in which case someone else might be forgiven for suspecting they’re about as reliable as a cynical gossip columnist with an axe to grind. Given all these hurdles and the general lack of utility of knowing the motivation of a good act and the high potential to attribute bad motives unfairly, why bother trying to figure it out?
Subjectivity is a condition of everything we do and think; it is inherent in the fact that people are loci of experience and action. No matter what you do to attain objectivity you can't escape the fact that you are bounded by your own particular limits. Trying to attain objectivity might cause your boundaries to expand but you can't break out of them. You might say that you can with this or that technique or method, but then you can't escape that it is you who have chosen that means, as well as the attempt. And they can carry the danger of adding a distortion that isn't apparent. No method is all-encompassing; they are also bounded.
Olvlzl
Actually, when people explain their own actions, their explanation is as theoretical as that of the third party explaining the same action.
This has been shown through empirical research, where the variables explaining an action have been carefully isolated. The researchers can tell exactly when and why a person will perform an action. Yet, the agent himself cannot explain it. The agent confabulates a reason - guesses, really - and often guesses incorrectly.
For example, when given a choice from among a set of identical objects (e.g., pairs of socks), a person will pick one. The researchers know which one he will pick before he picks it. He will explain that he picked the sock because of texture or color or some other quality. Yet, these variables have been controlled for, and are not a part of the real explanation.
Anyway, speculating as to the beliefs and desires of an agent is no more problematic than speculating about any other event. We speculate as to why the price of gasoline is going up (or down), the cause of the pain in our leg, the future worth of a company. When we speculate about a person's actions we use available evidence to form a theory as to the agent's beliefs and desires, and from that we make predictions.
It is extremely important for us to be able to explain and predict the behavior of other people. Often, we live or die by the accuracy of those predictions. It affects the quality of our lives in a great many ways. We will continue to do so, with greater and lesser degrees of success.
Yes, it is true that people are disposed to come up with explanations that they want to be true. Thus, many theists explain atheist behavior in terms of 'they hate God' or 'they desire to do evil without guilt.' Yet, the tendency of people to come up with bad theories does not prove that we are utter failures in our ability to explain and predict the universe. That would suggest that our ability to explain and predict behavior is no better than chance. I think the evidence suggests that we do better than that.
On the idea that subjectivity is a part of everything we do, this is probably true. Yet, on the level at which I write, I do not see that it is relevant. My objective is more to show that morality is on the same level as science. If science has a certain degree of subjectivity, that is fine. That hypothesis will not adversely affect this theory.
Alonzo Fyte, I didn't say that a person's explanation of their motives would be perfect or even reliable, just that they were the best hope for an actual witness. They were the only one who experienced their motive; no one else could. The empirical evidence you talk about would represent a range of accuracy, I'd imagine, like any data gathered from different sources at different times. But, in the case of motives for the kinds of good actions we are discussing, judging the accuracy of that self-reporting would be impossible since the problem of not being able to observe something like that is insurmountable. That is one of the biggest problems with any kind of evaluation of reported experience; even with imaging and chemical analysis, etc., the actual, internal experience cannot be observed and those physical manifestations are not what is experienced. The widespread idea that they are the equivalent of the experience itself is one of the more widespread superstitions of educated people in the United States today.
Your point about the desirability of predicting behavior may be true, but that doesn't make it possible, certainly not in any individual case. Take the problems of observation and reporting and compound them with others. While that might be unfortunate, it doesn't make prediction one bit more possible, or the premature attempt any less wrong or less likely to produce injustice.
My focus isn't certainty; in the extremely long literature of trying to find certainty, very little has been produced. Living in a tolerably decent world is probably a more attainable effort.
I, by the way, wouldn't attribute any kinds of motives to a group of any kind because there would be a range of motives within the group and motives within an individual change. In any case, as you can see, I don't have any faith in the ability of people to know the inner heart of anyone. I judge by the results in real life that can be seen and the willingness of people who hurt others to change their behavior. That's more reliable.
As for people wrongly attributing base or dishonest motives, Daniel Dennett is no slouch when it comes to flinging that kind of thing. Read Gould's review of Darwin's Dangerous Idea for just a few well notated examples.
>> The widespread idea that they are the equivalent of the experience itself is one of the more widespread superstitions of educated people in the United States today. <<
What? I doubt you can clarify this in a comment, but do you perhaps have a link to a site that argues this position? Because I can't see any way in which this is superstition. In fact it seems to be the exact opposite - verified and beyond doubt.
>> Your point about the desirability of predicting behavior may be true, but that doesn't make it possible, certainly not in any individual case. <<
Obviously desiring something doesn't MAKE it possible. But in this case it IS possible. Are you arguing that the only way to know why someone did something is to ask them?
Eneasz said...
What? I doubt you can clarify this in a comment, but do you perhaps have a link to a site that argues this position? Because I can't see any way in which this is superstition. In fact it seems to be the exact opposite - verified and beyond doubt.
I suspected that someone might not like this. Someone reporting to have had an experience like a dream or a desire is the only person who has the experience. They are the only direct witness of what that experience was. I'd have thought that was entirely clear. Explain to me how someone else could experience or "see" what the person who had that experience did.
An MRI, chemical analysis of blood, etc. during the experience can tell you some things about the physical condition of the person having the experience, but short of the person reporting what their impression of the experience was, no one would know anything about that. How do you think they get data about that if not by asking test subjects?
I've talked about this with a lot of people, some of them behavioral scientists, and a shocking number of them not only believe that the MRIs and chemical analyses are the equivalent of witnessing the actual experience of another person, but it takes quite a bit of explanation before they can even understand the point. But they can, if they break through an ingrained habit built up through insufficient thinking about this orthodoxy.
>> Your point about the desirability of predicting behavior may be true, but that doesn't make it possible, certainly not in any individual case. <<
Obviously desiring something doesn't MAKE it possible. But in this case it IS possible. Are you arguing that the only way to know why someone did something is to ask them?
The person who did something is the best witness to the motive; they are the only person who experienced the motive. Other people might draw conclusions based on observations of what was happening in the situation, or through other knowledge about the person doing it, but the actual motivating reason wouldn't have been experienced by them. It isn't possible to have high reliability in cases when the person reporting their own experience is correct and honest about it. How can you get that information some other way?
I suspect that defining and finding motivations for what would generally be called good acts is often more difficult than acts that are selfish. The person who does something selfish is doing something for themself. That makes it a simpler matter to figure out what the motivation might be.
What isn't possible is having a high amount of confidence in finding the correct motive, defining what that would be and knowing what the reliability of the conclusion was. That might be unfortunate, but that doesn't lessen the problems of studying the unobservable.
Sorry, I left out a crucial word here:
It isn't possible to have high reliability in cases when the person reporting their own experience is NOT correct and honest about it. How can you get that information some other way?
olvlzl
It seems to me as if you are skeptical about a person's ability to know the mental states of another person.
Actually, I hold that it is quite easy to do so, and that we are very good at it. I know that my wife likes to have coffee in the morning, so I make her coffee. I don't drink the stuff. I think coffee is the most foul brew ever concocted. However, it will please her to come down the stairs in a few minutes and see hot coffee waiting for her.
I know that she believes she has to be at work at 7:30, and very much likes to be to work on time. I believe that my boss is expecting me at around 7:45.
I am confident that a huge percentage of my neighbors do not want to be shot, stabbed, or burned. Even though there is a range of motivation among my neighbors, if I assume this motivation I will be right more often than I am wrong.
They also do not like loud noises at 4:00 a.m., so, as I get up and get ready for work, I will not turn my stereo up as loud as I would play it, for example, on a Friday evening when I get home from work.
When I flip through the channels, I can fairly reliably predict whether my wife would want to see a particular show, and I will tape it for her. My record here is not perfect, but it is better than random chance.
I am certain that my wife believes that we live in Colorado, that the Earth is a sphere-shaped object orbiting the Sun and is orbited itself by a moon, that water is made up of H2O, that we have a cat named Tsunami and that Tsunami is a male Bengal, that Christmas comes in December, and that the standard year is 365 days long.
I would go down a list of her likes and dislikes (desires), other than a fondness for cats and for coffee, but I know that she would not like me to talk about her too much.
I can go down a list of my boss's likes and dislikes as well.
I even have a long list of desires that I can reliably attribute to any person I meet on the street. Though it is not perfectly reliable, my understanding of human motivation is good enough to make my beliefs accurate most of the time.
It is, in fact, as easy to know the motivational states of another as it is to know one's own motivational states. The evidence I cited earlier, showing that people often do not know their own motivations, does not merely show that people sometimes make mistakes when assigning motivation to themselves. It shows that people use the same tools to assign motivation to themselves that they use in assigning motivation to others. They observe behavior, and they come up with the best theory (in terms of beliefs and desires) to explain that behavior.
The difference between a person explaining their own behavior and explaining another person's behavior is not some 'special access' to one's own motivation. It is the fact that, when it comes to one's own behavior, one has collected a whole heck of a lot more data. I have huge quantities of data from which I can draw theories as to my own likes and dislikes. With that much data, it is far less likely that I will make mistakes.
Yet, as this scientific research points out, even adults continue to make mistakes when theorizing about their own motivation. People still operate in part on unconscious motives as well as motives they cannot admit (even to themselves) that they have.
I am aware of philosophical arguments that beliefs and desires do not exist. Those arguments classify beliefs and desires as part of a primitive 'folk theory' of behavior that needs to be replaced with a better scientific theory.
I am not averse to this view. My attitude is, "Fine. When you have that better theory, let me know. In the meantime, I have no choice but to continue to work with the best theory we have available today. As the evidence above indicates, beliefs and desires appear to be the best way to explain and predict human intentional actions. We appear to have a great deal of success with it. So, that is the theory that I will continue to use."
Some of your statements appear to make reference to the philosophical concept of 'qualia'. Qualia refers to the subjective experience of a mental event. A scientist can tell you about the c-fiber firings and the mechanics of brain processing, but cannot give you the 'qualia' or the feel of pain itself. It cannot isolate the badness of pain.
This is captured in, for example, Thomas Nagel's argument, "What Is It Like to Be a Bat?", and Searle's "Chinese Room" argument (which postulates a machine that can process Chinese characters by a set of rules so that it appears to understand Chinese but does not do so).
These arguments are countered by modifications of Turing's test for artificial intelligence. One form of this argument states that if I can create a robot that nobody can distinguish from you, and I can explain how the robot works without talking about qualia, then there is no reason for me to believe that you or I experience qualia either. By Occam's Razor, we may eliminate this from our ontology - we do not need it.
There is another problem with attributing private states to ourselves. How do we know what to call it?
Words in a language are public. We learn to speak a language by observing others who use that language. Private experiences are necessarily private. I learned the definition of 'red' by living in a culture where people pointed to various objects and used the term 'red'. If there is something to 'red' other than photons with a particular energy level striking the retina of the eyes and setting off a neural pulse - if there is a private 'experience of redness' - I cannot have a word for it.
There is no way to determine if my private experience of redness is identical to your private experience of redness. The only thing that we can know is that my public experience of redness matches your public experience of redness, and the public experience is all I learn when I learn the word.
Anyway, all of this suggests that, in spite of the possibility of error, the ability to reliably (though not perfectly) determine the beliefs and desires of others is on fairly solid footing. It is good enough for practical purposes - good enough for us to get a benefit from, even with its imperfections.
Alonzo Fyte, you might be good at guessing what someone you know very well might be thinking, but that's not the same thing as being able to know their motivation without asking them.
I'm actually pretty strict when it comes to using the word "know". I think it's quite clear that everyone relies on belief and even faith in the majority of their thinking and acting. No one has the ability to prove most of what they need in order to function in daily life, or even in their professional lives as scholars or scientists. Math comes closest but even a mathematician can't constantly prove to themselves everything they need in order to work, they rely on their belief that their colleagues are correct.
The habit of forgetting how much we rely on things entirely untested or undemonstrated by us is most liable to become a serious problem as the systems we're dealing with grow more complex. This might be unfortunate but there really isn't any way around it. It's just the consequence of being a limited creature living beyond our limits.
My guess is that in the sciences those dealing with behavior and cognition are the worst offenders in pretending certainty and knowledge when what typically happens is that the authority of someone else is accepted with little question.
‘Negligence’. Moral theories that base moral judgments on the intentions of agents cannot account for negligence. The negligent person does not intend to harm others. Yet, because of his inattention, he does so anyway. Desire utilitarianism defeats intention-based moral theories because it can account for negligence. The negligent person’s fault is that he lacks a good desire, or that good desire is not sufficiently strong. That good desire would have motivated him to take precautions to avoid causing harm to others.
Not sure if this argument works against intention-based moral theories, if I understand them correctly. You say, "The negligent person does not intend to harm others. Yet, because of his inattention, he does so anyway." Surely the negligent person also does not intend not to harm others? The difference between a negligent and a non-negligent person, in intention-based moral theory, would be that the non-negligent person does have a general intention not to harm others. This can affect how specific intentions are carried out, as in paying suitable attention that these acts do not harm others - in other words, that they are not negligent.
So, to paraphrase the last two lines of your quote above: The negligent person's fault is that he lacks the general intention not to harm others, or that general intention is not sufficiently strong. That general intention would have motivated him to take precautions to avoid causing harm to others. Now, presented this way, I can't see how negligence can be used against intention-based moral theory. Am I missing something?
Martino
Intention-based theories state that the moral value of an action is determined by the moral value of the intentions exhibited in that action.
It would seem that an intention that does not exist (the lack of an intention to prevent harm to others) would have a moral value of zero.
Yet, a negligent act has a negative moral value.
To what is that negative moral value attributed? How can it be attributed to something that is not present?
(Note: Desire utilitarianism does not say that the value of an act is determined by the value of the desires from which it sprang. It says that the value of an act is determined by whether a person with good desires would have performed that act. The absence of a good desire can have moral value, because people generally have many strong reasons to promote that desire.
One could make the same claim about intentions. However, the question then becomes, "How does one promote a good intention?" Since intentions are the product of beliefs and desires, and beliefs are governed by the doctrine of truth, this means that the only way to promote good intentions is to promote good desires - and we end up in exactly the same spot.)
Intention-based theories state that the moral value of an action is determined by the moral value of the intentions exhibited in that action.
Agreed
It would seem that an intention that does not exist (the lack of an intention to prevent harm to others) would have a moral value of zero.
Disagree that all absent intentions have a moral value of zero. Since:
Yet, a negligent act has a negative moral value.
To what is that negative moral value attributed? How can it be attributed to something that is not present?
By allowing that, in the case of absent intentions, were they present they would have a moral value; their moral value in their absence is then the negation of their moral value when present.
Note: Desire utilitarianism does not say that the value of an act is determined by the value of the desires from which it sprang. It says that the value of an act is determined by whether a person with good desires would have performed that act. The absence of a good desire can have moral value, because people generally have many strong reasons to promote that desire.
One could make the same claim about intentions.
Agreed but then my point is made that intention-based moral theories can handle negligence and I am interested in what theories cannot.
However, the question then becomes, "How does one promote a good intention?" Since intentions are the product of beliefs and desires, and beliefs are governed by the doctrine of truth, this means that the only way to promote good intentions is to promote good desires - and we end up in exactly the same spot.
This is, I believe, irrelevant to the main point here.