Some false beliefs are harmless and can lend themselves to casual debate. Others are dangerous, and disabusing people of those beliefs takes on added importance.
Moral sense theories fall into the latter category.
Evolutionary ethics is not the only moral sense theory out there. Many religious views of ethics fall into the same category. They hold that God has written a true moral code into the believer's brain. Reflecting on how one feels about a particular action is considered pretty much the same as asking God, "Is this right or wrong?" If it feels good, then this is taken as God's permission to go ahead and do it.
Common subjectivism is another moral sense theory. Only, the common subjectivist denies that there are any external moral values to sense. Consequently, one's feelings can never be mistaken - there is no fact of the matter to check them against. This, too, leads to the conclusion that "if it feels good to you, then do it."
Evolutionary ethics says that we evolved a faculty for sensing moral properties that we can consult to determine right from wrong. Though individuals might suffer from the occasional "moral illusion" (similar to optical illusions), this sense organ can still be trusted to be reliable.
What all of these theories have in common is that they tell people to answer moral questions by turning their attention inward - by asking and answering the question, "How do you feel about this?"
These internal theories of morality stand in contrast to external theories that say that, to determine moral facts, you have to look outside of yourself at the real world.
Desire utilitarianism, for example, is an externalist moral theory. Instead of looking inside yourself and asking how you feel about something to determine right from wrong, you need to look outside of yourself and ask whether people generally have reason to promote or discourage such a feeling.
If you are perfectly comfortable with the thought of torturing somebody, abusing a child, lynching a black, locking the members of a particular religion in a church and setting fire to it, herding Jews into death camps, enslaving a race, exterminating the Native Americans, raping, stealing, lying, or engaging in reckless conduct that puts others at risk, this is not morally relevant. What is relevant is whether people generally have reason to promote or discourage that feeling.
The reason that moral sense theories are not just wrong, but dangerous, is that they tell people like those described above that they can trust their feelings when measuring the morality of their conduct. In telling them this, these theories give them moral permission to act on those feelings. In the cases listed above, this is not a good thing.
Internalist theories of ethics are fine to the degree that an agent has an aversion to harming others, a desire to tell the truth, an aversion to breaking promises, a desire to repay debts, a fondness for liberty, an aversion to punishing innocent people, and the like. That is to say, internalist theories of ethics are fine for people who are already good.
However, they represent terrible advice when given to somebody who lacks a certain amount of virtue.
We can assume that, among the desires most people have, there is a desire to do that which is right (or, perhaps more commonly, an aversion to doing that which is wrong). If a person has such an aversion, then all we need to do is point out that X is wrong and he will acquire a motivating reason not to do X. It might not always be a sufficiently strong motivating reason. However, in some cases, it will be.
Now, we can tell such a person that to judge whether X is right, they need to focus their attention on their own feelings. "To determine the morality of your actions you should look inside yourself, at your moral sense, and determine if you are comfortable doing X. If you are comfortable with it, then it is permissible, and your aversion to doing that which is wrong should not be triggered."
Or, we can tell such a person, "How you feel about performing these actions is not relevant. What is relevant is whether people in the world have reason to encourage or discourage people from having those feelings. If they have reason to promote an aversion to this type of action, then you should consider the action to be wrong, and your aversion to doing wrong actions can be triggered."
Of these two options, the first option is going to get people defrauded, robbed, raped, murdered, enslaved, and otherwise abused. The latter option has the potential to reduce some of those frauds, robberies, rapes, murders, enslavements and abuse.
The moral question is not, "How do you feel about this?" The moral question is "How should you (and everybody else) feel about this?"
Moral sense theories – telling people that they can judge moral qualities by measuring their feelings – are not only wrong; they are dangerous. Externalist theories that tell people to look outside of their own feelings at the reason for action that other people have are a little safer.
Just a final note: I am not arguing that a proposition should be considered false if it is dangerous. I am arguing that false propositions exist on a scale - some false beliefs are more dangerous than others, and we have legitimate reason to be more concerned about dangerous false beliefs than harmless false beliefs.
23 comments:
...the common subjectivist denies that there are any external moral values to sense.
...
This, too, leads to the conclusion that "if it feels good to you, then do it."
I think that's an unfair simplification. You say "if there are no external values, then you must depend on feelings," but feelings aren't the only thing for an internalist to lean on, and "what feels good" is an even narrower concept.
Your implication is that if you ask a subjectivist what you should do, they'll answer "whatever feels good". They could just as well respond "what's your goal?".
...
I realize subjectivism might not be a "moral code" in the sense you're after, but I still think you should be careful about misrepresenting other viewpoints. It really hinders the pursuit of truth in general.
Yes, whilst I might have been complaining about Kevin misrepresenting DU, I also think that Alonzo often oversimplifies subjectivism, as he has arguably done here. I also hold that his critique would still stand given a more sophisticated subjectivism (from what I have read of his in the past). Still, can anyone here give a decent and concise version of sophisticated moral subjectivism to be addressed?
Note I fully support and agree with Alonzo's point against moral sense, as opposed to the arguably rather naive representation of moral subjectivism.
Alonzo, since Desire Utilitarianism is an externalist moral theory, how is the "correct" moral choice arrived at when its method of determining right from wrong results in a stalemate?
Let's say Mark (a Desire Utilitarian) is preparing to go back to school to finish his studies as a physician; a profession for which all would agree he is especially well suited. However, his mother has recently taken ill, and he's concerned that she will not receive the best possible care, and her property and assets may be at risk, if he is not there with her to manage these things personally. These concerns may be managed by other relatives, but not as well as he might like.
Both Mark and his college friends (also Desire Utilitarians) have carefully assessed the pros and cons of Mark's leaving college to look after his mother and her affairs, versus his returning to school and leaving these concerns in the care of others. Unfortunately, both parties have come to very different conclusions regarding which decision would indicate desires that tend to fulfill other desires. Each party believes their idea of good desires is to be praised, and the other camp's condemned.
Mark's friends think he should go back to school, since the desire to help others (a desire that tends to fulfill other desires) is best expressed by Mark's completing his education on schedule and becoming a physician. And besides, his mother will get along well enough without him there. Mark, meanwhile, has come to the conclusion that a desire to help others is best expressed by promoting the desire to sacrifice one's career aspirations in favor of helping one's own family.
I think that both Mark and his friends must also be relying on their own emotions (an internalist approach) to determine what they think is "right" or "wrong" in order to have come to hold opposing views on the matter. I think we would both agree that emotions (desires) serve as the impetus to take any sort of action, and that we should use reason (a logical thought process, based on evidence and argument) to decide what we should do. But then you say that we must look outside ourselves to make the final determination. At least that's what I think you're saying.
It seems to me that the psychology of moral decision making actually involves a constant tension between reason and emotion. Mark entertained all the same arguments during his decision making process that his friends defend, yet ultimately found them wanting. Mark first had the desire to help others, thought about how that desire would best be expressed in order to fulfill other desires, thought about how those other desires might themselves fulfill yet further desires in others, repeated this process when considering the fulfilling of each desire and its descendant desires that would also be fulfilled, and finally made his decision.
At every step of the process, Mark experienced an emotional reaction to his consideration of, and conclusion about, the effect of each of those desires being fulfilled, including how they would affect others who looked to Mark as an exemplar of moral behavior. If my assessment of moral decision making is correct, then people are all constantly moving back and forth between reason and emotion, regardless of the purported methods of whatever ethical system they claim to follow.
Alonzo wrote: 'The moral question is not, "How do you feel about this?" The moral question is "How should you (and everybody else) feel about this?"'
__________________________
The problem - I feel, a problem endemic to moral philosophy in general - is that the latter question is often determined by one's answer to the former question.
When we argue that a particular moral theory is correct, we argue this by showing that application of the theory leads to results in accordance with our moral sympathies. (The best way, for instance, to argue against a moral theory is to show that its application justifies something that is "obviously wrong.")
And the same applies to arguing about "what we should morally feel/think in x situation." Those whose sympathies lie one way will answer in a way that supports their moral sympathies. Others, who sympathize differently, will argue the "should" that supports their moral sympathies.
(If you doubt this, perform a thought experiment: how could you argue your moral case to someone who does not in any way share your moral sympathies? The only hope you have of convincing others of moral shoulds is if those people share at least some moral sympathies in common with you.)
Thus, the way I see it, the question "how should we all feel morally in x situation?" is dictated by our individual answers to the question "How do I feel about x situation?"
Question from the Studio Audience:
What happens when all desires become perfectly malleable?
Suppose that I invent a magic brainwashing device that lets me rewrite the desires of every person on the planet (including myself) to whatever I choose. This includes those that are not amenable to normal social forces.
What does DU have to say about such a situation? This is not a completely idle question, because the technology to physically modify and reprogram human brains to hold arbitrary desires will actually exist someday!
Still can anyone here give a decent and concise version of sophisticated moral subjectivism to be addressed?
I can't speak for all subjectivists, but I think a decent summary might be "if you decide to do it, then do it".
It sounds tautological because it's a response to a question that (in subjectivist terms) isn't worth asking, like a "no solution" in math terms. The question, of course, is "what should I do?".
By contrast, most people phrase subjectivism in negative terms, not neutral terms. For example, "don't worry about anything", "do what you want", "do what feels good", or "it's right to me". Instead of being negative, subjectivism is absolutely neutral.
Now since it says so little about "how to live", I think any kind of ethical system will fit into a subjectivist framework, but there's a caveat that the ethical system is only "useful", not "right". Some form of big-picture utilitarianism seems like a great ethical system to me.
BTW, I think "ethics" are more compatible with subjectivism than "morals", because morality seems to have connotations of being "true" and "right" where ethics just claims to be useful.
What happens when all desires become perfectly malleable?
That's a good question. I think a lot of systems seem so put-together because they never expect a "perfect hand" like that. At those corner cases, they start to feel dystopian, like in "Minority Report".
"Still can anyone here give a decent and concise version of sophisticated moral subjectivism to be addressed?"
I am not sure how "sophisticated" it is, but let me try.
I see moral subjectivism the way JL Mackie did: it is the negation of moral objectivism. Morals are not properties of the "out there" world and, as such, they are properties of the subject's mental world.
Put further, we may feel or think certain things to be right and good, but we are in error if we think there is any objective reason compelling others to see it the same way. No matter how knock-down or drag-out our argument is for the rightness of a particular thing, that argument will always be our opinion of the matter, not objective fact.
Subjectivists are not, as David seems to be getting at, nihilists. We have as strong ideas about moral oughts as anyone else, and we judge others and ourselves by standards. We also try to convince others to see things morally as we do.
What subjectivists can't do, though, is to think that any moral system - even our own - has necessary import beyond our own subjective minds. We may wish it were different, but in the absence of any great suggestion on how to detect the moral properties that some allege exist in the world, we see morality as a product of individual subjects making individual judgments.
Hiya Kevin. Quick thoughts from me:
we may feel or think certain things to be right and good, but we are in error if we think there is any objective reason compelling others to see it the same way.
Alonzo has said a few times that he never claimed it's possible to argue something into being moral. In fact, he has stated the exact opposite - you cannot convince someone to do good by logic. You must change their desires, and then use the tools of praise/reward and condemnation/punishment to do so.
in the absence of any great suggestion on how to detect the moral properties that some allege exist in the world, we see morality as a product of individual subjects making individual judgments
Part of the purpose of Alonzo's blog is actually demonstrating how to objectively determine moral properties. The great suggestion is that there are reasons to promote desires that tend to fulfill others' desires, and reasons to inhibit desires that tend to thwart others' desires. To say that there is no good suggestion of a way to determine moral properties is to say that this is another wrong suggestion. Based on observing the evidence in the real world, I think that this method is completely correct - or at least correct enough to work for all practical purposes.
What happens when all desires become perfectly malleable?
Suppose that I invent a magic brainwashing device that lets me rewrite the desires of every person on the planet (including myself) to whatever I choose.
I believe the answer would be that no one should ever use such a device except in the most extreme and obvious circumstances. IE: installing an aversion to rape/murder, or an aversion to responding to words with physical violence, or installing a love of truth. But the majority of people shouldn't ever be touched with such a device, and even extreme cases should only have a few extremely important desires/aversions installed/removed.
The reason for this is because a human will be in charge of this machine. As we all know, humans are corruptible, and the best way to avoid corruption is to not allow such power in anyone's hands. Furthermore, humans are fallible. We can make mistakes when determining which desires are good and which are bad. So the only desires that should ever be touched are those that we are absolutely certain about. Such as an aversion to murder.
Eneasz wrote: "The great suggestion is that there are reasons to promote desires that tend to fulfill others' desires, and reasons to inhibit desires that tend to thwart others' desires."
I understand that to be Alonzo's argument, but if so, I think Alonzo is confusing "convincing" with "objective." One thing I think is wrong with much moral philosophy is that some theorists confuse coming up with rhetorically convincing reasons for x with having shown x's objectivity.
Objectivity means "independent of subjective factors." But Alonzo's reasoning that seeks to justify the rightness or wrongness of certain actions is completely dependent on whether the hearer of such arguments judges them to be in accord with her sympathies. (His arguments will not be convincing to a Kantian, a natural law theorist, a nihilist, or a sociopath, because his reasons are not objective, but are totally dependent on accepting the premises of his argument.)
So, Alonzo does not seem to me to be able to prove the objectivity of morals (that they exist independently of subjective judgment), but only that moral acts can be bolstered by reasons.
Kevin
"I see moral subjectivism the way JL Mackie did: it is the negation of moral objectivism.
Interesting. I would not regard Mackie in particular as a good representative of typical (sophisticated) moral subjectivism. He rejects what I label (following Nagel) narrow moral objectivity - "intrinsic prescriptivity" - but he also rejects narrow subjectivism, which is where you seem to be arguing from. He quite specifically argues for the objectivity of value in pretty much the sense that Alonzo, eneasz, I, and others argue for here; this is a third alternative to moral subjectivism and moral objectivism.
Morals are not properties of the "out there" world and, as such, they are properties of the subject's mental world.
You are relying on a false dichotomy. "Morals" are properties of the relation between what is out there and what is in here.
"Put further, we may feel or think certain things to be right and good, but we are in error if we think there is any objective reason compelling others to see it the same way."
This is not what Mackie argues. He says we are in error if we think that there are objective - as in human-independent - features of the world that make things right and wrong, e.g. "intrinsic prescriptivity." He spends half his book investigating objective reasons of what we could call an extrinsic and relational kind.
No matter how knock down or drag out our argument is for the rightness of a particlar thing, that argument will always be our opinion of the mattter, not objective fact.
OK, now we get to some notion of moral subjectivism - here it is, and can only be, a "matter of opinion." My initial response: if this is all it is, then surely this is moral nihilism, which you have already responded to with:
"Subjectivists are not, as David seems to be getting at, nihilists. We have as strong ideas about moral oughts as anyone else, and we judge others and ourselves by standards. We also try to convince others to see things morally as we do."
But you have denied (unlike Mackie) any objective basis for doing this, and whether one is successful or not at convincing others has nothing to do with the facts of the matter, since you deny that there are any facts, just opinions. Within the constraints of human capacities, this means anything goes and the most convincing argument wins. I still see this as nihilism: morality as a set of competing fictions with no facts to evaluate them, where the best or scariest storyteller dominates.
"What subjectivists can't do, though, is to think that any moral system - even our own - has necessary import beyond our own subjective minds." I agree, but this seems to be a peculiarly distinct feature not shared by any other enterprise. Your justification is implicit in:
Granted its perceived importance: "We may wish it were different, but in the absence of any great suggestion on how to detect the moral properties that some allege exist in the world, we see morality as a product of individual subjects making individual judgments."
You are looking for an alternative in the wrong place. We agree it is an error to look for it purely outside ourselves, but you stop too soon. Our subjective thoughts do not exist in vacuo: our thoughts and feelings are dependent both on our inner brain states and on outer states of the world, and this is amenable to empirical investigation.
The biggest puzzle to me about moral subjectivism is the strange insistence that (as others have said) it is dependent upon only beliefs and desires - it is this "only" that I reject. Now, I see this as implicit in what I said above, but I do not know if you agree with this formulation or could present an equivalent formulation which we could pursue further. This is what I would address as sophisticated moral subjectivism, where any instantiations or versions based on it are, for our (at least my) purposes here, just details.
Here's a corner case in which DU gives a rather... unusual... result.
Consider a universe that contains only one agent. In this universe, oddly enough, all desires are either bad or neutral.
Suppose that the agent has desires D(1) through D(N). Now consider the effect of adding a new desire, D(N+1), to the agent. Adding this new desire cannot increase the level of fulfillment of D(1) through D(N), because the agent is already acting as best it can to fulfill them. Any change in behavior relevant to the desires that already exist can only weaken their fulfillment, because the agent is already acting to fulfill them to the best of its ability. Therefore, in a universe of one agent, adding a new desire is always bad for the old desires, and as such, no desires are ever good.
(Note that this only applies to desires that are terminal values, not instrumental values. If one has a desire to survive, and survival requires eating when hungry, then one will eat when hungry even without a terminal value that says "eat when hungry." Indeed, the desire to eat when hungry can sometimes conflict with the desire to survive. One could be offered poisoned food, for example.)
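Doug S.'s argument above can be sketched as a toy optimization model. (This is only an illustration of the reasoning, not anything from the post; the actions, desires, and fulfillment numbers are all hypothetical.) An agent picks whichever action best fulfills its current desires; adding a new desire can shift the chosen action, but can never raise the fulfillment of the original desires above the optimum the agent was already achieving:

```python
# Toy model of a one-agent universe (all names and numbers hypothetical).
# A "desire" maps each available action to a fulfillment level.
actions = ["read", "walk", "cook"]

old_desires = [
    {"read": 1.0, "walk": 0.2, "cook": 0.5},   # D(1)
    {"read": 0.3, "walk": 0.9, "cook": 0.4},   # D(2)
]
new_desire = {"read": 0.0, "walk": 1.0, "cook": 0.2}  # D(N+1)

def old_fulfillment(action):
    """Total fulfillment of the pre-existing desires D(1)..D(N)."""
    return sum(d[action] for d in old_desires)

def total_fulfillment(action, desires):
    return sum(d[action] for d in desires)

# The agent acts so as to best fulfill whatever desires it has.
best_without = max(actions, key=lambda a: total_fulfillment(a, old_desires))
best_with = max(actions, key=lambda a: total_fulfillment(a, old_desires + [new_desire]))

# Adding D(N+1) may change the chosen action ("read" -> "walk" here), but
# the old desires' fulfillment can only stay the same or drop.
assert old_fulfillment(best_with) <= old_fulfillment(best_without)
```

The inequality at the end holds by construction: the action chosen without D(N+1) already maximizes the old desires' fulfillment, so no alternative choice can beat it on that measure. The meta-desire exception discussed further down (a desire fulfilled by the existence of another desire, rather than by any action) is exactly what this action-only model leaves out.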
Doug S.
Very good.
Though, at this level, the challenge is not to show that there are circumstances in which the desire fulfillment theory of value has strange results. The challenge would be to show that some other type of motivating reason for action other than desires must exist.
As it turns out, there are exceptions to your claims.
Assume that an agent has a desire to reach destination D. Also, as it turns out, when a desire is fulfilled it produces a jolt of pleasure, and one of the agent's desires is a desire for pleasure.
In this case, a new desire - a desire to travel to D (as an end in itself, and not just as a means to get to D) would have positive value. It would fulfill the desire for pleasure during the trip, and not just when the agent reached D.
The trick here is that the agent has a desire Dx that cannot be fulfilled without the fulfillment of some other desire. In this case, it would be useful for the agent to have that other desire.
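Alonzo's exception can be sketched the same way. (Again, a hypothetical illustration: the step count and the one-jolt-per-fulfillment rule are assumptions made for the sketch.) With a standing desire for pleasure, and a jolt of pleasure whenever any desire is fulfilled, adding a desire to travel as an end in itself produces jolts throughout the trip rather than only on arrival:

```python
# Sketch of Alonzo's exception (numbers are illustrative assumptions).
TRIP_STEPS = 5  # steps on the way to destination D

def pleasure_jolts(has_travel_desire: bool) -> int:
    """Count jolts of pleasure, one per desire-fulfillment event."""
    jolts = 1  # the desire to reach D is fulfilled once, on arrival
    if has_travel_desire:
        # Traveling, as an end in itself, is fulfilled at every step.
        jolts += TRIP_STEPS
    return jolts

# The new desire serves the existing desire for pleasure, so adding it
# increases overall fulfillment - the exception to Doug's claim.
assert pleasure_jolts(True) > pleasure_jolts(False)
```

This is the structure Alonzo describes: a desire Dx (here, the desire for pleasure) that can only be fulfilled via the fulfillment of other desires, making those other desires worth having.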
Subjectivists are not, as David seems to be getting at, nihilists.
For the record, I meant a different "nothing"; not that we believe nothing, but that we aren't constrained to believe much of anything. The actual conclusions of subjectivism, relativism or what-have-you are important but actually very subtle.
I agree with most of what Kevin says, except in my statement I tried to under-emphasize how much we try to convince others, because people seem to have trouble with the concept that it's okay to have strong, well-reasoned opinions and not treat them as fact.
David wrote: "For the record, I meant a different "nothing"; not that we believe nothing, but that we aren't constrained to believe much of anything."
I see. My apologies. Agreed.
Ah, yes, you're right. I didn't take into account meta-desires (a desire that is fulfilled, directly or indirectly, by the existence of other desires). If desire D(1) is a desire to have desire D(2), then adding desire D(2) to the agent will, indeed, tend to increase overall desire fulfillment.
There's another important question that DU doesn't entirely answer.
What desires do people actually have? The various influences that have shaped our brains have left us with a horribly complicated mess that we barely understand. If we actually knew what we wanted well enough to say it, a Literal Genie wouldn't be a problem for most fictional characters. (In many ways, computers are like a literal genie: they are so stupid that they always do exactly what you tell them to do.) If we knew exactly what it was that we valued, well, we wouldn't spend so much time arguing about it, for one thing...
Hi Doug S.
Can I give a different answer from Alonzo's to your "corner case"?
Consider a universe that contains only one agent. In this universe, oddly enough, all desires are either bad or neutral.
Trying to make sense of this. They cannot be internally (self-referentially) "bad or neutral" or they would not be desires as such; at the least, the relation to the objects of desire would be different - the agent would not be motivated to bring any of these states of affairs about. So they can only be externally "bad or neutral," that is, "bad or neutral" in relation to each other. Since there is one agent, we are talking about some sense of prudential evaluation of any desire against its effects on all the other desires of the agent. OK?
Suppose that the agent has desires D(1) through D(N). Now consider the effect of adding a new desire, D(N+1), to the agent. Adding this new desire cannot increase the level of fulfillment of D(1) through D(N), because the agent is already acting as best it can to fulfill them.
Given my understanding, the agent, until the new desire arrives, (prudentially) should not be fulfilling any prudentially bad desire, only the prudentially neutral ones - there is no issue over those. Still, the agent is also seeking to fulfill the more and the stronger of their desires - and some of these might be (prudentially) bad.
Any change in behavior relevant to the desires that already exist can only weaken their fulfillment, because the agent is already acting to fulfill them to the best of its ability.
Given my understanding, this is more likely than not to be a (prudential) benefit to the agent, because:
Therefore, in a universe of one agent, adding a new desire is always bad for the old desires,
In this particular universe this could be a benefit to the agent. The new desire itself may be prudentially neutral, but suppose it becomes one of the more and stronger of the agent's desires, so that other, prudentially bad desires now go unfulfilled (there is no need for these demoted desires to be selected because they are prudentially bad; it is enough that they become some of the weaker desires that are not fulfilled). Then the new desire is prudentially good with respect to the agent's whole set of desires.
...and as such, no desires are ever good.
As you can tell from my long-winded analysis above, I do not think your conclusion follows.
Alonzo
Following up on your response to Doug S.'s "corner case".
First, I am puzzled as to why you do not invoke prudential arguments for cases like this, even if my particular analysis misread Doug S. in this case. Is there a reason for this?
Assume that an agent has a desire to reach destination D. Also, as it turns out, when a desire is fulfilled it produces a jolt of pleasure, and one of the agent's desires is a desire for pleasure.
So you are saying - given what follows - that the desire to get to D (D1) is an instrumental desire, a desire-as-means (a means to some other unspecified desire-as-end about destination D)? And there is another desire (D2) for a jolt of pleasure, which is a desire-as-end. OK?
In this case, a new desire - a desire to travel to D (as an end in itself, and not just as a means to get to D) would have positive value.
Why is this - let us call it D3 - an end in itself? Surely it is a desire-as-means to fulfill the jolt-of-pleasure desire-as-end D2? (It has positive value either way.)
It would fulfill the desire for pleasure during the trip, and not just when the agent reached D.
So you are saying here that D2 is fulfilled when desires-as-means as well as desires-as-ends are fulfilled. Fine.
However, all I see is that there are now two instrumental reasons to get to D. Both are desires-as-means: one fulfilling the unspecified desire-as-end that the desire-as-means D1 serves, which is conjoined with one instance of D2; and another desire-as-means whose role is to fulfill another instance of D2 (the destination D is irrelevant to this instance; the journey is what is relevant).
Also, with means-end rationality, without an end there is no motivation for the means - so do desires-as-means not really exist, being merely explanatory constructs?
Must be missing something here.
The trick here is that the agent has a desire Dx that cannot be fulfilled without the fulfillment of some other desire.
I wonder how this all maps onto Railton's classification; see Richard Chappel's Structure of Dynamic Desire.
In this case, it would be useful for the agent to have that other desire.
No argument with the conclusion, just the reasoning to get there.
Doug S.
There's another important question that [you think] DU doesn't entirely answer. ;-)
What desires do people actually have? The various influences that have shaped our brains have left us with a horribly complicated mess that we barely understand. If we actually knew what we wanted well enough to say it, a Literal Genie wouldn't be a problem for most fictional characters. (In many ways, computers are like a literal genie: they are so stupid that they always do exactly what you tell them to do.) If we knew exactly what it was that we valued, well, we wouldn't spend so much time arguing about it, for one thing...
This is the sort challenge that Griffin tried to meet with Informed or Rational Desire Fulfilment Theory, postulating an Ideal Observer that could rationally examine every desire and filter out the ones that would actually provide the expected fulfilment (including but not only inner satisfaction). This is the basis for the popular (but non-economist) "ideal preferentialism" version of preference satisfaction.
However it does not work in reality. You give one of the reasons.
We are not fully conscious of all our desires, nor, even when we are conscious of one, are we necessarily able to literally state it. That is just the way we are. The challenge is to deal with living and flourishing knowing this, and to try to do the best we can. How? Well, DU provides a framework for such a best effort, in the sense that the alternatives do not give the pragmatically best results and can be shown to be flawed both independently and with respect to DU (which explains why they are flawed).
You could always assume that the answer is somewhere in between the two extremes... this seems to be a good general rule in situations like this.
Aristotle, anyone? Or, by virtue of its being ancient, is this wisdom therefore useless?