It seems to me that "life" isn't an ultimate value (speaking everyday English here) so much as it's a basic value. A necessary value.
In his book Harm to Others: The Moral Limits of the Criminal Law, Joel Feinberg called this type of value 'welfare value'. Welfare values are particularly important in desire utilitarian theory.
'Welfare values' are those things that are useful for almost anything else a person may desire as an end. They are, in a sense, nearly universal means. They include life (without which it is often quite difficult to fulfill one's desires). They also include health, true belief (or education), liberty, money, property, and help from others.
One of the standard objections against desire utilitarianism is that it is nearly impossible to determine what has value. According to this objection, there are simply too many things to consider to be able to draw a conclusion that something is actually good or bad.
This is an objection borrowed from act utilitarian theory. Act utilitarianism says that we are to perform the act that has the best consequences. Yet, who can determine all of the consequences of an act? A simple act that looks good on the surface – saving a child from a deadly disease – might have dire consequences. This child might grow up to be the next Hitler. So, according to act utilitarianism, the act of saving this child was the wrong thing to do – it was not the act that had the best consequences.
Desire utilitarianism is not concerned with actions except in a derived sense (the right act is the act that a person with good desires would perform). Desire utilitarianism is concerned with good desires – desires that tend to fulfill other desires.
Welfare goods identify a list of objects that are almost universally useful in fulfilling other desires. Desires to protect and preserve welfare values, then, would be desires that tend to fulfill other desires. Desires to protect life and liberty would qualify.
In a desire utilitarian theory, a 'right to X' exists where 'people generally have many and strong reasons to promote a desire to provide people with X or, at least, an aversion to depriving people of X'. 'Welfare goods' provide a good list of things to put in for 'X' in this concept of rights. Thus, we have a 'right' in this sense to life, liberty, a minimal standard of living, an education, health care, and the respect of one's neighbors.
'Rights' in this sense, are not absolute. There are a number of instances in which one right can come into conflict with another, or where a right might run up against the laws of nature. Events could come up where a good person might need to take the life of an innocent person. I have used an example in the past of a child making a purchase from a vending machine that will set off a nuclear weapon in a distant city. The 'right to life' says that the good person would have an aversion to killing the child. However, this aversion may reasonably be outweighed by the good person's desire to prevent the deaths of millions of people in a distant city.
In other circumstances, the good person's aversion to depriving people of liberty might well run up against the need to draft people into the military to fight a particularly ruthless enemy, or to draft them into service during an emergency. We have reason to promote an aversion to answering mere words with violence (freedom of speech), but we still have reason to condemn the person who would yell "fire" in a crowded theater and to threaten to punish such individuals.
It is also relevant here whether government programs or free markets work better at providing people with food, education, and health care. If socialized medicine does not work – if it robs people of health care that they would have had in a free market system – then the 'right' to health care does not translate into a 'right' to government-provided health care, or welfare, or education. These conclusions depend on the facts of the matter as to which system actually provides people with these welfare goods.
One of the implications of this is that much of the debate that people engage in when it comes to social policy actually makes sense. The debate as to whether markets or government-run systems best provide people with welfare goods is an important debate to have.
Desire utilitarianism disallows either side from claiming that, for example, government systems shall not be used because "They are just wrong." This is because nothing is 'just wrong' in this sense. Something is 'wrong' only in the sense that a person with good desires would not perform the action, and desires are evaluated on their tendency to fulfill other desires. 'Just wrong', in contrast, is an intrinsic-value claim. It is an appeal to an entity that does not exist. No moral argument grounded on a false premise (a premise that “just wrongness” is real and is found in a particular family of actions) is a sound argument.
Yet, it is relevant that a person just 'does not like' depriving others of their freedom of speech (for example). Of course, he needs to go a step further and argue that it is a good thing that people 'do not like' depriving others of their freedom of speech. This he can do by arguing that an aversion to depriving others of freedom of speech will generally fulfill other desires, since restrictions on freedom of speech are typically abused by people who block the flow of information so that they can thwart the desires of others.
Act utilitarianism says that we are to optimize consequences to the best of our ability. This child might grow up to be the next Hitler. So, according to act utilitarianism, the act of saving this child may be terrible, but it is probably an excellent thing to do – terrible if and only if it is an act that has terrible consequences.
Your idea of act utilitarianism is overly simplistic. This theory acknowledges that what appears, with limited information, to be a good decision may in fact be catastrophic. An agent who uses good decision-making processes (ones which tend to optimize consequences) will sometimes make bad decisions - not just because he sometimes fails to apply the processes (though he probably does), but also because he can't foresee all the consequences. In this case an act utilitarian would probably give praise for the bad decision, because he wishes that the type of reasoning behind it would flourish even if it turned out badly in this instance.
Desire, rule, and pancake utilitarianism are merely subsets of act utilitarianism, when applied to the acts of promoting desires, rules, and pancakes.
There are two types of act utilitarian theories relevant to your comment. There is objective act utilitarianism (the right act is the act that actually maximizes utility), and subjective act utilitarianism (the right act is the act that appears from the point of view of the agent to maximize utility).
Each has problems in accounting for blameworthiness. Objective act utilitarianism makes no allowance for unforeseen consequences (e.g., the decision not to murder Hitler as a child). Subjective act utilitarianism gives a person no incentive to discover the right answer - it is too forgiving of those who fail to maximize utility.
Desire utilitarianism is not the same as act utilitarianism in that desire utilitarianism says that the evaluation of desires is primary while the evaluation of acts is derived from the evaluation of desires. Act utilitarianism says that the evaluation of acts is primary. This is a substantive distinction.
Rule utilitarianism collapses into act utilitarianism because the rule utilitarian cannot explain why, where following the rule will produce less utility than breaking it, we should follow the rule.
Desire utilitarianism has no problem explaining why, where following the best desires will produce less utility, we should not act against those desires. The answer is: we cannot do so. 'Ought' implies 'can' and 'cannot' implies 'it is not the case that one ought'. People act so as to fulfill the most and strongest of their desires given their beliefs, and saying that they 'ought not' do so is a waste of breath.
Which rules out act utilitarianism.
I explained blameworthiness in the act-utilitarian sense: "In this case an act utilitarian would probably give praise for the bad decision, because he wishes that the type of reasoning behind it would flourish even if it turned out badly in this instance."
Praising and blaming are acts and it seems to me that (objective) act utilitarianism properly accounts for them.
I think act and desire utilitarianism, properly defined, are really the same thing. AU requires a definition of utility, which amounts to the evaluation of fundamental desires (identifying what is intrinsically valuable). In both cases the evaluation of acts rests on this basic definition/evaluation. Once you've settled on a set of basic desires, you proceed in exactly the same manner, seeking the fulfillment of those desires (the maximization of utility). Any further emphasis on desires is shared by both theories (concern with good desires/welfare/etc). When I claimed that desire utilitarianism is part of AU, I was equating DU with "concern for good desires".
By saying that "the right [act] is the act that a person with good desires would perform", you define 'right action' in terms of 'good desires', with the added confusion of what a person with such desires would do. AU simply tells you to maximize the good, with the implicit qualifier "to the best of your ability". What's the advantage of a less concise definition of right action? All it does for me is emphasize another sense of 'right action'. The moral theories are indistinguishable to me.
Hi Afungus Amongus (great handle btw)
With regard to your argument that OAU adequately explains praise and blame - well, it does in OAU terms but not in DU terms, and that is the point. Praise and blame are acts; this is not in dispute. The question is whether they are appropriately applied or not. The criticism of AU is that it has impossible expectations and so makes infeasible demands on how to act compared to DU.
This leads to your post #4. You appear to be confusing Desire Fulfillment Act Utilitarianism (DFAU) with Desire Utilitarianism. Your comments are relevant to DFAU but not DU. A number of brief points:
1. For our concerns here, DU is a better form of Rule Utilitarianism (RU) since, as Alonzo noted, DU, unlike RU, does not decompose back to AU. You have not shown that DU is a type of AU, and until you deal with this point your conclusion is invalid.
2. The problem with AU is over what consequences to maximize - usually some notion of pleasure, happiness, or well-being. Apart from the subjectivity of some of these notions, some theorists do focus on (objective) desire fulfillment, such as Griffin - a type of objective preference satisfaction AU. Still, the focus is on the impossible obligation to maximize consequences in terms of desire-fulfilling desires, which is DFAU, not DU. DU recognizes this impossibility by noting that what people do, in fact, is seek to fulfill the more and stronger of their desires. So it uses praise and blame to encourage or instill desire-fulfilling desires and to discourage or extinguish desire-thwarting desires. As Alonzo noted, making desires rather than acts primary is a, if not the, substantive difference between AU and DU. Your equating these because they share a "concern for good desires" makes this question-begging for AU - how is 'good' defined? - whereas DU makes quite clear how this works. This looks like a (subtle?) form of equivocation on your part.
3. You appear to be imposing AU spectacles on your understanding of DU. Hence your point about DU appearing (to you) more confusing and complicated than just focusing on acts. DU makes the argument the other way around: it is AU that adds complications (as well as the impossibilities noted above), and it is simpler to focus on desires - specifically via their material effects on other desires. DU gives a more concise definition (derivation) of right action.
4. Yes, DU also has the qualifier "to the best of your ability". This is implicit in every decent moral theory, but in DU it is explicitly acknowledged, along with a recognition of its pragmatic limitations.
hi martino (thanks :) ),
I suspect your idea of AU's unreasonable expectations comes from a misunderstanding about what it means to 'maximize utility (to the best of one's ability)'. In one sense it means to attempt to find and execute the best of all possible actions at any decision point. Even if there is a best possible action this expectation is obviously impossible because the only way to know an action is the best is to compare it to an infinitely long list of possible actions.
In a more realistic sense it means to immediately execute the best of all clearly available actions, bearing in mind that looking for better actions is itself an action with its own set of expected consequences. After a sufficient amount of research and brainstorming you should pick the best of the obvious actions. Experience gives you some idea of how much thinking is useful. This perfectly reasonable expectation is what I take 'maximize utility' to mean. I think it is a fair interpretation of an ambiguous phrase which should be explicitly stated in any explanation of act utilitarianism. I don't think the atheist ethicist does it justice.
1. The point about DU being inescapably true strikes me as odd. It seems like DU must fail as a prescriptive moral theory if it is simply a description of how we in fact make all of our choices. It seems plausible that humans' actions always "fulfill the most and strongest of their desires given their beliefs". This seems to be the definition of strength with regards to desire (though something may be said for the role of desire-neutral instincts acting alongside desires). In any case it does nothing to justify the moral aspect of DU, which equates right action with "the act that a person with good desires would perform". Calling two different theories by one name and justifying only one theory is equivocation. Sure, desires determine (or at least strongly influence) acts and vice-versa, but how is this a moral theory?
2. You could deduce from my AU+(descriptive)DU position that I have some explaining to do. As Alonzo said, "People act so as to fulfill the most and strongest of their desires given their beliefs, and saying that he 'ought not' do so is a waste of breath". I won't argue that they ought not to do so. I will argue that given the beliefs arising from an evaluation of fundamental desires, the most and strongest of their desires will be to maximize utility.
I think any human who deeply evaluates his fundamental desires finds that he has inherently preferable states of mind, and (if he's not isolated) that these happen to occur in others also and are called 'happiness'. He finds nothing else of fundamental importance except for 'unhappiness' - more states of mind. If he believes in the existence of states of mind analogous to his own occurring in other people and in other times (perhaps aided by empathy), he combines all this happiness into what we call 'utility' and becomes an (objective, act) utilitarian. Which part of this is inconsistent with descriptive DU?
3. Relying on 'how people with good desires would act' complicates the definition of right action. Not only does it beg the question (what is 'good desire'?), it invokes unspoken (and seemingly false) assumptions about human nature. Doesn't instinct come into play independently of desire, even in our most rational decisions?
4. I find it fairly easy to state the 'best of your ability' qualifier within the AU framework. Because you'd expect it from any decent moral theory, there are times when it can be suppressed. Whether the statement is implicit or explicit depends on how fully you want to explain the theory. I don't see how this is a fault of AU, though emphasis on achievability may be a virtue of DU.
Hi again Afungus Amongus
With regard to 'maximize utility (to the best of one's ability)' you say "I think it is a fair interpretation of an ambiguous phrase which should be explicitly stated in any explanation of act utilitarianism." I agree, but for now it is still ambiguous, and this is an issue that AU theorists need to resolve rather than rely on different people's - such as yours - subjective (?) interpretations. Still, your interpretation does not answer the key point that Alonzo would make: we generally do not go about evaluating actions in terms of any utility (which you have still left quite unclear). This is not how humans work, as a study of cognitive psychology or the philosophy of action would show, and it is on this that Alonzo bases his approach. The impossibility I was implying was not just theoretical but pragmatic, and you have not yet dealt with this.
1. "The point about DU being inescapably true strikes me as odd." Huh? This is certainly not a position I hold, nor Alonzo. We are both looking for the best possible current theory, should a better one come around I would change to it. Where is there an argument that DU is "inescapably true"?
Instincts are not desire-neutral; they are fixed desires as opposed to malleable desires. The distinction is important in DU since only the latter could be the focus of a moral system.
"Calling two different theories by one name and justifying only one theory is equivocation. " DU is the collective label for this approach and yes there are a number of theories underlying this. Where did you get the impression there is only one theory as you see it? Have you read about this at all?
"Sure, desires determine (or at least strongly influence) acts and vice-versa, but how is this a moral theory?" When stated that way it is not yet a moral theory, nor is it claimed as such. The desire fulfillment theory of value and the externalist reason to act theory of prescriptions (as I name them Alonzo might differ here) serve as the basis for a moral theory - which is a subset of these - but these themselves far broader than that.
2. "You could deduce from my AU+(descriptive)DU position that I have some explaining to do." Yup :-)
"I think any human who deeply evaluates his fundamental desires finds that he has inherently preferable states of mind" Hang on a sec, the central thrust of DU is that it gives us a means to evaluate desires (desire-as-ends specifically) and you have not shown any alternative, let alone one that is better, here.
For the rest of your point 2, I am sorry but this looks like optimistic hand waving. I am neither an optimist nor a pessimist but strive to be a realist here. How does one ascertain these subjective states of happiness? I can't see how it can be done in a remotely reliable, empirical and unbiased fashion. Desire fulfillment avoids this problem entirely by explicitly and specifically focusing on the material effects of desires on all other relevant desires - that is, it empirically, provisionally and defeasibly shows how to evaluate desires rather than rely on subjective, error-prone judgements or guesses about happiness.
"Which part of this is inconsistent with descriptive DU?"
Happiness is about (subjective) states of mind, desire fulfillment is about (objective) states of the world.
3. "Relying on 'how people with good desires would act' complicates the definition of right action." You keep on saying this yet all I see is that your own responses (defense?) of AU are far vaguer, variable and subjective than anything in DU!
"Not only does it beg the question (what is 'good desire'?)" well this clearly defined in DU whereas an AU pragmatically maximal 'good action' is a far more problematic and subjective as you appear to be showing here!
"it invokes unspoken (and seemingly false) assumptions about human nature." Well speak them, what are these hidden and false assumptions?
"Doesn't instinct come into play independently of desire, even in our most rational decisions?" Huh? Instinct is a type of or subset of desire and is taken into account in DU. What theory of action are you using to make these points?
Finally, you still have failed to address a question made by Alonzo and reiterated by myself: that, for our purposes here, DU is a form of RU that does not degenerate into AU and so solves the challenge that led to the creation of RU but at which it failed. Or are you now conceding that DU and AU are substantively different?
afungus amongus
For the record, Martino does an excellent job of defending Desire Utilitarianism - sometimes better than I.
Still, I think that I would like to express a response to some of your issues in my own words.
First, AU's 'unreasonable expectations' come from the fact that humans strive to act so as to fulfill the most and strongest of their own desires. The only type of person who can have act-utilitarian best actions as their sole end (or objective) is a person who has only one desire - the desire to do the act-utilitarian best action. He can have no desire for sex, no food preferences, no affection for any particular person, no friends, no aversion to pain or any other preferences regarding comfort or discomfort.
If he had any desire other than the desire to perform the act-utilitarian best act, then this other desire (under certain circumstances) would cause him to sacrifice act-utilitarian utility for this other good. He might choose to have sex when sex does not maximize utility. He might choose to prefer the happiness of his children over the happiness of a stranger half-way around the world that he has never met. He might eat a pizza where the act-utilitarian best act would be to have a hamburger.
It does not matter how much thinking a person does before acting, his action will be the action that fulfills the most and strongest of his own desires, given his beliefs. A person cannot act contrary to his own desires (though, clearly, more and stronger desires can outweigh weaker and fewer desires). If this is possible, please explain how.
(1) Desire utilitarianism is built on certain psychological descriptive facts (e.g., each person seeks to fulfill the most and strongest of his desires, and acts so as to fulfill the most and strongest of his desires given his beliefs).
I do not agree with the claim that there is a mutually exclusive distinction between 'descriptive' and 'prescriptive' claims. Some descriptive claims are not prescriptive (e.g., oxygen atoms have eight protons). Others are both descriptive and prescriptive at the same time. "A should do X" means "If A would do X, then this will result in a state that fulfills the desires in question."
The context in which the sentence is used determines the desires in question.
Some "should" statements are not moral. (If you do not wish to be recognized while robbing a bank, you should wear a ski mask). Others are moral. The difference is that moral statements refer to the acts that a person with good desires would perform, whereas non-moral statements relate to some subset of desires which need not be good desires.
Still, both types of statements are, at the same time, descriptive and prescriptive.
(2) I think that it can be shown that the proposition, "People seek to act so as to fulfill the most and strongest of their desires," provides a better account of intentional actions than "People seek to act so as to maximize (their own?) happiness." The latter is disproved by repeated examples in which people perform actions that thwart their own happiness, but which realize something else that has value to them (realizes a state that fulfills a desire).
The first and major problem with any 'happiness' theory is that none of them can handle the 'experience machine' problem. Put a person in an experience machine and make him think that he is living in a world in which he is a famous and well-loved celebrity, and he would be happy. But it would not be real. Many people say that they would prefer real-world experiences to this type of false happiness. Desire utilitarian theory explains this choice - because desires are fulfilled when a state is created in which the proposition that is the object of the desire is made true. The experience machine can make only a subset of those propositions true. It only gives the illusion that other desires are fulfilled, and illusions are not good enough.
The second problem with happiness theories is that they cannot explain self-sacrifice, like throwing oneself on a grenade. Trying to explain this in terms of, "This will make me happier than any other option" seems to be a stretch. Explaining it in terms of, "I need to protect my friends" is far simpler.
A third problem comes from evolutionary theory itself. If you were to program a machine to avoid certain deadly states of affairs, which form of programming is easier? Would you program the machine to, for example, recognize changes in temperature and to stay in an area where the temperature is within certain limits? Or would you program the computer to link being within a certain range of temperature to some mysterious undefined quality called 'happiness' and then program the computer to 'maximize happiness'?
You can program an antelope to run from the lion because the lion's presence makes him unhappy and he somehow realizes that greater happiness can be acquired by moving away from the lion. Or you can program the antelope with the disposition to run from lions (without any intervening thoughts about 'happiness'). The second is the simpler and more reasonable account of the relationship between antelope and lions, and between us and many of our likes and dislikes.
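To make the contrast concrete, here is a minimal sketch in Python. The classes, behaviors, and numeric 'happiness' scores are invented purely for illustration; nothing in this discussion specifies them.

```python
# Hypothetical sketch of the two programming strategies described above.
# Neither class comes from the post; they only illustrate the contrast.

class DispositionAntelope:
    """Programmed with a direct disposition: if a lion is present, run."""
    def act(self, lion_present: bool) -> str:
        return "run" if lion_present else "graze"


class HappinessAntelope:
    """Programmed to score each option on 'happiness' and pick the maximum."""
    def happiness(self, action: str, lion_present: bool) -> float:
        # Some mapping from outcomes to a scalar 'happiness' must be specified;
        # that extra, hard-to-define quantity is what the post objects to.
        if lion_present and action == "graze":
            return -100.0   # eaten: very unhappy
        if action == "run":
            return -1.0     # running is tiring
        return 1.0          # grazing in peace

    def act(self, lion_present: bool) -> str:
        return max(["run", "graze"], key=lambda a: self.happiness(a, lion_present))
```

Both antelopes behave the same way; the difference is that the second needs an intervening 'happiness' variable that the first does without.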
(3) Complications are wrong only if unnecessary. Modifying mass according to a relationship to the speed of light certainly complicates physics, but it gives more accurate results than less complicated forms. The question, "What is a good desire?" is not "begged". It is clearly answered - a good desire is a desire that people generally have reason to promote. The only reasons that exist are other desires. So, a good desire is a desire that tends to fulfill other desires. The type of circularity one finds in this is called 'recursive', or a 'virtuous circle', which is also found in coherentist epistemology, logic, math, and any physical system that reaches and maintains homeostasis through feedback mechanisms.
Glad to hear from you again Alonzo! You both raise new and important points which I can't possibly reply to all at once in any depth. I'll start by looking at the intro sections, then in later posts I'll do 1-2-3-4. Feel free to respond before I'm finished working through your previous posts in order to counter any point I make or to direct me to something I should respond to. I promise I eventually will, and in the meantime I hope the back-and-forth doesn't get too confusing xP
A bit of (subjective) interpretation is necessary to flesh out a resilient moral code from a few memorable phrases. So I hope you'll allow 'AU' to abbreviate 'my subjective version of objective AU' just as I'm using 'DU' to say 'your subjective version of DU'. Hurts the eyes less.
Regarding the point that people generally don't think in terms of utility: happiness and pleasure come to mind quite frequently among ordinary people. When they think about how good a cinnamon roll would taste, how much they would enjoy sex, how much a broken leg hurts, or how emotionally painful it would be for your family if you were KIA, they are thinking in terms of utility. When they consider the despair of obesity, their partner's pleasure, the totally rad feeling of skating on a roof, or how happy you would be as a grunt, they are also thinking in terms of utility. Such instances of utility may occur to the average person whilst he evaluates actions such as eating a cinnamon roll, shagging his woman, doing super cool skateboard moves, and joining the army. In many situations people focus on happiness rather than desire, even if their sentiments lead to or result from desire. Everyone is at least cognizant of some instances of utility. AU only asks that people consider those instances that seem to be relevant to a given decision; at some point the utility of considering more consequences is outweighed by the utility expended by making these considerations. Experience gives you some idea of how much consideration is useful... perfectly reasonable expectation... blah blah blah.
The connection between human nature and desire hardly rules out AU. To the contrary, our natural tendency to value the happiness of everyone we care about - because we desire it in and of itself - greatly facilitates the practice of AU. The hard part is to care about everyone equally, but even this difficulty is well within our powers of empathy. Since AU would have you pragmatically focus your efforts on people whose preferences you already know (especially yourself), AU is well approximated by commonsense ethics.
On to Alonzo's explanation of the 'unreasonable expectations' of AU: that AU precludes any desires other than maximizing utility. Not quite. AU precludes fundamental desires besides those defined as utility. But it sets up a hierarchy of extrinsic goods (including desires) which are valuable in terms of their impact on intrinsic goods. A person who values (all) happiness as his sole end will tend to value sex, food, affection, and friends, but only insofar as they bring happiness to people. These are things that he truly desires only when they are means to the fulfillment of his fundamental desire (his sole end). Comfort, pleasure, and painlessness are intrinsically good (they are ends in themselves) - this is why I classify them as types of happiness.
Now our hypothetical AU agent may on some level desire, say, chocolate, but know that it would make his breath funky. If he reasons that the sensation of a delicious Hershey's bar is worth the risk of disgusting everybody on an elevator, and that all other effects on happiness are orders of magnitude smaller than these, then he will gobble himself some chocolate. It may be that the AU 'best act' was for him to kill the guy next to him in the elevator since that guy is actually a terrorist. But like I said before, AU has no expectation that our agent perform this 'best act'. It only asks that he do the act with the best expected consequences - eating his chocolate. It would be immoral for him to kill the terrorist (mistaking him for an ordinary dude) even though it was the 'best act'.
How is it possible for our agent not to act out his most potent desires (which, because he's done a thoughtful examination of his value hierarchy, happen to coincide with maximization of utility)? Suppose he instinctively murders the elevator terrorist without thinking, without any desire to. Maybe he has an uncontrollable spasm and the terrorist is morbidly allergic to chocolate. It was an accident; he failed to act intentionally. All he wanted was to enjoy his chocolate. But instead he killed a man.
In a sense he acted contrary to his desires, but I don't think this scenario is a serious objection to DU. All you have to do is define the agent's identity so that his action is a set of electrical signals fired out of his brain. The rest is all unintended consequences of the action. The neural impulses were perfectly in line with his desires. I concede that with proper definition of identity, instincts and other unintentional actions cease to be problems for DU.
DAMN it is late. That's all I can muster for now. Hope I've said something worth reading. Hooray for rational discussion.
1(martino) I haven't made any effort to prove that DU is a type of AU because I've been trying to pin down what, if any, prescriptive statements DU makes. Alonzo in his first response said:
ReplyDelete"Desire utilitarianism has no problem explaining why, where following the best desires will produce less utility, we should not act against those desires. The answer is: we cannot do so."
Here Alonzo claims that we inescapably follow the best desires. If the best act is the one which a person with the best desires would perform, then we inescapably perform the best act. If you define 'good' this way, you get nihilism. The only way his statement can make sense is if he's carelessly using moral language to make amoral statements; if he meant 'strongest' rather than 'best'. In these terms his statement expresses what I call 'descriptive DU'. So where's the moral content?
If the desire fulfillment theory of value can't be explained and clarified here, how can it be useful? I mentioned a way that desires are fundamental to AU (deciding what to maximize) and tried to compare this broader conception of AU (broader because it includes the act of defining utility), versus DU. From what I gather, DU is entirely compatible with if not identical to AU.
Alonzo sez: some claims are simultaneously descriptive and prescriptive. Yes; every prescriptive claim is descriptive also. When President Bush tells us to be vigilant, he's describing a preference of his and an option of ours. But you can distinguish between claims that are purely descriptive (Saddam Hussein has WMD's), and those which have a moral element (invade!). Descriptive statements, when presented in a certain context, may have moral implications, but they are by definition amoral. In this way (pure) description and (any kind of) prescription are mutually exclusive; this is the meaning I attach to 'descriptive DU' and 'prescriptive DU'. Fair enough?
We can define the morally relevant sense of 'should' in terms with the appropriate dependence on moral theory: "I think A should do X" means: "I think A can do X and I want A to do X". In case we need such a definition.
You appear to equate 'good' with 'fulfillment of desire'. But you leave out key words like 'net fulfillment' and 'maximize' that would specify what exactly is good and how you could, in theory, evaluate actions. People invariably fulfill their own strongest desires, but presumably you equate 'good' with 'fulfillment of (any) desire'. How do you mediate between conflicting desires? Maybe you maximize 'desire-strength-fulfillment' - multiply the degree of fulfillment by the strength, and sum over all affected desires. Is this your theory of value? Am I on the right track? If I haven't shown a competent grasp of DU ethics, it is because I've never heard of DU before. So help me out - all I see here are the faintest twinklings of a moral theory. What is the fundamental reasoning behind prescriptive DU and how does it differ from mine?
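A small sketch of the calculation being guessed at here. The function, the data structure, and the numbers are invented for illustration only; nothing in the thread specifies this formula.

```python
# Sketch of the conjectured 'desire-strength-fulfillment' score: for each
# affected desire, multiply its strength by the degree to which the act
# fulfills it (negative for thwarting), then sum. This is only the commenter's
# guess at a DU value function, not a formula stated by Alonzo.

from dataclasses import dataclass

@dataclass
class Desire:
    description: str
    strength: float  # how strong the desire is

def desire_strength_fulfillment(effects: list[tuple[Desire, float]]) -> float:
    """effects pairs each affected desire with a degree of fulfillment in
    [-1, 1], where negative values mean the desire is thwarted."""
    return sum(d.strength * degree for d, degree in effects)

# Example: an act that strongly fulfills one desire but thwarts another.
score = desire_strength_fulfillment([
    (Desire("stay healthy", strength=5.0), 0.8),
    (Desire("eat the whole cake", strength=2.0), -1.0),
])
print(score)  # 2.0
```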
Hi Afungus Amongus
First let us take a step back. From Mackie combined with Griffin (in my view anyway; I don't recall Alonzo mentioning Griffin), Alonzo obtained a definition, equivalence or reduction of generic good to "such as to fulfill desires of the kind in question". It depends on what the desires of the kind in question are as to what type of good we are referring to. The only sensible and plausible definition of moral good within such a desire fulfillment theory of value is that moral good is "such as to tend to fulfill all relevant desires" - that is, the desires of anyone who is affected by the agent acting to fulfill their desire. Similarly, moral bad is "such as to tend to thwart all relevant desires". There seems to be no justification for focusing on only one subset of these desires without relying on fictional and hence fallacious argument. Whether you agree with this definition or not is moot, as it is still the case that empirically these other desires are materially affected - this is not a coherentist nor subjective model. That is, this approach makes reference to, and only to, real features of the world - desires (certain brain states), actions and their material effects (on desires). You could still argue that this is just descriptive DU.
However, when looking at prescriptions one has to look at the reasons to act, since all prescriptions are providing reasons to act. And one has to find reasons to act that exist - if they do not exist, then they are an unsound justification for a prescription. Unless shown otherwise, desires are the only reasons to act that exist. Hence all prescriptions refer to the fulfillment and thwarting of desires and nothing else. If you want to use something else then you need to show it is a fact, not a fiction.
The above shows how DU accounts for the prescriptive statements that are already made and enables one to analyze anyone's prescriptions - are they relying on fictions and invalid reasoning, and so on? Using DU prescriptively makes it immune to these challenges (with a caveat below). The challenge then becomes instead issues such as whether one has properly identified the relevant desires and how well these can be specified - in terms of their material effects and so on - and it becomes a framework within which moral reasoning and analysis can occur. Just because Alonzo, myself or anyone else applies this does not mean we are right, as it would be a provisional and defeasible recommendation, but DU clearly shows the limits and types of review and revision that are legitimate and those that are not (and those too are open to review of course - this is the caveat).
"Here Alonzo claims that we inescapably follow the best desires. If the best act is the one which a person with the best desires would perform, then we inescapably perform the best act. If you define 'good' this way, you get nihilism." No we follow the more and stronger of our desires, whether they are the moral best or not. So if one succeeds at providing a suitable social environment where people have internalised a set of good moral rules as desires, then they will still follow them even if there is some utility that shows these are not the best rules in that situation. In other words this is a type of RU that does not suffer from the problem of classical RU of degenerating into AU.
"If the desire fulfillment theory of value can't be explained and clarified here, how can it be useful? " The above should make this clear plus I fail to see how you could conclude this even given the limited information provided by myslef and Alonzo. Whatever :-) I am glad you are interested in this topic.
"I mentioned a way that desires are fundamental to AU (deciding what to maximize) and tried to compare this broader conception of AU (broader because it includes the act of defining utility), versus DU. From what I gather, DU is entirely compatible with if not identical to AU."
Now, I disagree that we are utility optimizers - see for example Gigerenzer. That and other more recent cognitive psychology data refute the classical rational expectations models. So it is not just that AU requires one to have one and only one desire, "to maximize happiness or some other utility"; even if that were possible, our brains just do not work that way. Instead we seek to fulfill the more and stronger of our desires - using all sorts of methods, including all sorts of heuristics and other shortcuts, only some of which, and only some of the time, could be considered a sort of utility maximization. Indeed the irony of AU is that the evidence is that such maximizers are unhappier than satisficers - making AU even more infeasible! How can everyone maximize happiness when the process of maximizing makes everyone unhappy (relative to other methods)? AU is inherently infeasible since it contradicts our empirical knowledge of how decisions are actually made.
"How do you mediate between conflicting desires?" This is the basic framework to see and resolve these of course. This framework makes it possible to do whereas others obsfuscate or avoid the challange. This is an emprical approach so it cannot gurantee success but takes one further in dealing with moral challenges thatn any other approach, including AU, that I have examined. And yes many situations are resolvable. Alonzo has numerous posts on this maybe he could provide a suitable link, I suggest the one triggered by my conversation wiht db0 - can remeber what is was called. Now how does AU deal with this?
"If I haven't shown a competent grasp of DU ethics, it is because I've never heard of DU before. So help me out - all I see here are the faintest twinklings of a moral theory. What is the fundamental reasoning behind prescriptive DU and how does it differ from mine?"
Given that you have shown interest and have been willing to spend some time on this I humbly suggest that Alonzo points out some key posts of his that might illuminate you.
I have some posts of my own but my blog is currently in hibernation. Still, you could look at http://impartialism.blogspot.com/2008/06/sense-of-right-and-wrong.html and go back from there :-)
Hi AF (again!)
You raise many interesting points but I want to pick on just one (further) one: intrinsic goods or value. We hold that there is no such thing. There are means and ends - desires-as-means are instrumental and desires-as-ends are final, not intrinsic. All values that exist are extrinsic, that is, relational. Desires are not intrinsic values; nothing is an intrinsic value. The states of affairs that desires keep or bring about are valued, the methods and actions to realize these states of affairs are valuable, and both what is (dis)valued and what is (dis)valuable are related to the desires of the kind in question.
I humbly ask that if you think anything has intrinsic value, please show it rather than assume or assert it.
'ello martino (not sure AF is the best two letters to call me by considering they're Alonzo's initials... how about 'Fungo'? :P)
I'll start at the bottom and see where that takes me; in the interest of continuity I might skip around. You say all values are extrinsic, but also that some values are 'final' and 'ends'. Bwuh? That's the definition of intrinsic: valuable in and of itself. Of course values are related to desires: 'value' is being used as a synonym for 'desire'! We each are more or less aware of a hierarchy of values, some of which are fundamental/intrinsic/final/ends while the rest are derived/extrinsic/means. The alternatives seem absurd: circular loops of value; endless chains of value; completely intrinsic value. Alonzo advocates one of the above.
3(Alonzo) "a good desire is a desire that people generally have reason to promote. The only reasons that exist are other desires. So, a good desire is a desire that tends to fulfill other desires."
I think you need to take hierarchy into account. Suppose I want cookies and Bob will give me some in exchange for pocket lint, Sally will swap pocket lint for pennies, and Jim will trade pennies for fingernail clippings. I now desire cookies, lint, pennies, and fingernail clippings. Now suppose you present me with (A) a plate of cookies and (B) a fingernail clipper, but will only give me one or the other. My desire for A is good because it would certainly fulfill my cookie lust. But my desire for B would be better because it would invariably fulfill all four of my desires.
I claim that the above conclusion is absurd because desires actually form a hierarchy. Every object in my example was only valuable insofar as it led to cookies. Thus, desires A and B were about equally good even though A tended to let other desires become moot whereas B tended to fulfill them. B might be preferable when you consider extraneous factors such as the option of flipping out the little nail file in the clippers and stabbing you and Bob so I can take all the cookies. But you get the point ;)
I suppose an infinite hierarchy structure is possible, but I believe my example shows that at the very least some notion of relative fundamentality is needed.
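A toy sketch of this point, with every item, chain, and counting rule invented purely for illustration: counting fulfilled desires flatly makes the clipper look better, while tracing each derived desire back to its end rates the two options as serving the same single end.

```python
# Illustration of the cookie example (all values invented for the example).
# Flat counting: every fulfilled desire counts equally. Hierarchy: each derived
# desire is traced back to the end it serves, and only ends are counted.

desires = {
    "cookies": {"derived_from": None},        # the end
    "lint": {"derived_from": "cookies"},      # wanted only as a means to cookies
    "pennies": {"derived_from": "lint"},
    "clippings": {"derived_from": "pennies"},
}

option_a = {"cookies"}                                   # the plate of cookies
option_b = {"clippings", "pennies", "lint", "cookies"}   # clipper starts the trade chain

def flat_count(fulfilled: set[str]) -> int:
    """Count every fulfilled desire, regardless of why it is held."""
    return len(fulfilled)

def ends_reached(fulfilled: set[str]) -> set[str]:
    """Trace each fulfilled desire up its chain and keep only the ends."""
    ends = set()
    for d in fulfilled:
        while desires[d]["derived_from"] is not None:
            d = desires[d]["derived_from"]
        ends.add(d)
    return ends

print(flat_count(option_a), flat_count(option_b))      # 1 vs 4
print(ends_reached(option_a), ends_reached(option_b))  # both {'cookies'}
```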
2(Alonzo) I have thought about the major objections to AU, and they all are basically the same argument: AU recommends extreme actions in extreme situations, therefore AU is absurd!
Experience machine: yes, experiences are really what count. If this is the Matrix, but we can't escape or find out, we are really no worse off than if we weren't in the Matrix, and we have no reasons to act differently. If we could engineer utopia Matrices and all plug in while the superhuman robots manage our resources and keep doing science for us and we knew that people could plug in and unplug at will without harm, why not jack in?
Certainly some people would remain in the real world. The criticism that AU isn't a perfect model of human behavior applies equally well to prescriptive DU: people do not always strive to synergize their desires, but they should.
Utility monster (I know you didn't bring this up, but it is interesting and relevant): yes, total happiness is really what counts. If we had some way of knowing that person X felt happiness and sadness billions of times stronger than the rest of us, we'd have good reason to value his happiness far above ours. We'd also have good reason to find ways to copy his gift so everyone could feel happiness so intensely.
Mere addition (ditto): yes, total happiness is really what counts. People whose lives are barely worth living contribute positive goodness by the definition of life being 'worth living'.
Argument from altruism: Sometimes self-sacrifice is worth it. Are you confusing utilitarianism with egoism?
Argument from self-defeating practical issues: A robot with the intelligence to understand the bigger picture will be far more successful in unplanned situations. He may shove the context away on hard disk to free up RAM for other stuff, but 'maximize happiness' is much more useful than 'keep temperature comfy' to a robot unless he happens to be an air conditioner.
AU presents no self-defeating practical issues. When explicit consideration of happiness would be self-defeating, the way to maximize happiness is to run on autopilot. Hopefully before then you've set up an autopilot that approximates AU reasonably well. Experience gives you some idea of how much consideration is useful...ZZZ
afungus amongus
I want to thank you for some interesting questions. It has been a while since I have had somebody press me to defend these ideas. It is always a useful exercise.
An 'intrinsic' property is, by definition, a property that is wholly contained within the entity it is a property of. It makes no reference to anything extrinsic or external. So, intrinsic properties (making no reference to anything external) and relational properties (describing a relationship to something external) are mutually exclusive categories.
Value is a relationship between the object of evaluation and a set of desires - something external.
I do not argue that 'value' is synonymous with 'desire'. In fact, I would deny that relationship. 'Value' is synonymous with 'reasons for action'. It just so happens to be the case that desires are the only reasons for action that exist. People have postulated all sorts of other types of reasons for action - reasons derived from pure logic, reasons that God or nature have built directly into the universe - but none of those suggestions actually refer to anything real.
(If you go far back into my writings - 2 or 3 years or more - then you will find instances in which I do say that value is synonymous with desire. I was wrong.)
As far as value being necessarily related to desires, most analysis of value would deny this relationship. G.E. Moore argued that there is a distinction between what is desired and what is desirable, and that position is still widely accepted (perhaps universally accepted) among value theorists.
The specific relationship that I describe . . . a state of affairs S is good only insofar as there exists a desire that P and P is true in S . . . is almost unheard of.
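A rough rendering of that relationship in symbols (my notation, not anything from the post; it states only the threshold version and ignores degrees of value), where D is the set of desires in question:

```latex
% Rough formalization of the relation described above (not the post's notation):
% a state of affairs S has value relative to a set of desires D just in case
% some desire "that P" in D has its propositional object P true in S.
S \text{ has value relative to } D
  \iff
  \exists P \,\bigl[\, (\text{desire that } P) \in D \;\wedge\; P \text{ is true in } S \,\bigr]
```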
Desire utilitarianism holds that there is a distinction between value-as-means and value-as-ends. The former has value because it brings about a state in which P is true. The latter has value because it IS a state in which P is true. However, it is still not the case that S's value is intrinsic. S's value depends on an external fact (the desire that P), without which the value will cease to exist without any change in S.
The experience machine problem for happiness theory does not begin with, "Assume that you are in an experience machine but do not know it and cannot find out." It begins with, "Assume that you are NOT in an experience machine but you are given an option to enter one," or "Assume that you are in an experience machine but you have found out and you are given an option to leave."
People choose not to enter an experience machine even when it promises them greater happiness, or choose to leave for the same reason.
It does not matter that you can find exceptions. Some people do, in fact, desire mostly those things that an experience machine can make real. The problem is coming up with a theory that explains why some people choose the lesser happiness of reality versus the greater happiness of the machine. Happiness theory does not explain why people sometimes choose the option with less happiness. Desire fulfillment theory does.
Desire utilitarianism does not say that people always synergize their desires. It says that they always act so as to fulfill the most and strongest of their desires given their beliefs, and seek to act so as to fulfill the most and strongest of their desires. Synergizing desires is useful only if it fulfills more and stronger desires than not doing so. Sometimes, it is not going to be worth the effort, or the cost, in which case it is not the case that a person should synergize their desires. However, it is still true that people have reasons to promote desires that fulfill other desires and inhibit desires that thwart other desires.
I agree that sometimes self sacrifice is worth it. The problem is in explaining how. Desire utilitarianism does so by saying that a person can have a 'desire that P' such as 'a desire that my children are alive'. This desire gives them a 'reason for action' to realize states of affairs in which P (my children are alive) is true. Happiness theory has a lot more work to do to get to the same result. Ultimately, all else being equal, the simpler theory is best.
The only thing my robot needs in order to "keep temperature comfy" is a thermometer, the ability to move, and some programming that says, "If it is too hot, move in the direction of where temperatures are cooler - if too cold, move in the direction where temperatures are warmer." Give me an example using happiness that is as simple - particularly given the problem of specifying exactly what happiness is. What is it that you are having your machine measure, and how does it measure it?
As for 'running on autopilot' - you have just introduced yet another entity into your ontology that you need to explain. What is 'autopilot'? How does it work? How do you set one up?
Desire utilitarianism does not need such an entity.
hi again. I've learned a helluva lot from you guys in these posts and in many ways I'm seriously impressed by DU. Your account of morality as pragmatic advice for people who are all slaves to their desires is the context in which I now understand AU.
Defining 'intrinsic', 'relational', and 'value', that's all well and good. I see your point about ends being such due to certain relationships, and therefore not being 'intrinsically' valuable. But you do acknowledge a basic unit of value - the conjunction of X with a desire for X, for any event or situation X. My list of synonyms was a tad imprecise but the essential idea of a value hierarchy (a set of ends and means where the value of means lies entirely in the ends) we both agree to be the case.
A dichotomy for you:
(1) Everyone always performs the DU-best actions possible.
(2) Someone sometimes performs a DU-suboptimal action.
I feel like you've been dancing between these prongs. If (1) then your theory is amoral, or at least it fails to provide useful moral advice. If (2) then your point that people sometimes perform AU-suboptimal actions has been hypocritical. Why do you expect moral theories to be 100% accurate accounts of human behavior? That would undermine the purpose of morality - to guide action.
Statement you dismissed:
"If this is the Matrix, but we can't escape or find out, we are really no worse off than if we weren't in the Matrix"
Corollary:
If we were to enter the Matrix (even if we could never escape or find out) we would be no worse off.
The problem of explaining why some people prefer that humans have 'worse' experiences (in favor of some non-experiential value) is one of psychology and sociology, not ethics. Likewise for the problem of explaining why some people would choose actions that a person with 'bad' desires would perform. Both are related to ethics, but neither is a valid objection to the relevant moral theory. We only expect that advice from a moral theory be practical, not that it be unavoidable.
"However, it is still true that people have reasons to promote desires that fulfill other desires and inhibit desires that thwart other desires."
And it is still true that people have reasons to evaluate their desires and look for patterns there (e.g. that all of them are real, experiential, and happiness-increasing). A large part of morality is the setting up of one's autopilot (their subconscious desires or instincts) - based on rules and patterns - as a substitute for impractical reevaluation of ends/fundamental desires. An autopilot is even more necessary for DU (as construed above) than for AU because the impact of actions on desire synergy is much less obvious than on happiness. More importantly, DU so construed is a form of rule utilitarianism and as such reduces to AU. So this cannot be the moral theory you call DU. What is it then?
A person can desire that his children be alive, but he would probably desire that they die rather than suffer tremendous pain or cause great evil. A rule such as 'maximize happiness' may in fact sum up one's desires on the matter. If this is the case it is much simpler to give the rule than to enumerate every instance you'd apply the rule.
My happiness robot is just your staycool robot, but with an arbitrary number of measurements all related by one variable corresponding to the robot's fundamental goal. In the case of exactly one measurement they are identical. The rules for weighing measured variables need to be sufficiently broad in scope that none are in unresolvable conflict, but an overly broad rule ('fulfill your programming') would be useless.
'Maximize happiness' is my rule. It is useful because it narrows down from the list of possible desires those which I happen to hold and gives the rules for weighing various desires against each other. Morality is an inherently subjective matter, but I know that your rule is NOT 'fulfill Alonzo's desires' (useless: too broad) and it is NOT 'synergize Alonzo's desires' (devolves to AU: too specific). What is your rule?