Comments on Atheist Ethicist: "A Desire that a Child be Tortured" (Alonzo Fyfe)

I appreciate that my question was important enough to get its own post, but I still have some questions. Desirism says that the right action is the one that a person with good desires would perform. This raises the question: what are good desires? Good desires are desires that tend to fulfill other desires, in virtue of which the people with those other desires have reasons to promote that desire universally using the social tools of reward/praise and punishment/condemnation. Now let's imagine ourselves in a society where the majority of people desire to thwart the desires of a small minority, and also desire to see the desires of the minority thwarted. When you add up all reasons for action (all desires), the majority of desires are that this minority group have their desires thwarted. Desirism says the right act is the act that a person with desires that tend to fulfill other desires would perform. It seems, then, that the right act would be to thwart the desires of the minority. You might say, "But the desire to thwart the desires of the minority is a bad desire; that is, it tends to thwart other desires." However, desires themselves are not intrinsically good or bad; what makes a desire good or bad depends on what the desires of others are. So in this case, does the desire to thwart the desires of the minority tend to fulfill or thwart other desires? It certainly thwarts the desires of the minority, but those are only a fraction of all desires that exist.
The desire to thwart the desires of the minority fulfills the desires of the majority, who desire that the desires of the minority be thwarted. Therefore, in this case, the desire that tends to fulfill the desires of others is the desire to thwart the desires of the minority. Let's assume the desire that the desires of the minority be thwarted is a malleable desire. Are there reasons to promote it? It seems there are. Promoting the desire to see the desires of the minority thwarted will thwart the desires of the minority, of course, but it will fulfill the desires of those who desire to see the desires of the minority thwarted, which make up the majority of all desires. I can present this as an argument:

1. The majority of people desire to see the desires of a certain minority thwarted. (premise)
2. The desire to thwart the desires of the minority tends to fulfill more desires than it thwarts. (from 1: there are more desires that the desires of the minority be thwarted than there are that the desires of the minority not be thwarted)
3. The desire to thwart the desires of the minority is a good desire. (from 2 and the definition of a good desire)
4. The right act is the one a person with good desires would perform. (premise)
5. A person with good desires would have the desire to thwart the desires of the minority. (from 3)
6. The right act is to thwart the desires of the minority. (from 4 and 5)

-- Shaun (2016-09-03)

//It's probably worth noting that we're likely talking about two different versions of the word "should." Your "should" is unconditional. My "should" is conditional.
*If* you value something, then you pursue actions that will lead to that something.//

Actually, while we may be talking about different scopes, my "should" is conditional as well. "Should" can be roughly translated as "reasons exist to". So if we say "I should repay debts", it is the same as saying "reasons exist to repay debts". However, while an agent only acts on his own desires, the reasons that an agent HAS are only a small fraction of the reasons that EXIST (the desires of others). So the way I am using "should" is conditional on the desires of the population, not just the individual. But it is still conditional.

//I don't believe that the unconditional should has a basis in rational thinking.//

We are in agreement here.

//I didn't say that people all act to maximize their own utility, nor did I say that all decisions are based entirely on them (some decisions are, dare I say, irrational; others are based on heuristics). What I did say about utility is that they care about it.//

Perhaps I have a different concept of "utility" than you. Can you please explain how you are using it? It sounds like you are using it similarly to how I would use "desires-as-means" versus "desires-as-ends". In which case, you would be correct: one values utility as a means to an end-desire.

So when you talk about "maximizing utility", are you saying the same as "carrying out action plans that fulfill the most and strongest desires/preferences"?

//That's my point! It's not a viable question to ask.//

Okay, well, I'm not sure why you mentioned this in the first place, then.
What does having only one option have to do with the topic at hand?

-- Anonymous (2016-09-02)

>The question I asked was why *should* we value utility. I ask because you seem to be implying that utility has intrinsic value, that it is the thing we should all strive to maximize.

I don't know if you should value utility. If you don't, and you are surrounded by people who won't punish you for not acting like you do, then maybe you shouldn't.

It's probably worth noting that we're likely talking about two different versions of the word "should." Your "should" is unconditional. My "should" is conditional. *If* you value something, then you pursue actions that will lead to that something.

I don't believe that the unconditional should has a basis in rational thinking.

>It is the case that I can possibly not value utility to myself. If I desire that someone else's life be preserved at the cost of my own life, what utility is that to me?

I didn't say that people all act to maximize their own utility, nor did I say that all decisions are based entirely on them (some decisions are, dare I say, irrational; others are based on heuristics). What I did say about utility is that they care about it.

>No utility is served by Alph pressing the button.

Alph has more than one desire. His end goal (desire) is that Pandora continues to exist. His proximal goal is to press the button.
He earns utility for the completion of each of these goals.

The only way to eliminate proximal goals is to set up a brain-wave detector that determines the desire of the person and then acts on it, in which case the entire Alph-brain-wave-detector system becomes the agent under consideration, and all previous objections continue to apply.

Alph still cares about utility. In considering a plan in which he saves Pandora from being destroyed, he earns utility.

If he didn't consider that plan, then he isn't an agent capable of intentional action (intentional action requires that a person imagine a set of future states that differ based on their action).

>I'm not sure how this analogy applies. We wouldn't say a person falling out of an airplane "ought" to hit the ground (unless we just mean "it is the expected outcome").

That's my point! It's not a viable question to ask.

>But the function of morality is NOT (as is usually presumed) to judge past behavior. It is to alter the likelihood of that decision being made in the future.

I'm not sure how past and future behavior are related to what I was saying there. What I am saying applies to any event in which there is only one option.

>Okay, I see you are extending the definition of "preference" to include a proposition and its negation. This is not usually how it is used — it is typically used in reference to two competing positive choices — but that is fine. We can just equate desires with preferences for the sake of this conversation.

We can stick with the stricter definition if you want, but that will add needless complication, because (A or not A) is essentially equivalent to (A or B), where B is any option that is mutually exclusive with A.

>No, again, because the strength of a desire is based on biology, not utility.

But utility is also based on biology.
So, a desire being based on biology is not an argument against it being based on utility.

-- Pngwn (2016-09-01)

@Pngwn
>Whose utility?

Anyone's. The question I asked was why *should* we value utility. I ask because you seem to be implying that utility has intrinsic value, that it is the thing we should all strive to maximize. The thing is, maximizing utility has NO INTRINSIC VALUE. One must first *value utility* in order to want to maximize it. Desires, on the other hand, are innate — they are part of our biology, they are the "is" — and are therefore foundational. I don't value my desires. They just are. I may desire to cease to exist. But ceasing to exist does me no utility.

>The question of whether you should value your own utility is moot; you do, and will always, value your own utility.

This is incorrect, and I think it is the heart of the disagreement. It is possible for me not to value utility to myself. If I desire that someone else's life be preserved at the cost of my own life, what utility is that to me? Take Alonzo's example of Alph on a distant planet with the moon Pandora orbiting it. Imagine Alph has only one desire: that "the moon Pandora continue to exist". Alph then finds out that a giant meteor is going to destroy Pandora. Alph is then presented with a button that will destroy the meteor, but at the cost of Alph's life. No utility is served by Alph pressing the button. In fact, all utility will immediately cease to exist. Nonetheless, Alph has every reason to press the button. If you are correct that utility should be maximized, then Alph should not press the button.

>And if you only can do one thing, asking whether you ought to do it or not doesn't seem like a viable question. Ought a person falling out of a plane without a parachute hit the ground? …

I'm not sure how this analogy applies. We wouldn't say a person falling out of an airplane "ought" to hit the ground (unless we just mean "it is the expected outcome").
When you say we can do only one thing, I take that to mean that, based on one's current set of desires and beliefs, one could have done no other thing at that moment. But the function of morality is NOT (as is usually presumed) to judge past behavior. It is to alter the likelihood of that decision being made in the future.

>From my perspective, because I prefer to not see you get hurt, I should try to convince you, through punishment or reward if necessary, not to say that word. Similarly, I should try to stop Kyle from erupting in violent rages.

Okay, I see you are extending the definition of "preference" to include a proposition and its negation. This is not usually how it is used — it is typically used in reference to two competing positive choices — but that is fine. We can just equate desires with preferences for the sake of this conversation.

>It seems to me that desires can be constructed entirely on top of utility, the same way that preferences can. If you expect a utility of 3 by performing action A, and a utility of 5 by performing mutually exclusive and exhaustive action B, then you desire B strongly, and A weakly; or you prefer B to A.

No, again, because the strength of a desire is based on biology, not utility. I may strongly care for a piece of paper written by my grandmother and would go to great lengths to preserve it, but the paper brings me very little utility.

-- Anonymous (2016-09-01)
>Why should we value utility?

Whose utility? Mine, yours, Fred's? The question of whether you should value your own utility is moot; you do, and will always, value your own utility. And if you can do only one thing, asking whether you ought to do it doesn't seem like a viable question. Ought a person falling out of a plane without a parachute hit the ground? They don't desire to hit the ground; they don't get utility from hitting the ground; but under both (or any!) moral systems, the state of the world will be the same.

Now, whether you should value anyone else's utility is a different matter. I can't give you an objective argument for why you should value another person's, but I can give you a subjective one.

Example: Meet Kyle. Kyle's a weird person. He erupts into a violent rage and murders any person he hears say the word "Satan." Thus, it is in your best interest not to say the word "Satan" around Kyle. So, if you prefer not to die, you should avoid that word. That's from your perspective.

From my perspective, because I prefer not to see you get hurt, I should try to convince you, through punishment or reward if necessary, not to say that word. Similarly, I should try to stop Kyle from erupting in violent rages.

But in no case can I actually convince you to adopt one of my preferences without altering one of your already existing preferences through rewards and punishments. So, if you don't care about Kyle hurting people in the future, I can't convince you to help me stop him, except through a threat of punishment or a promise of reward.

All I can do, as an outsider, is alter your future plans by informing you that helping me stop Kyle will increase your net utility (because I won't punish you, or because I will reward you).
But I can't use an ethical system to force you to (which is distinct from convincing you of my ethical system, which would rewrite your preferences to be closer to mine).

>Preferences are not motivations for action. They are discerning principles between different action-plans for fulfilling desires.

It seems to me that desires can be constructed entirely on top of utility, the same way that preferences can. If you expect a utility of 3 by performing action A, and a utility of 5 by performing mutually exclusive and exhaustive action B, then you desire B strongly, and A weakly; or you prefer B to A.

It also seems to me that any full set of desires can be ordered by preference (e.g., A > B = C > D ...), such that preferences and desires are different ways of defining the same system (much like how you can count on your fingers or your toes, but in the end, it's a de-abstraction of numbers).

-- Pngwn (2016-08-31)
@Pngwn

Why should we value utility? If there are no intrinsic values (not desire fulfillment, not utility, not anything), what makes creating the Utility Monster a "better" option? While I agree that, on the utilitarian model, it is the correct choice, what reason exists, or could exist, to prefer utilitarianism?

And why should I care about others' preferences? What reasons exist to change my mind? Desirism answers this by citing the desires of others as motivation to bring about change in you. Preferences are not motivations for action. They are discerning principles between different action-plans for fulfilling desires. This is a subtle but critical difference.

-- Anonymous (2016-08-31)

>Rather, the right act is the act that a good person would perform, where a good person is a person with good desires and good desires are desires that tend to fulfill other desires

So, in desirism, a right act is that which a person with desires that tend to fulfill other desires would perform.

>A utilitarian - seeking to maximize desire fulfillment - would say that two such villages are better than one because there is more overall utility. Desirism looks for reasons to bring the second village into existence from among the current desires. If there is none, then bringing the second village into existence has no value.

Thank you, this is very clear.

>Utilitarianism cannot handle Robert Nozick's Utility Monster - a hypothetical member of a community who gets a huge amount of utility from the suffering of others.

Sure it can.
It answers that the utility monster should be created (as long as it is for some reason impossible to make a similar utility monster, or another set of agents, that could reap just as much utility without the suffering of others).

It would be a poor equilibrium, but if the only options were (create torturous utility monster) or (not create any utility monster), then the first is the better option. The only reason people believe that this defeats utilitarianism is that it conflicts with our moral intuitions, which both you and I agree are less than reliable.

To make the point more intuitive, consider a single human who must dissolve a few thousand bacteria in acid in order to synthesize a drug that will save their life. The drug gives them much more utility than the sum of the disutility of the bacteria, and thus the situation is equivalent to Nozick's Utility Monster.

Yet our intuition in the first situation is the opposite of our intuition in the second.

-- Pngwn (2016-08-31)

There are two primary distinctions in utilitarian theory.

One distinction concerns the object of evaluation. Here we have act-utilitarianism (the primary objects of moral evaluation are actions) and rule-utilitarianism (the primary objects of moral evaluation are rules, and actions are evaluated by their conformity to the best rules).

The other distinction concerns what is to be maximized. Common versions here nominate pleasure and the absence of pain, happiness over unhappiness, and eudaimonia.

Preference utilitarianism is most commonly understood as an act-utilitarian theory that maximizes preference satisfaction: the right act is the act that satisfies the most preferences.
Peter Singer, for example, argues that we should be impartial towards the preferences of others - that satisfying the many preferences of strangers is better than satisfying the fewer preferences of family members or close friends.

Desirism rejects the thesis that the right act is the act that fulfills the most desires. Rather, the right act is the act that a good person would perform, where a good person is a person with good desires, and good desires are desires that tend to fulfill other desires. If we took rule-utilitarianism and modified it so that the proper objects of moral evaluation are desires rather than rules, and desires are evaluated according to their consequences, this would be closer to Desirism.

Still, it turns out that Desirism is not a utilitarian theory. Desirism denies the existence of an intrinsic good to be maximized. Even desire fulfillment itself lacks intrinsic value and is not to be maximized. Instead, desire fulfillment - like everything else - is good only to the degree, and only for whom, there is a desire for desire fulfillment.

When desires are evaluated relative to other desires, the outcome is not desire maximization. It is a harmony of desires, where different desires fit comfortably together. One person's interest in making tools fits with another person's interest in hunting, which fits with another person's interest in building homes, which fits with another person's interest in making clothes - all while a flute player makes music in the background.

A utilitarian - seeking to maximize desire fulfillment - would say that two such villages are better than one because there is more overall utility. Desirism looks for reasons to bring the second village into existence from among the current desires.
If there is none, then bringing the second village into existence has no value.

Utilitarianism cannot handle Robert Nozick's Utility Monster - a hypothetical member of a community who gets a huge amount of utility from the suffering of others. Utilitarianism has to say that it is best to bring the utility monster into existence. Desirism, on the other hand, asks whether anybody has a reason to bring a utility monster into existence - and notes that, almost by definition, they do not.

-- Alonzo Fyfe (2016-08-30)

It seems to me that Desirism essentially reduces to regular Preference Utilitarianism.

I believe that, aside from the jargon, the concept of a Preference is identical to that of a Desire.

So, the question then becomes: if I'm wrong, is there a conceivable question for which Desire Utilitarianism comes up with a different answer than Preference Utilitarianism?

-- Pngwn (2016-08-30)