I have another question from the studio audience today.
[F]rom a desire utilitarian perspective, what is justice? As a theory of value, it seems pretty clear that desire utilitarianism has an answer to that question. I'm just not sure how to approach the question of what, for example, a judge would do in order to make a just decision, or how policy-makers would begin to structure a just law. Are these questions that make sense from a desire utilitarian perspective?
The Value of Justice
The questioner has agreed that desire utilitarianism provides an adequate theory of value. If this is true, then for justice to be good, it must be something that fulfills good desires (something a person with good desires would promote). The fulfillment of desires is, after all, the only type of value that exists.
Moreover, justice is something that fulfills good desires by definition, in the way that ‘useful’ refers by definition to something that fulfills at least some desires. If a set of institutions fails to fulfill the relevant desires, this does not imply that justice is bad. It implies that a particular situation is not just.
Types of Justice
Another set of facts that I want to throw into this analysis is that there are two major families of ‘justice’: retributive justice and distributive justice.
Retributive justice is concerned with determining and inflicting the appropriate levels of punishment for legitimate crimes. If an individual is punished for something that is not a legitimate crime, is punished too harshly or not harshly enough, or has had his guilt or innocence determined by illegitimate means, then he may rightfully claim to have been treated unjustly.
Distributive justice has to do with the distribution of wealth in a community. It can concern itself either with the final distribution of goods and services (who has what?), or with the rules for acquiring, holding, and transferring property (including labor) from one person to another, without regard to the final outcome so long as the rules are followed.
For one essay, I do not have time to speak to both types of justice, so I will speak to the type alluded to in the question from the studio audience – retributive justice.
History of Justice
In the case of justice, I think it is useful to understand what it is by going back to its roots. Besides, I must admit that I like this story because of what it says about putting religious symbols on government property.
Justitia was an ancient Roman goddess, typically depicted as a woman holding a set of scales in her left hand and a sword in her right, and wearing a blindfold. Not all ancient depictions take this form; the image of Justitia has evolved over time. However, the way it has evolved, and what it has evolved into, tells us something about the institutions she represents.
The scales mean that we are going to consider all evidence – evidence for and evidence against a proposition. We are not going to make a decision by listening to only one side of the debate. Thus, we realize justice in a court of law where the prosecutor presents her evidence, but the defense has an opportunity to examine and respond to all of it.
President Bush’s military tribunals are inherently unjust because they allow prosecutors to present secret evidence. This is evidence that the defendant is not permitted to see or to respond to. As such, it is the equivalent of putting a weight on one side of the scale without allowing even the opportunity to consider what weight might be put on the other side against it.
Another feature captured with the scales is the idea that the decision – on which side the greater weight rests – is not determined by the whim of the judge. It is determined by an outside source, something that does not care which side ends up having the greater weight. We capture this element in a system of justice by having the decision be made by an impartial judge and a jury of one’s peers. The judge, being an employee of the state, is not even considered sufficiently unbiased in many cases to make a just decision. The decision is handed over instead to a group of citizens.
The other symbolic representation that we find associated with the statue of Justitia is the blindfold. This represents a recognition that there are certain things that tend to sway our opinion, and that we must work hard to establish institutions that help us avoid that weakness. We need to make sure that justice is blind to irrelevant facts that might otherwise arouse the passions – facts about race, gender, or personal characteristics that are not relevant to an individual’s guilt or innocence (e.g., homosexuality, where homosexuality is not relevant to whether or not an individual held up a convenience store).
It may seem unfair to have a trial where the jury is simply not permitted to see certain pieces of information. It would seem that all information is relevant to a case. However, we know that there are many types of information that may prejudice a jury, causing jurors to reach conclusions that are not justified by the evidence. Before evidence is brought into court, the judge has the power to rule on its relevance. There is simply no need to waste time on data that is not relevant, or to risk that it might prejudice the opinion of a juror who mistakenly sees relevance where none exists.
These are some elements in what we find to be a standard system of retributive justice. How is it that these things come to have value? More specifically, how is it that they come to have moral value?
They come to have moral value in the way that all things have moral value; they are things that a person with good desires would love, where good desires are desires that tend to fulfill other desires. A good person values a fair trial – a trial in which the defendant has the opportunity to tell his side of the story, where the verdict is rendered by an impartial jury, and with a procedure that makes sure to confine the case to relevant evidence. A good person would insist on a trial like this because such a trial is most likely to fulfill the more and the stronger of all desires.
Here, I want to bring into the discussion the difference between rule and desire utilitarianism. The rule utilitarian would have us take these principles of justice as a set of rules that, if followed, would tend to maximize utility. The rules have no value in their own right. They do not identify principles of intrinsic worth. They are rules that we adopt merely because they are useful.
They are rules that we can throw out the instant they cease to be useful. The instant a political leader deems it useful to throw out the concepts of a fair trial, he may do so. The rules, after all, exist only to serve the public good, and can be tossed as soon as one believes that tossing them will serve the public good.
However, to the desire utilitarian, these are not just rules. When a principle of justice becomes the object of a desire – of a passion – then it is no longer merely a means to some end. The rules become ends in themselves. They become an object of passion such that, when we measure the utility of a mere rule, its utility is measured by its ability to bring about trials in which the accused is able to confront the evidence against him, the trial is heard by an impartial jury, the accused is presumed innocent, and the burden of proof is on those who would inflict harm rather than on those who would be harmed.
Correspondingly, when the principles of justice have become objects of desire, then the agent will view those things that threaten these principles as he would those things that would cause him personal pain or do harm to a person that he loves. In fact, there is an important similarity between the love that the agent may have for his child and the love he might have for one of these principles. In both cases, he protects the principle or the child not merely because the principle or child is useful, but because the principle or child is something he wants to protect and defend.
Desire utilitarians do not ask whether the principle itself is useful in this or that instance. The desire utilitarian asks about the usefulness of the love for the principle. ‘Justice’ itself refers to those principles, used in determining guilt or innocence and the appropriate levels of punishment, that people generally have reason to encourage their neighbors not only to follow, but to love, to protect, and to nurture.
These are the questions that make sense from a desire utilitarian perspective.
Desire utilitarianism seems to me far too subjective to be of any practical use as a theory of morality. Exactly whose desires are we meant to follow, and when are those desires to be deemed "good", in some objective sense?
A good person values a fair trial – a trial in which the defendant has the opportunity to tell his side of the story, where the verdict is rendered by an impartial jury, and with a procedure that makes sure to confine the case to relevant evidence. A good person would insist on a trial like this because such a trial is most likely to fulfill the more and the stronger of all desires.
No, a good person would insist on a trial like this simply because she might one day be on the receiving end of it.
Alternatively, if we were all represented by agents who knew nothing about our specific circumstances (John Rawls' "veil of ignorance"), such is the only system they would agree on, for they also would not know on which side of the courtroom their principals might find themselves.
So it's back to the Golden Rule, I'm afraid!
Theo - I'll try to summarize briefly, to see if I still have a firm grasp of DU. (It's said you don't really KNOW something until you can teach/explain it to someone else)
A) Any intentional action is motivated by reasons for action. If there was no reason for action, no intentional action would be taken.
B) Desires are the only reasons for action that exist.
C) Some actions either directly or indirectly thwart the desires of others.
D) These people have a reason for action to prevent desires which would lead to those actions.
E) Some actions either directly or indirectly help to fulfill the desires of others.
F) Those people have a reason for action to promote desires which would lead to these actions.
1 - A desire that people in general have many strong reasons to discourage or eliminate is defined as "bad". Note that we don't have to use the word "bad", we could use the word "blue" or "heavy", but the fact remains that this is a desire that everyone has good reasons to discourage.
2 - A desire that people in general have many strong reasons to encourage is defined as "good". Again, the exact word used doesn't matter.
I believe this answers both the "whose desires" and "when are they good" questions. And it answered them using only objective statements of fact, no subjectivity involved. Let me know if you disagree. (Incidentally, any of these objective statements could be wrong, much like the statement "This rock weighs 10lbs" could be wrong. However, they are not subjective, and are testable, and hopefully over time incorrect statements can be discovered and corrected to refine the theory.)
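To make the "testable" point a bit more concrete, here is a rough sketch in Python. The scoring rule and the numbers are entirely invented for illustration; they are not part of DU itself.

```python
# A toy rendering of definitions 1 and 2 above: a desire is "good" to the
# extent that people in general have reasons (desires of their own) to
# encourage it, and "bad" to the extent they have reasons to discourage it.
# The inputs and thresholds here are hypothetical.

def classify_desire(tends_to_fulfill: float, tends_to_thwart: float) -> str:
    """Label a desire by the net reasons people in general have to promote it."""
    net = tends_to_fulfill - tends_to_thwart
    if net > 0:
        return "good (people generally have reasons to encourage it)"
    if net < 0:
        return "bad (people generally have reasons to discourage it)"
    return "neither"

# e.g. an aversion to theft tends to fulfill far more desires than it thwarts:
print(classify_desire(tends_to_fulfill=0.9, tends_to_thwart=0.1))
# ...while a desire to take others' property tends to thwart far more:
print(classify_desire(tends_to_fulfill=0.1, tends_to_thwart=0.9))
```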
In the interest of completeness, I'll also point you to previous blog entries of Alonzo's.
Hateful Craig (my favorite)
http://atheistethicist.blogspot.com/2006/12/hateful-craig-problem.html
Who Gets To Decide
http://atheistethicist.blogspot.com/2007/10/who-gets-to-decide.html
Harmony of Desires
http://atheistethicist.blogspot.com/2007/10/harmony-of-desires.html
Why Are Desires So Important
http://atheistethicist.blogspot.com/2007/10/why-are-desires-so-important.html
Thanks for the summary and the bouquet of links, Eneasz. I am still catching up on Alonzo's writings, but I do have a few preliminary comments and questions, all of which have probably been said before, so I hope you'll both indulge me.
A) Any intentional action is motivated by reasons for action. If there was no reason for action, no intentional action would be taken.
Not true. If it was, then an agent should always be able to provide the reason for her (intentional) action, and this is not the case. We all do things from time to time for no reason at all.
B) Desires are the only reasons for action that exist.
Not true. Rationality can overcome our desires. We can rationally decide not to pursue our desires, even if such actions would not have thwarted the desires of others.
C) Some actions either directly or indirectly thwart the desires of others.
Not true. Nothing I can do, short of chemical intervention, can thwart the desires (or thoughts, or emotions) of someone else. I can, however, thwart their actions in response to those desires (or thoughts, or emotions).
D) These people have a reason for action to prevent desires which would lead to those actions.
E) Some actions either directly or indirectly help to fulfill the desires of others.
Same objection as in (C).
F) Those people have a reason for action to promote desires which would lead to these actions.
1 - A desire that people in general have many strong reasons to discourage or eliminate is defined as "bad".
2 - A desire that people in general have many strong reasons to encourage is defined as good.
And herein lies the problem, it seems to me, with Desire Utilitarianism (and Utilitarianism in general).
Firstly, there is the practical problem of finding out how widely held a certain desire is, or what reasons people may have for promoting or thwarting it. Do we call a referendum? Conduct an internet poll?
Secondly, there is the important question of why the "majority" view should be followed? If the majority of people have good reasons for desiring the death penalty for capital crimes (say), does that make it right? Surely not.
Yes, desires had their role to play in our evolutionary past (and still do play a role amongst the "lower" animals), but haven't we, as human beings, evolved beyond this? Do we not now have to provide rational reasons for why actions should be deemed "good" (encouraged), "bad" (discouraged / outlawed) or "undecided" (allowed)?
Theo
(A) There is a wealth of scientific research that shows that we are not always consciously aware of the reasons for our own action. In fact, even when we look at our own actions, the best we can offer is a theory as to what beliefs and desires motivated us - a theory that can sometimes be proved wrong. Nature has good reason for us to evolve an ability to sense the outside world relatively accurately, but absolutely no reason to give us an evolved 'inner sense' of our own reasons for action. Such a sense does not exist. The existence of a reason for action does not imply a conscious awareness of it.
(B) "Desires are the only reasons for action that exist" is a slogan. In fact, intentional action requires the interplay of beliefs and desires. Of these, desires identify the ends of intentional action, and reason identifies the means. So, the above phrase can more clearly be stated that "Desires provide the only reasons-as-ends that exist." But, if I write it this way, I then have to go into a long explanation as to what reasons-as-ends are. The way I phrase it tends to be well enough understood in most cases.
However, rationality gives us no power to overcome our desires. If you think that it does, then please provide me with an example of how this can happen. Please explain how reason alone can generate a conclusion that can motivate an action contrary to what an agent cares about. Whatever conclusion that reason alone can reach, it implies nothing about how the agent will act, unless the agent also cares about that particular conclusion.
(C) Within desire utilitarianism, to 'thwart' a desire has a very technical meaning. Desires are propositional attitudes. If an agent has a desire that P (for some proposition P), then that desire is fulfilled in any state of affairs S in which P is true, and thwarted in any state of affairs S in which P is false. Any time an action has the capacity to make a proposition P that is the object of a desire true or false, that action has the capacity to fulfill or thwart that desire.
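If it helps, here is a minimal sketch of that reading in Python. The class, the example proposition, and the way a 'state of affairs' is represented are all mine, offered only to illustrate the definition above, not to add anything to it.

```python
# A desire is treated as an attitude toward a proposition P. It is fulfilled
# in any state of affairs in which P is true, and thwarted in any state of
# affairs in which P is false. Names and examples here are hypothetical.

from dataclasses import dataclass

@dataclass(frozen=True)
class Desire:
    proposition: str   # the P in "a desire that P"
    strength: float = 1.0

def is_fulfilled(desire: Desire, state_of_affairs: set) -> bool:
    # Fulfilled: the desired proposition is true in this state of affairs.
    return desire.proposition in state_of_affairs

def is_thwarted(desire: Desire, state_of_affairs: set) -> bool:
    # Thwarted: the desired proposition is false in this state of affairs.
    return desire.proposition not in state_of_affairs

# An action that makes the proposition true or false can therefore fulfill
# or thwart the desire without ever touching the agent's mind.
d = Desire("the accused sees all of the evidence against him")
print(is_fulfilled(d, {"the accused sees all of the evidence against him"}))  # True
print(is_thwarted(d, set()))                                                  # True
```

This is also why no chemical intervention is needed to thwart a desire: the action changes which propositions are true, and that alone settles whether the desire is fulfilled or thwarted.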
(D) and (E) Same response as to (C)
(F.1) Some moral questions are easy to answer. People do not find it all that difficult to understand the value in promoting an aversion to wanton killing, rape, theft, dishonesty, and the like. Some moral questions are difficult to answer. Desire utilitarianism respects the fact that some moral questions are difficult to answer and are likely to generate intense debate. This is not a criticism; it is a strength - that desire utilitarianism can account for moral disagreement. The way we resolve moral questions is by debating the available evidence as to whether there are in fact reasons to promote a particular desire or aversion. Debates in ethics are no more to be resolved by referendum or an internet poll than debates in science.
(F.2) Desire utilitarianism does not pay any attention to majority rule. It looks at the more and stronger reasons for action (desires). To determine the value of capital punishment, you determine if a person with desires that tend to fulfill other desires is somebody who would favor capital punishment. There is a right answer to this question - and the majority can very well be mistaken as to what that right answer is.
(F.3) Reason alone cannot provide us with ends - only with means. There is simply no such thing as a rationality of ends. Any statement that a moral position can be supported by a rationality of ends is as false as a statement that a moral position can be supported by divine command.
Theo,
My turn to try to explain. I have very little training in philosophy, ethics, or psychology, so I hope that where I fail, Alonzo, Eneasz, or others can correct me or expand on the explanation.
A) Not true. If it was, then an agent should always be able to provide the reason for her (intentional) action, and this is not the case. We all do things from time to time for no reason at all.
We may do things for no reason that we are aware of, but we still have a motivation. For example, if I don't care what I want for breakfast and pour myself a bowl of cereal that is not my favorite, I may be motivated by a desire not to waste time thinking about breakfast when I could be pondering the deeper motivations of a blog poster. If I run on autopilot on my way out the door, it may be that I have made all the decisions necessary long ago (what to wear on a certain day of the week in certain weather, when to leave to catch the bus, etc.) and am motivated by a desire for efficiency - to not reassess my actions as long as they work at an acceptable level when I could spend that effort on something else (or just forego any effort at all).
I'm trying to think of a counterexample that is not motivated by efficiency/laziness. But I am sure there will be a motivation involved. I pour breakfast, despite disinterest, because I still desire to eat to stay alive and operate at maximum productivity (or with least discomfort). A person does not do anything that they do not want to do. A great example Alonzo sometimes uses of a person who does things they don't want to do involves a form of perfect, distant mind control. If someone is controlling my mind and body to perform actions, then it is the desires of that person that cause my actions, and not my own desires.
B) Not true. Rationality can overcome our desires. We can rationally decide not to pursue our desires, even if such actions would not have thwarted the desires of others.
Even in the case of rationality, there are still desires involved. We may want to eat chocolate, but think it will have long-term bad effects on our health. In this case, there are two desires: the desire for chocolate, and the desire for long-term health. Rationality would dictate we choose health, as that will fulfill the more and stronger of our current desires. At some point, logic is dependent on desires to make all the decisions it makes. If a person is unmotivated by a desire for long-term health, and does not care about the personal and social costs and discomfort, that person will make a perfectly rational decision to eat chocolate to the point of nausea.
C) Some actions either directly or indirectly thwart the desires of others.
Not true. Nothing I can do, short of chemical intervention, can thwart the desires (or thoughts, or emotions) of someone else. I can, however, thwart their actions in response to those desires (or thoughts, or emotions).
This seems like a good point. It is not the desires themselves that are being thwarted, but actions and results that may lead to those desires being fulfilled.
However, one major issue of desire utilitarianism is that our environment, and specifically the environment that other people provide, can influence our desires. This happens through the use of praise, reward, condemnation, and punishment. Say I have a new ethics theory that I've been mulling over, and I mention it to my friends one day. Whether I continue along those lines depends a lot on whether my friends call me an idiot for thinking like that, or suggest I should write a book so the whole world could come to understand it. A lot of the legal system is based on the reward and punishment aspect. Far subtler but more important is the praise and condemnation aspect. Even more important - this can be tested in psychology labs to find the most effective ways of using praise and condemnation, or tested as a hypothesis itself to see if praise and condemnation really work to modify actions, desires, both, or neither.
...
I have no answer to the practical issue of finding out exactly what is worthy of praise, reward, condemnation, and punishment, other than to suggest it can be scientifically tested (create a hypothesis and get a bunch of psychologists, sociologists, economists, etc. to come up with a repeatable test for the accumulation and interpretation of data) and it will take a lot of time and resources. And, to hope that we've come a long way. This hope is also testable, if we could judge the numbers of desires now that are fulfilled compared with the numbers of desires a thousand years ago, in statistically meaningful terms.
Secondly, there is the important question of why the "majority" view should be followed? If the majority of people have good reasons for desiring the death penalty for capital crimes (say), does that make it right? Surely not.
It is not necessarily a majority view, but the more and stronger of all desires, that should make the decision. To quantify it (something that is very difficult to do in real-world terms, but may be useful as an illustration), let us take the death penalty. I am inventing a term, desire-strength, to refer to a quantity of desires multiplied by how strongly each is wanted. This is expressed as a proportion of each person's total, on the assumption that each person's total desire-strength is equal to every other person's. Say, for example, that 80% of people have 51% desire-strength that would be fulfilled by the death penalty, 10% are exactly equal in desires fulfilled and unfulfilled, and 10% have 90% desire-strength that would remain unfulfilled by the death penalty. In this case, the majority who desire the death penalty only desire it a little, while the minority desire NO death penalty by a lot.
Incidentally, studies may not be able to give accurate numbers to fill in my made-up numbers, but they can come a lot closer.
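For what it's worth, here is a quick sketch that runs those made-up figures; nothing in it should be mistaken for real data.

```python
# Each group is (share of the population, fraction of its desire-strength
# fulfilled by the death penalty), with each person's total normalized to 1.
# The numbers are the invented ones from the paragraph above.

groups = [
    (0.80, 0.51),  # large majority, mildly in favor
    (0.10, 0.50),  # exactly balanced
    (0.10, 0.10),  # small minority, strongly against (90% unfulfilled)
]

fulfilled = sum(share * frac for share, frac in groups)
unfulfilled = sum(share * (1 - frac) for share, frac in groups)

print(f"fulfilled: {fulfilled:.3f}, unfulfilled: {unfulfilled:.3f}")
# fulfilled: 0.468, unfulfilled: 0.532
```

On these numbers the death penalty thwarts more desire-strength than it fulfills, even though 80% of people favor it - which is the point about majority rule above.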
Yes, desires had their role to play in our evolutionary past (and still do play a role amongst the "lower" animals), but haven't we, as human beings, evolved beyond this? Do we not now have to provide rational reasons for why actions should be deemed "good" (encouraged), "bad" (discouraged / outlawed) or "undecided" (allowed)?
As I understand it, adherents of DU do not perceive the "allowed" category as "undecided," but rather as a separate category of its own. For example, hobbies. It is not the case that all people should be forced to play with train sets; neither is it the case that they should be prevented. We are not undecided as to the morality of playing with train sets. It is permitted but not mandatory.
No, we have not evolved beyond desires. We still do what we want to do and only what we want to do. If we do not have a stronger desire to do a thing (say, to speak when being tortured) than to not do it, we will not do it; we will instead do that for which we have the strongest desire, or the most desires, or the more and stronger desires.
I took up a lot more space than I intended to here, but I hope it is useful to someone, if only as an example (good, for preference).
Regarding emu sam's last point.
There are three categories of moral action: obligatory, prohibited, and permissible (neither obligatory nor prohibited).
My standard example of permissibility is:
It would be a bad thing if everybody were an engineer. It would also be a bad thing if nobody were an engineer. Being an engineer is something that we want some people to do, but not everybody. Engineering, then, is 'permissible' - neither obligatory (everybody does it) nor prohibited (nobody does it).
Thanks for taking the time to respond, Alonzo and Emu Sam.
...rationality gives us no power to overcome our desires. If you think that it does, then please provide me with an example of how this can happen. Please explain how reason alone can generate a conclusion that can motivate an action contrary to what an agent cares about.
Well, an agent might care deeply about an insurmountable social issue (the fate of hungry children in poor countries, say), and therefore desire for their fate to be improved. Moreover, the whole world might agree with her, thereby making that desire a "good" one (if I understand DU correctly), and hence an action to be promoted or encouraged.
Through reason alone, however, she (and the rest of the world) might come to understand the complexities of the issue and hence take no further action in trying to fulfill that desire (i.e. abandon the cause).
How might that decision have been driven by desire and not by reason?
The way we resolve moral questions is by debating the available evidence as to whether there are in fact reasons to promote a particular desire or aversion. Debates in ethics are no more to be resolved by referendum or an internet poll than debates in science.
Except that DU aims to identify those "desires that people in general have many strong reasons to encourage" (to use Eneasz's words). As Emu Sam also pointed out (I think!), we need some form of "DU calculus" to separate the "good" desires from the "bad" (and both from the "permissible"). It may be possible in theory, but in practice this is simply not feasible. Commendable as it might be as a descriptive moral theory (by which I'm not saying that it is!), DU unfortunately fails as a candidate for a prescriptive practical system of morality.
In your example, the agent might have other desires, such as the desire to be effective, or the desire to do the most good she can with the resources at her disposal. If ALL she cares about is to help the children, she might well go headfirst into helping those children at all costs, even to the point of neglecting her own health, stealing from the rich to give to the poor, etc. She has no care but to help the children, and therefore does not care who else is hurt in the process, or that it is an impossible task.
I do like the idea of DU calculus, but that is because if you can quantify everything and reduce it to equations or inequalities, you have greater certainty that the answer is correct. I agree that quantifying desires is not feasible in practice. However, my fondness for absolutes should not be seen as a problem with DU.
The unfeasibility of gathering the necessary data to make these decisions in a short period of time is a problem with prescribing action that needs to be taken immediately. Thus, we must estimate the data as accurately as possible given time and resource constraints, and accept that we may be wrong, and try to correct our mistakes. Furthermore, we should look for these errors after the fact and try to come up with ways of making a better estimate in similar circumstances.
(I'm arguing for the numbers again there. Alonzo, does DU advocate such numbers and equations?)
I think, by Alonzo's most recent comment, the permissible desires are good desires in DU. We have reason to praise and reward engineers (by giving them a salary, for example). I still like my neutral-permissible category. Perhaps this is a fourth category?
Hello Theo! Sorry for the long delay in reply, I generally don't read blogs over the weekends, and catch up on Mondays.
First I have to make the disclaimer that I am a layman, with no formal training in philosophy, ethics, etc, merely a strong interest in them. As such my understanding is incomplete at best and you should always take Alonzo's explanations over mine. I'm still trying to figure this all out myself. :)
I don't think discovering which desires will generally lead to the greater fulfillment of desires in general (and conversely, which will generally lead to greater thwarting of desires in general) is as insurmountable a problem as you portray it. Obviously it would take a great deal of effort, time, and resources to work out the sticky details. On the other hand, there's also a lot we already know.
The problem I have with ethical theories in general is the same problem I have with ancient "natural philosophers" - they are based on nothing. Philosophers spent centuries debating the nature of reality. What makes heavy things fall and light things rise? What are things made of? What makes something real? From all this empty talk we got atomism (which was close to the truth only through luck), platonic ideals, the Logos, the Four Elements, and all sorts of silliness. And the reason we got so much useless conjecture was that these philosophers were unwilling to roll up their sleeves and actually experiment with the world, instead of just dreaming things up and arguing for the heck of it. The only fields they had much success with were Mathematics and Logic, since you don't need much interaction with the real world to refine a formal system.
Thankfully, science in general has moved beyond that point, and now deals with what can be observed, tested, etc. But morality was held back. In the realm of ethics, generally people are still sitting around debating empty ideals.
DU is the only ethical theory I've found so far which finally moves ethics forward into the realm of observable, testable, modifiable theory. And yes, it will take a LOT of work. How many centuries have we now been experimenting with and refining science? Since the Enlightenment, I would argue. How many thousands of humans have dedicated decades of their lives to their fields, how many millions of dollars have been spent on research? Ethics is finally entering this arena, and there is a LOT of catching up to do. It will be decades before ethics even comes close to the refinement of the other sciences. But simply because this is daunting doesn't mean we should fall back to meaningless argument over insubstantial hunches. Any scientist who ever said "This experimenting is hard, I'm going to go back to sitting around the monastery and just thinking deep thoughts" would be betraying his profession.
I know I've gone on too long, but a quick point on descriptive vs prescriptive - they aren't two different things. A descriptive theory of physics tells you how things move. It doesn't say how things should move. If you introduce a goal ("I want to get to the moon!") you can use this theory to prescribe actions to take in order to accomplish this goal. An ethical theory is the same way. It describes how people behave, and why. It doesn't say how people should behave. Unless you introduce a goal (such as "I would like to make the world a better place for future generations"), in which case you can use the theory to prescribe ways to act in order to accomplish this goal.
Thanks Eneasz! Hope you had a most relaxing weekend.
I don't think discovering which desires will generally lead to the greater fulfillment of desires in general (and conversely, which will generally lead to greater thwarting of desires in general) is as insurmountable a problem as you portray it.
Perhaps not, but until someone does this, we won't know, will we? So what do we do in the mean time? ;-)
Even if it can be done, at least in theory, I would still question the rationale for embarking on this course of analysis.
Let's assume, just for the sake of argument, that we've discovered and categorised all possible desires and their associated intentional actions (am I phrasing this correctly?). The results are widely published, peer-reviewed, replicated and confirmed, amidst loud huzzah. The world now has a list of "good" desires and actions (let's call it "The Good Book"), well-understood and widely accepted for its practical ability to lead to the greater fulfillment of desires in general.
...
...
So what?
Why, do you think, would people start living according to the practical wisdom of The Good Book? Alternatively, why should people, in general, live their lives as such?
I shall keenly await your answer!
Just a follow-up on "descriptive" vs "prescriptive". You wrote:
An ethical theory is the same way. It describes how people behave, and why.
This definition is far too narrow, if not downright incorrect. Describing how and why people behave the way they do is good and proper science, but it's not Ethics. At least not the important part, which is to produce a satisfactory answer for how people should behave.
It was a pretty good weekend! Had a lot of fun Friday and Saturday, and got a lot of my stuff packed on Sunday (I'm moving later this week).
You've come across my principal reason for not accepting DU when I originally came across it. The answer (and what finally brought me over) is basically The Hateful Craig Problem post, in its entirety. However I will summarize, as well as quote a wee bit of it.
"Should" is often used rather loosly, so we should spend a second to nail it down. I assume that when you say someone "should" do something, you mean they have reasons to do that thing. It would, after all, be pretty non-sensicle to both say that someone should do something, and at the same time say he has no reasons to do that thing.
In addition, you probably mean that the sum of his reasons for doing that thing is greater than the sum of his reasons for not doing it (or doing something else). If he had more reasons to do something else in place of this action, then it follows that he should do that other thing instead.
Desires are the only reasons for action that exist (well, with the elaboration given by Alonzo above). Therefore to say that someone should do something is to say that they have more and stronger desires that would be fulfilled by doing that thing than by following an alternate course of action.
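As a rough illustration (the agent, the desires, and the strengths below are all invented; this is just the comparison of sums in code form):

```python
# "Should do X" is read as: X fulfills the more and stronger of the agent's
# desires, given his beliefs about what each act would make true.

def strength_fulfilled(act_outcomes, desires):
    """Sum the strengths of the desires whose propositions this act makes true."""
    return sum(strength for prop, strength in desires.items() if prop in act_outcomes)

# A hypothetical agent's desires (proposition -> strength)...
desires = {
    "I keep the money": 2.0,
    "my neighbor's desires are not thwarted": 5.0,
}

# ...and his beliefs about what each available act would make true.
acts = {
    "steal": {"I keep the money"},
    "refrain": {"my neighbor's desires are not thwarted"},
}

best = max(acts, key=lambda act: strength_fulfilled(acts[act], desires))
print(best)  # "refrain" - but only for THIS agent; with malicious desires the
             # answer flips, which is exactly the problem raised next.
```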
You can probably already see the problem: this may not be true in all cases. The person may have malicious desires, and therefore your "should" statement would be false. However, the rest of us would have reasons to change his desires so that he is no longer dangerous to the people around him.
On a broader scale, all people in general have many reasons for action to promote in their society desires that tend to fulfill their own desires, and discourage desires that tend to thwart their desires.
Hopefully that all made sense.
A few quotes from the post:
I think that this false assumption represents the most serious problem people have with this theory. They assume that I am claiming to have discovered an argument that will inevitably cause the listener to do what this argument says he ‘should’ do. They shake their heads and think, “Poor, Alonzo. He just can’t see how stupid it is to say that we can come up with an argument that will invariably convince people to do the right thing.”
Ironically, the thing that I am often challenged to do, by people who assume that I have claimed to be able to do, is something that I say cannot be done – by me or anybody – which is to change desires by reason alone or, through reason alone, convince somebody to perform an action that does not aim to fulfill the more and the stronger of his desires given his beliefs.
They continue to think this, no matter how many times I insist that a person with a particular set of desires will act so as to fulfill the more and the stronger of those desires given their beliefs and the only thing reason alone can teach him is how to be more successful at fulfilling those desires.
What I am going to answer is, "The claim that you should have these desires that fulfill the desires of others is the claim that people generally have reason to use the tool of praise on those who exhibit such desires, and the tool of condemnation and, in the worst cases, punishment, on those who are hateful. It would be nice to be able to get you to want to fulfill the desires of others through reason alone. Unfortunately, reason cannot be used for that particular job – it is ineffective. But reason tells us what tools we can use: praise and condemnation. The claim that you should have desires that fulfill the desires of others is the claim that reason tells us to use the tools of praise and condemnation to help bring it about that people generally have such desires."
Hi guys, an interesting discussion. Let me interject and add to where you are currently at.
@Eneasz: I don't think discovering which desires will generally lead to the greater fulfillment of desires in general (and conversely, which will generally lead to greater thwarting of desires in general) is as insurmountable a problem as you portray it.
@Theo: Perhaps not, but until someone does this, we won't know, will we? So what do we do in the mean time? ;-)
Let me add the standard philosophical distinction between a criterion of rightness and a real-time decision procedure - hopefully this is self-explanatory. This leads to two questions.
The first is DU as a criterion of rightness. Is it sufficient or can you suggest a better one? (There are plenty of worse ones, of course, based on falsehoods and fictions).
If you grant that DU is at least a sufficient criterion of rightness, then the second question one can reasonably ask is how it can work in practice as a decision procedure. DU I think provides three answers:
(A) The first is that DU advocates the encouragement of good desires and the discouragement of bad desires; the outcome of this is that people so influenced do not want to fulfill bad desires or thwart good desires. This typically involves common generalizations such as aversions to theft, murder and so on.
(B) In specific instances where one has the time and the need to deliberate, one can use DU to help identify the effects of different desires. We don't always have time to do this and mostly rely on (A); when we do deliberate, a good person would try their best to evaluate desires given the time and data they have. Is this likely to be perfect or without error? No. However, any other good person would likely come to the same decision within the same constraints. Others who do not care, or who reason based on false and fictional concepts, are less likely to arrive at the tentative best decision.
(C) There are difficult moral questions which do not need an immediate decision but do require considerable examination and discussion. Here the question is whether DU is a suitable and sufficient framework for such an analysis, or whether you have something better.
@Theo: Even if it can be done, at least in theory, I would still question the rationale for embarking on this course of analysis.
...
Why, do you think, would people start living according to the practical wisdom of The Good Book? Alternatively, why should people, in general, live their lives as such?
The Good Book is a misleading idea. Science is provisional, challengeable and revisable based on successful challenges. There is no difference here. There is no final absolute answer.
Further, read up on Alonzo's Hateful Craig post regarding motivation to follow DU.
@Theo: Describing how and why people behave the way they do is good and proper science, but it's not Ethics. At least not the important part, which is to produce a satisfactory answer for how people should behave.
And shoulds and oughts are reasons-for-action. DU describes how these work. If anyone does not want to live in a better world, they may act against these and expect to be met with praise, reward, condemnation and punishment by those who do.