RichardW writes:
It's no use asking me to supply a definition for you. The main reason I am a moral anti-realist (essentially a non-cognitivist) is because it seems impossible to define moral terms (without using other moral terms).
If you refuse to choose a definition then how are we supposed to engage in any type of discussion? We can't even reduce ourselves to grunts and whistles without, first, assigning meaning to grunts and whistles.
As it turns out, I am an anti-realist too. And I am a non-cognitivist. I am also a realist and a cognitivist.
How is it the case that I can be all of these things? Don't they contradict each other?
No, they do not. What people take to be different moral theories are, in fact, different moral languages. Realism and anti-realism no more contradict each other than Einstein's theory of relativity in German contradicts Einstein's theory of relativity in Chinese.
They appear to contradict each other because both theories use the same terms. So, "Moral properties do not exist" in one language appears to contradict "Moral properties do exist" in another language. However, they are different languages (as opposed to different theories) precisely because they use two different meanings of the term "Moral properties".
Here's the argument as I see it.
Person 1: Choose your definition of 'morality'.
Person 2: (Chooses a definition).
Person 1: Now, defend that definition as the correct definition.
Person 2: I cannot.
Person 1: Then we can throw out that definition of morality.
I take this to be logically equivalent to the following.
Person 1: Choose your language for expressing Einstein’s theory of relativity.
Person 2: I choose German
Person 1: Now, defend that language as the correct language for Einstein's theory of relativity.
Person 2: I cannot.
Person 1: Then we can throw out Einstein's theory of relativity.
Throw out moral terms, if that is what pleases you. I do not need them. Throw out every moral term – just cross it out of the dictionary and resolve never to use any of them ever again. Let us speak the language of moral anti-realist non-cognitivism.
Desires exist. Desires are reasons for action. People still act so as to fulfill the most and strongest of their own desires. Some desires are still malleable – they can be created or destroyed, strengthened or weakened, by social forces such as praise, blame, reward, and punishment. People still have “reasons for action that exist” (desires that they seek to fulfill) for promoting desires that tend to fulfill other desires. We have a whole family of propositions here that are objectively true or false.
Only, because we have adopted the language of moral anti-realist non-cognitivism, we are not reducing any of these statements to moral terms.
Or, we can choose a different language. Here's a suggestion:
"Good1" means "Is such as to fulfill the desires in question." That is to say, to call a state of affairs S "Good" is to say that there is a desire that P and P is true in S. It follows from this that those people who have a desire that P also have a "reason for action" to realize S.
They may also have reasons for action that realize not-S. That is, they may have a desire that Q, where Q is false in S. That desire that Q may be stronger than the desire that P. In this case, the agent has more and stronger reason to realize not-S than S. Let us use the term "Good2" to refer to states of affairs according to how well they fulfill all of an agent's desires. "Good2" means "Is such as to fulfill the most and strongest of an agent's desires." Notice that Good2 is just a species of Good1 where "The desires in question" are all of an agent's desires.
Now, we have an agent with a desire that P and a desire that Q. Now, if there is a state of affairs S' where P is true in S' and Q is true in S', then the agent has reason to realize S'. He has more and stronger reason to realize S' than to realize S. So, in this case, let us say that S' is "better1 than" S. "Better1 than" means "Fulfills the more and stronger of the desires of the agent."
Now, let us introduce a second agent. Agent2's desires are malleable, meaning that Agent1 has the power to choose Agent2's desires. Agent1, recall, has the most and strongest reasons for action to realize S' (since it fulfills both his desire that P and desire that Q). Agent1 can give Agent2 a desire that R1. R1 is true in S', so this means that Agent2 will have reason to realize S'. Or Agent1 can give Agent2 a desire that R2. R2 is true in not-S', so this means Agent2 will have reason to realize not-S'. Agent1 has more and stronger reason to give Agent2 a desire that R1.
Now, let us define Good3 in such a way that we will only use this definition of “Good” when we are evaluating desires. A desire is Good3 if it is such that it tends to fulfill the most and strongest of other desires. That is, it will tend to motivate the agent to act in such a way so as to make or keep true the propositions that are the objects of other desires. In this case, R1 is good3.
We can also add Good4 by the way. Good4 will only be used to evaluate intentional actions. A Good4 action is an action that a person with Good3 desires would have performed. That is to say, a Good4 action is an action that would fulfill the most and strongest desires of a person with Good3 desires.
Since all of these numbers are confusing, let us say that Good1 = Good in the generic sense. Good2 = Practical goodness. Good3 = Virtue. Good4 = Obligation.
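Purely as an illustration, here is a minimal Python sketch of how Good1 and Good2 compose (the propositions, strengths and states are invented; Good3 and Good4 would layer further evaluations - of desires, and then of actions - on the same fulfillment measure):

```python
# A toy model, not Fyfe's formalism. A state of affairs is the set of
# propositions true in it; a desire is a pair (proposition P, strength).
# All propositions and strengths below are invented.

def fulfillment(state, desires):
    """Total strength of the desires whose objects are true in the state."""
    return sum(strength for p, strength in desires if p in state)

def good1(state, desires_in_question):
    """Good1: some desire that P among the desires in question has P true in the state."""
    return any(p in state for p, _ in desires_in_question)

def good2(state, alternatives, agent_desires):
    """Good2: the state fulfills the most and strongest of the agent's desires,
    i.e. no available alternative fulfills them better."""
    return all(fulfillment(state, agent_desires) >= fulfillment(alt, agent_desires)
               for alt in alternatives)

# An agent with a desire that P (strength 2) and a desire that Q (strength 3).
agent_desires = [("P", 2.0), ("Q", 3.0)]
S = {"P"}            # P true, Q false
S2 = {"P", "Q"}      # the S' of the text: both P and Q true

print(good1(S, agent_desires))              # True: the desire that P is fulfilled in S
print(good2(S2, [S, S2], agent_desires))    # True: S' fulfills the most and strongest desires
print(fulfillment(S2, agent_desires) > fulfillment(S, agent_desires))   # True: S' is better1 than S
```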
What I have done here is invented a language. Furthermore, I would claim that this language is so close to common English that most native English speakers (particularly those who have not committed themselves to speaking a different moral language but who take English moral terms "as is") would be hard pressed to tell the difference.
Now, the challenge is, "Can I defend this language as being the correct language?"
Answer: I cannot.
But, see the previous section. Choosing between moral languages is no different than choosing between Einstein’s theory of relativity in German or Einstein’s theory of relativity in Chinese.
It would be useful if we all spoke the same language. It would certainly make communication more efficient. However, whether that language should be English, Spanish, Chinese, or some other language is not a subject that interests me. As for me, I speak English, and that is the language I will continue to write in. If you prefer a different language, I will leave it to you to do the translation.
It would be useful if we all spoke the same moral language. It would certainly make communication more efficient. However, whether that language should be cognitivist, non-cognitivist, realist, or anti-realist is not a subject that interests me. I speak cognitivist realist. If you read a different language, I will leave it to you to do the translation.
Though, actually, I do speak non-cognitivist anti-realist and can do the translation myself if I have enough reason to do so. I can't say the same about Chinese.
Fascinating --- this two language thing is intriguing me. Well put.
This is off subject, but
May I ask what you do for a living?
Alonzo, you seem to have a poor grasp of the principles of rational argument. You are the one making the positive claim. You claim you have derived an "ought" from an "is", i.e. you have derived an objectively true moral proposition from non-moral facts. [If I've misunderstood, and that is not your claim, please say so.] The onus is therefore on you to define the moral terms used in your moral proposition. Without such a definition you cannot possibly justify your claim.
You have also made the secondary claim that your moral terms are "substantially consistent with how people actually use the terms". This claim needs to be true, otherwise you would not have derived a moral proposition but something else masquerading as a moral proposition.
I'm simply pointing out that (a) you haven't given a clear definition, and (b) insofar as I can make any sense of it, your definition makes moral terms inconsistent with how people actually use them. I don't have to give my own definition in order to show that your definition is inadequate to support your claims.
I strongly recommend that you keep trying to define what your moral terms mean. Without such a definition, your whole moral philosophy is a house built on quicksand.
P.S. You wrote: "Furthermore, I would claim that this language is so close to common English that most native English speakers... would be hard pressed to tell the difference."
That's the claim I'm arguing against.
In our previous discussion, you defined "ought" to mean "there are reasons for action that exist". I pointed out that this appears to make the proposition "people ought not to commit murder" equivalent to "there are reasons for action that exist for people not to commit murder". Try asking some people if they consider those to be equivalent. (They will probably reply that they can't say because they don't even understand what the second proposition means.)
P.P.S. Some people might agree they are equivalent because of an ambiguity. The expression "there are reasons to do X" can mean either "people have reasons for doing X" or "there are reasons why people should do X". If you make it clear that you mean the former (as you implied at http://atheistethicist.blogspot.com/2009/03/defining-reasons-for-action.html), I doubt that anyone will agree the two propositions are equivalent.
Before you can even begin to substantiate your claim, you need to come up with a clear unambiguous definition.
I've been at this with you and Luke for nearly a month now, trying to get a clear definition out of you. I can't believe I've wasted so much time on it. This will be my last post here unless I see a clearer definition from you.
RichardW,
Alonzo has given very clear definitions of what he means by moral terms. I don't know why you think he hasn't.
It sounds like you're getting frustrated, but understand that it's kind of frustrating for me to give you specific, word-for-word definitions for many moral terms and then have you tell me I have not defined my moral terms.
I'm with Luke on this. Many, many times we've said "Good" = "tends to fulfill more and stronger desires of everyone". Why you keep claiming that no definition has been provided is baffling.
"insofar as I can make any sense of it, your definition makes moral terms inconsistent with how people actually use them"
No, that is misleading. The DU definitions are inconsistent with most people's *definitions*. This is almost trivially true, because most people have never heard of DU. So they define good as "pleasing to God" or "gives the greatest utility" or something else. That is why they would disagree that they are equivalent.
However when you observe the *usage* of the terms, they are substantially identical. People act to promote desires that tend to fulfill other desires. They act to inhibit desires that tend to thwart other desires. When judging guilt, they evaluate the desires of the agent and not just the act. They allow for excuses that demonstrate that the act was not the result of the desires of the agent. They praise examples of good desires, and deride examples of bad desires. Often they will lash out at those who exhibit extremely bad desires. And in all this they use the terms "good", "bad", "obligation", etc. in a way that is fully consistent with how DU defines them, even if they don't know that definition themselves. Gravity does not need to know about Einstein's theory to work. Populations don't need to understand Darwin's theory to evolve. And people don't need to know of, or agree with, Desire Utilitarianism to be accurately described by its proposed mechanisms.
I know I said I wouldn't post again, but--more fool me--I just can't let Luke's post go unchallenged.
Luke, let me remind you (yet again) of our previous dialogue...
- I challenged you to explain what you mean by an objective moral statement like "Murder is wrong".
- You responded that "Murder is wrong" means "There are reasons for action that exist to not murder".
- Since this seems ambiguous to me, I asked you to clarify whether you meant (1) there are reasons why people _do_ not commit murder, (2) there are reasons why people _should_ not commit murder, or (3) something else.
- Instead of responding directly, you linked to a post by Alonzo which implicitly accepted meaning (1).
I then continued on the basis that (1) was what you meant, and you didn't object until much later, in the other thread here, when you finally announced that "That is not quite what I meant". And you made no attempt whatsoever to clarify in what respect it was not what you meant.
Alonzo has gone no further than saying he equates "ought" with "there are reasons for action that exist". This is not even grammatically consistent. One can say "I ought", but one cannot say "I there are reasons for action that exist". I asked him for clarification, as follows:
>>> I take it this is your definition of "ought". So "people ought not to commit murder" means "there are reasons for action that exist for people not to commit murder". But what exactly does that mean? Does it mean "there are reasons why people do not commit murder", which is the interpretation I put to Luke? If not, what does it mean? <<<
He didn't even reply, but just asked me for a definition instead!
If that's giving clear definitions, then I'm the King of China. (Now, where did I put that crown?)
Dave - that was hilarious, thanks! :)
RichardW - is there any reason for you to try to strengthen your fellow citizens' aversion to murder? Are there any reasons for you to disparage, condemn, shame, and punish those who do not show a strong enough aversion to killing? Are there any reasons for you to praise those who display a strong aversion to murder, and hold them up as examples for others?
That is what is meant by "good" and "bad", and "reasons that exist to not murder". Concerning your 1) vs 2) - these are both reasons why people should not AND do not commit murder.
RichardW
I think you are confusing motivational reasons and justificatory reasons.
A motivational reason is a reason to act that people have; these are the reasons why they did what they did (and not something else).
Justificatory reasons evaluate motivational reasons that people have (or lack): are these motivational reasons (or their lack) justified? Here we go from reasons that they have to reasons that exist - that they may not have.
The justification is determined by evaluating the motivating reasons they have or lack - in other words, desires - which are evaluated in the extended rationality model that treats ends as means in terms of their effects on all other ends.
Once the justification (or lack) is established, this can serve as a basis for promoting or inhibiting motivational reasons (desires) that people have or lack, at the very least in the eyes of those affected by the aforesaid motivational reasons that have been evaluated. This is the justification that they need to do this.
Note that this analysis is also itself a description of why those people employ such social persuasion to change those unjustified desires. They do not do this because of this analysis; rather, this analysis shows why they are motivated to promote or inhibit those unjustified desires.
This is why oughts are based on reasons for action that exist: they may be external to the person being addressed, and it is through social persuasion that these justified desires are instilled or unjustified desires removed - if the persuasion is successful; otherwise legal power might have to be employed.
The specific use of the moral terms shoulds, oughts, right and wrong etc. can be part of the expressive and emotional force in persuading people to have such desires and not others. However this is not a justification for the extended rationality evaluation itself to be biased by such emotional components; to be unbiased here means to transcend such factors.
Faithlessgod, that's a very interesting post. First, thank you for drawing my attention to the terms "motivational" and "justificatory" reasons. I now see that some other sources use the terms "explanatory" and "justificatory" reasons, which I prefer, and I'll use those.
I'll assume the rest of your post is meant as a defence or explanation of DU, though that's not stated.
You wrote: "Note that this analysis is also itself a description of why those people employ such social persuasion to change those unjustified desires. They do not do this because of this analysis; rather, this analysis shows why they are motivated to promote or inhibit those unjustified desires."
My only problem with this analysis is your use of the expression "unjustified desires". How can a desire be justified or unjustified? A desire is not the same as a reason. The existence of a desire is simply a fact of reality.
You wrote: "This is why oughts are based on reasons for action that exist because they may be external to the person being addressed..."
At this point you are talking about hypothetical oughts, aren't you? E.g. if you desire X, you ought to do Y to achieve it. I suppose we can omit the conditional if we are assuming knowledge of that desire as part of our background knowledge. But we are still talking about the means for a person to achieve his/her own ends. So there is no reason why I ought to take into account other people's desires except insofar as that will help fulfill my own desires. Therefore this does not justify DU's claim that everyone's desires should be taken into account (on an equal basis).
You wrote: "The specific use of the moral terms shoulds, oughts, right and wrong etc. can be part of the expressive and emotional force in persuading people to have such desires and not others. However this is not a justification for the extended rationality evaluation itself to be biased by such emotional components; to be unbiased here means to transcend such factors."
OK, but the "extended rationality evaluation" does not lead us to DU. It just leads us to the rational pursuit of self-interest. And if that be accepted, your analysis tells us to each promote the moral claims that best achieve our own self-interest.
An alternative view of DU could be this. People who have a desire to see a certain sort of society (e.g. a fair one) may decide to adopt DU as a programme for achieving that end. They may then wrap up that programme in the language of moral claims in order to get everyone to buy into it. But I don't think that's the position Alonzo is taking.
RichardW, I think your basic question is, "How do you determine what is good, using desire utilitarianism?" And your basic complaint is that we are describing how to use desire utilitarianism to enforce what has already been determined to be good without addressing your actual question. Am I correct, or have I created a straw man?
I believe Alonzo's stance is that for some propositions ("Murder is wrong") we're already pretty sure the proposition is correct and there is no one trying to change our minds, so we can let it rest.
For other, more controversial positions (e.g. "We should let five thousand people out of jobs and pensions because their business used bad practices, even though we cannot afford the dent in the economy and we can find a means to save those jobs, those pensions, and the business"), where there is no established perception of good and many people trying to argue both sides, we would need to employ experts to determine which way ("The proposition is false" or "The proposition is true") is the more moral way. Desire utilitarianism will argue that the way that fulfills the more and stronger of all desires - including future desires - is the "should" that we should adopt. In order to find out which is the better position, DU recommends listening to those who know most about the subject - economists in this case - and taking their advice.
DU also argues that the way that fulfills the more and stronger of the current desires of those making the decision (adjusted accordingly for those with more power in the decision-making process) is the should that we WILL adopt. Therefore we as a group have reason to put pressure on the decision makers - to modify their current desires so that they make a decision which is better for the group. We as individuals may lack sufficient reason to modify those desires to the necessary degree.
I hope I've addressed the right question, even though I wandered back into the social pressures aspect again at the end.
Hi RichardW
When someone asks for an explanation one can either reply with what did motivate them or why they were justified in so acting. Hence a request for an explanation is a request for either the motivations or justifications. However it is important to distinguish between motivations and their justification.
Of course someone can just state their motivation if they think it is justified. Or they could invent a motivation - that they did not have - which they think would be justified in the questioner's eyes, and so on.
The request for reasons is usually a request for the desires and beliefs that led to the action in question. One can present a belief that one used to achieve one's desire - and justification can apply to beliefs too - however it is only desires that specify ends and that motivate. Beliefs can lead or mislead but are not the basis for a motive.
If you can show me a desire-independent reason I would be very interested to see what it is; I assume I am looking for another fact of reality.
An unjustified desire is a result of an extended rational analysis of this end according to its effect on some other ends, as usual, unless you can propose an alternative these would be desires too.
This analysis is dependent on scope - whose desires? If the other desires are all the other desires of the person with the desire seeking justification then this is a question of prudence. If the scope is all desires affected, whoever has them, this seems the best use of the term moral.
There are, as far as I am aware, only hypothetical oughts; I would be very interested if you can show me how a categorical ought can exist in reality.
"But we are still talking about the means for a person to achieve his/her own ends. So there is no reason why I ought to take into account other people's desires except insofar as that will help fulfill my own desires."
No reason at all unless they give you one - through social persuasion and legal power. Unless you internalise such other reasons as your own desires, they are not going to count in your determinations. If persuasion does not work then power - penalties and sanctions - comes into play.
There are plenty of external reasons - other people's desires you do not have - but if you interfere with their pursuit of their ends, you have given them a reason to inhibit you in your pursuit. This is just the same if they interfere with your ends: this is your motivation to inhibit them from so doing.
"Therefore this does not justify DU's claim that everyone's desires should be taken into account (on an equal basis)."
No, the issue is: if you want to predict what will meet with social approval or disapproval, reward or penalties in the most general sense, then seeking a justification for your desire with respect to its fulfilment's effect on everyone else's desires will tell you that. When someone tells you something is wrong, this is usually what it means.
"OK, but the "extended rationality evaluation" does not lead us to DU. It just leads us to the rational pursuit of self-interest. And if that be accepted, your analysis tells us to each promote the moral claims that best achieve our own self-interest."
Extended rationality does not even lead to that, it just says your ends can be evaluated, by treating them as means.
Given this I am not sure how your conclusion follows.
I note you use the term "self-interest". Do you mean this in the narrow sense of "interest in the self" or the broad sense of "interest of the self"? Narrow self-interest means one only has self-regarding desires as ends; there is plenty of empirical evidence against this, or else it is an unfalsifiable non-empirical claim. Or do you mean this in the broad sense, which includes other-regarding desires as ends (not just as means, in the narrow sense of self-interest)?
"An alternative view of DU could be this."
I am not clear on what you are getting at here. DU in the descriptive sense is what people do anyway albeit corrupted by moral and other theories and so they do it in an incoherent (praise what is blameworthy, blame what is praiseworthy) and inconsistent (not praising all those who merit praise etc.) fashion. And of course many abuse power and persuasion if they can. An extended rational analysis of anyone's desire with unrestricted scope can identify these abuses and so tell who is motivated to stop them. Whether you care or not is up to you, as it is for everyone else. However those who claim and want to be moral should care.
faithlessgod, thanks for your reply. I thought I understood your position fairly well, but now I realise I don't have any idea what DU is. I think we'd better start from scratch.
You wrote: "DU in the descriptive sense is what people do anyway albeit corrupted by moral and other theories and so they do it in an incoherent (praise what is blameworthy, blame what is praiseworthy) and inconsistent (not praising all those who merit praise etc.) fashion."
OK, so one thing that DU is is a descriptive explanation of how people behave. What else is it? Is it also a rational programme of action, telling us what to do in order to achieve certain ends? Is it a moral calculus, telling us how to work out what it is moral to do? Both of these? Or something else entirely?
If it is a programme of action and/or a moral calculus, could you please summarise briefly what the programme and/or calculus say.
You wrote: "An unjustified desire is a result of an extended rational analysis of this end according to its effect on some other ends, as usual, unless you can propose an alternative these would be desires too."
I now have no idea what sort of rational analysis you are talking about. Could you please give an example.
(Luke tells me that your account of DU is correct, so I'm going to treat you as authoritative. I hope he or Alonzo will speak up if you get something wrong.)
P.S. In your first post, you wrote: "The specific use of the moral terms shoulds, oughts, right and wrong etc. can be part of the expressive and emotional force in persuading people to have such desires and not others."
ReplyDeleteDid you mean that moral terms play this role (and only this role) in DU?
RichardW,
Nobody is "authoritative." Desire utilitarianism is not a corporation with a public relations secretary.
But Alonzo and faithlessgod have been thinking about DU longer than I have, and faithlessgod's comments on this post represent DU fairly, I think.
I will keep trying to explain DU to you in different ways until you "get" it, and I hope Alonzo and faithlessgod will as well, since explaining the theory from different angles only helps to clarify it for everyone - including me!
In the meantime, I'll get back to drafting my latest email response...
There seems to be some confusion on the descriptive and prescriptive aspects of desire fulfillment theory.
Here are the two main points:
People act so as to fulfill the most and strongest of their desires, given their beliefs.
People seek to act so as to fulfill the most and strongest of their desires.
The desires provide the ends or the goals. False or incomplete beliefs sometimes get in the way - causing a person to act in ways that fail to achieve their ends.
Thus, we have reason to ask, "What should I do?"
Given a particular set of desires, and the possibility of false or incomplete beliefs interfering with the fulfillment of those desires, what should I do?
Or, "What would a person with my desires and true and complete beliefs do?"
Of course, one of the things that a person should do is, where practical, promote in others those desires that tend to fulfill other desires, and inhibit in others those desires that tend to thwart other desires.
We form a part of a community, with a common language. One of the things we recognize is that there are desires that tend to fulfill other desires, and desires that tend to thwart other desires. Those desires are malleable - we can promote and inhibit certain desires by bringing social forces to bear.
So, there is a subset of questions of the form "What should I do?" that effectively take the form, "What desires should we promote or inhibit in people generally?"
In the same way we ask, "What would a person with my desires and true and complete beliefs do?" it is also sensible to ask the question, "What would a person with those desires that people generally have reason to promote, and lacking those desires that people generally have reason to inhibit do?"
Would they torture?
On a minor matter...
faithlessgod wrote: "When someone asks for an explanation one can either reply with what did motivate them or why they were justified in so acting. Hence a request for an explanation is a request for either the motivations or justifications. However it is important to distinguish between motivations and their justification."
If someone replies with why they were justified in so acting, I would call that a justification, not an explanation. The online Stanford Encyclopedia of Philosophy speaks of explanatory and justifying reasons (plato.stanford.edu/entries/reasons-just-vs-expl/), and I personally find that clearer. But I'll use your preferred terms here (if it arises).
Hi RichardW
I am not more authoritative on this than anyone else including Alonzo. We all expect him to be better because he has thought about this longer, as well as discovered it and is a very able communicator of ideas, better than me anyway.
However if DU is to be promoted, we all need to get our own reasonable take on it and express it in our own words. It just might be that my style and approach make more sense to you than Alonzo's. Let's see.
By "extended rationality" I mean that unlike "instrumental rationality" which does not evaluate ends, it does. However it still uses the same methods as instrumental rationality with only one additional step required - to treat an end as a means. This can be done with any end, by treating it as means with respect to other ends. Then the standard instrumental analysis (means-end analysis).
The question of application then becomes one of scope - which ends? If it is the other ends of the person whose end is being evaluated it is a prudential evaluation - whether performed by the agent himself or an assessor.
Now if the scope is unrestricted then it is anyone's and everyone's ends without bias or exception. A label "moral" seems the most appropriate label to give such an evaluation. Any assessor should, if being epistemically objective, come up with the same evaluation, granted the usual empirical challenges and issues. (And I learnt about what I call "extended rationality" here from Alonzo).
If we are starting from scratch we had better not prejudge the corruption issues noted in your quote of me. We just note that everyone uses persuasion and power to benefit their own ends. The question of the abuse of such power and persuasion is another way of framing the problem of morality, at least the one I am interested in addressing. That is, morality is about how we determine what is "abuse" and what to do about it.
DU is built upon a descriptive explanation of how people behave. A combination of belief-desire psychology (are you familiar with that?), extended rationality and the desire fulfilment theory of value (are you familiar with this - at least Fyfe's version?).
"Is it also a rational programme of action, telling us what to do in order to achieve certain ends?"
Yes, in a sense. The descriptive analysis can predict (and descriptions are improved by testing predictions) what will occur, and these predictions become the basis of recommendations, the same as in any other empirical pursuit.
"Is it a moral calculus, telling us how to work out what it is moral to do?"
Yes in a sense, provided you mean moral as I noted above. I would not call it a calculus like a felicific calculus though. However there is no direct utility to maximise, since value is plural and indeterminate - it is whatever people value - rather this addresses the impediments and aids in realising these values.
"If it is a programme of action and/or a moral calculus, could you please summarise briefly what the programme and/or calculus say."
I have started this above but note I find this comment system may not be the best way to proceed. Let's see if we are getting somewhere and then I could make a suggestion or two on how to proceed better.
"I now have no idea what sort of rational analysis you are talking about. Could you please give an example."
If my end is to blow up the world, is this end justified? We evaluate my end by treating it as a means against all ends affected, in this case everyone's. How are their ends affected by my end-treated-as-means: is it a means to help or hinder their ends? It clearly will hinder everyone's ends permanently. So my end has a negative justification when the scope is unrestricted - which I label (optionally) a moral evaluation.
The next step, not asked, is: does everyone have a reason to deter or encourage me from realising my end?
The above evaluation provides the answer. One could make a prediction here, that everyone would be motivated to stop me.
One way would be to tell me I am wrong. If I ask why, then this could be explained: I am thwarting everyone's ends, as the above analysis shows. What if I do not care, and only care about my own ends? This is where power and persuasion come in... Either I can be persuaded not to have this end, failing that power is used to stop me, or if that fails I blow up the world.
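A minimal sketch of that evaluation, with invented desires, strengths and effect scores (the +1/-1 scoring is my own stand-in for "helps or hinders", not a formula from the discussion): a strongly negative total under unrestricted scope is what is here being called a negative justification, and it also predicts that everyone else has a reason to deter the agent.

```python
# Hypothetical sketch: evaluate one end by treating it as a means and scoring
# its effect on every other desire in the chosen scope (+1 helps, -1 hinders,
# 0 neutral). The desires, strengths and effects below are invented.

def evaluate_end(candidate, scope, effects):
    """Sum strength * effect over every other desire in the scope."""
    return sum(strength * effects.get((candidate, other), 0)
               for other, strength in scope.items()
               if other != candidate)

# Unrestricted scope: everyone's ends (toy examples).
scope = {
    "blow up the world": 1.0,
    "stay alive": 5.0,
    "raise children": 4.0,
}
effects = {
    ("blow up the world", "stay alive"): -1,
    ("blow up the world", "raise children"): -1,
}

print(evaluate_end("blow up the world", scope, effects))   # -9.0: it hinders every other end
```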
Hello faithlessgod. Thank you very much for your reply, and for a lot of useful clarifications. The most important clarification for me was that DU is based on a rational evaluation of means and ends which is initially expressed without moral terms, and the use of moral terms is only justified (if at all) afterwards. This was what I had understood from your first post, but then became uncertain about after your second.
Perhaps I should say something about where I'm coming from. I first heard of DU from Luke in discussion at his blog, and what got my attention was his claim that DU makes moral claims that are objectively true and rationally justified. Since I consider this impossible (because of the is/ought divide) I have been challenging this claim.
> DU is built upon a descriptive explanation of how people behave. A combination of belief-desire psychology (are you familiar with that?), extended rationality and the desire fulfilment theory of value (are you familiar with this - at least Fyfe's version?). <
No, I'm not familiar with those. But I have no reason to challenge DU's descriptive explanation, and for the sake of argument I'll accept that it's rationally justified and true.
On the rational evaluation of means and ends, I have two issues to raise. The first is I think a minor terminological one. I still dislike your choice of the word "justified" as applied to desires (or ends), but I have no objection as long as you're not attaching any ultimate significance to the word. Luke (if I've understood correctly) has called such desires "morally good" desires, which risks begging the question of what is moral.
The second issue is a fundamental objection:
> The question of application then becomes one of scope - which ends? If it is the other ends of the person whose end is being evaluated it is a prudential evaluation - whether performed by the agent himself or an assessor. <
> Now if the scope is unrestricted then it is anyone's and everyone's ends without bias or exception. A label "moral" seems the most appropriate label to give such an evaluation. Any assessor should, if being epistemically objective, come up with the same evaluation, granted the usual empirical challenges and issues. <
Before proceeding, I want to clarify two points. First, at some point we switched from talking about "desires" to talking about "ends" and I want to check whether you think that's a significant distinction. I'll assume for now that it isn't. Second, I assume the "ends" you're referring to here are only ultimate ends. If you were including ends-as-means, then the prudential evaluation would have to consider the ends of people other than the subject of evaluation. (By the way, the prudential case is what I had in mind earlier when I referred to "pursuit of self-interest".)
Now my objection is that there is no rational justification for the choice to consider everyone's ends "without bias or exception", i.e. to treat the ends of every person equally. There is no justificatory reason for a slave-owner to treat slaves' ends on an equal basis with his own.
This is the point I was trying to make in my first reply to you, when I pointed out that hypothetical oughts are only concerned with "the means for a person to achieve his/her own ends".
So I conclude that DU's unrestricted rational evaluation lacks an objective basis, and so do any moral claims based on it.
RichardW
First, a correction: my last post was written in haste and I want to replace the first paragraph of that reply with:
"I am not more authoritative on this than anyone else. I regard as Alonzo as authoritative in the sense he has thought about this longer, as well as discovered it and is a very able communicator of ideas, better than me anyway. This does not mean I might not agree with him on specific points or applications of DU"
Now to your last comment.
"Since I consider this impossible (because of the is/ought divide) I have been challenging this claim."
One needs to make an is-ought distinction but it is un-empirical to define away the possibility a priori.
As Hume noted, arguments that derive "ought" from "is" need an explanation to make their case. However it is quite acceptable to derive an ought from an ought and that is all DU does. All desires are in this sense "oughts". Hence:
A desires that P
Action X is the only means to bring about P
Then A ought to X
and of course there are no categorical oughts.
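Written compactly (the predicate names are my own shorthand, not notation from the discussion):

```latex
% Hypothetical shorthand:
% Desires(A, P)   = "A desires that P"
% OnlyMeans(X, P) = "doing X is the only means of bringing about P"
\[
  \mathrm{Desires}(A, P) \;\wedge\; \mathrm{OnlyMeans}(X, P)
  \;\Rightarrow\; \mathrm{Ought}(A, X)
\]
```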
"Luke (if I've understood correctly) has called such desires "morally good" desires, which risks begging the question of what is moral."
Once you know what Luke means you should understand how he is using it. We are talking about the evaluation of desires - the value of value - and evaluation naturally uses the terms good and bad, justified and unjustified. You can call this moralDU or goodDU or whatever; they are just a useful shorthand here, although they do provide a better explanation of the conventional use of these terms than any other I have seen to date.
"First, at some point we switched from talking about "desires" to talking about "ends" and I want to check whether you think that's a significant distinction."
Only desires specify ends, beliefs do not. The combination of beliefs and desires comprise a reason to act, the reason to act that wins comprises the intention, which is carried out in an intentional (voluntary) action.
What you call ultimate ends in ethics is now called final ends (to avoid equivocation over intrinsic). It is these I am specifically referring to.
"If you were including ends-as-means, then the prudential evaluation would have to consider the ends of people other than the subject of evaluation."
Then this is not a prudential evaluation as I stipulated it - the scope is restricted to the agent. So I am not sure what you mean by ends-as-means here unless you mean instrumental means?
"Now my objection is that there is no rational justification for the choice to consider everyone's ends "without bias or exception", i.e. to treat the ends of every person equally. There is no justificatory reason for a slave-owner to treat slaves' ends on an equal basis with his own."
This was not the question being dealt with. That it is quite possible to perform such an evaluation is all that was being established.
One needs additional assumptions and justifications in order for the slave owner to evaluate his ends-as-means without including his slaves etc., however natural and acceptable it appears to him. Rationally he needs to justify the exceptions and biases. More work is required to do that.
"This is the point I was trying to make in my first reply to you, when I pointed out that hypothetical oughts are only concerned with "the means for a person to achieve his/her own ends"."
There is no a priori reason for why hypothetical oughts need to be restricted that way. What is the reason to add this restriction?
"So I conclude that DU's unrestricted rational evaluation lacks an objective basis, and so do any moral claims based on it."
I fail to see how this conclusion follows. Your argument lacks the additional reasons and justification to make these restrictions. Maybe you can successfully supply them, but that presupposes that such an unrestricted analysis is already objective; your additions would be parasitical on this framework.
That is, how could any restricted version with additional entities or assumptions be objective, if this is not already?
It is certainly possible to carry out such an analysis, and be done with as much empirical rigour as possible in this, with issues no different to any other empirical practice.
> One needs to make an is-ought distinction but it is un-empirical to define away the possibility a priori. <
I agree. I wasn't defining it away, only expressing my revisable judgement.
> Only desires specify ends, beliefs do not. The combination of beliefs and desires comprise a reason to act, the reason to act that wins comprises the intention, which is carried out in an intentional (voluntary) action. <
Thanks for explaining your terms. I'll endeavour to use them the same way.
> Then this is not a prudential evaluation as I stipulated it - the scope is restricted to the agent. So I am not sure what you mean by ends-as-means here unless you mean instrumental means? <
I'll try to explain more carefully what I meant. And I'll revert to the language of desires and actions, because I'm concerned that the switch to the language of ends and means may have caused a problem.
Consider a rational evaluation E that aims to determine for a given agent which of his possible actions will best fulfill his desires. If it is to be as sound as possible, it should take into account all relevant information about the world, including the desires of other people, since their desires will affect their behaviour and may therefore affect the outcome of his actions. Furthermore, his actions may influence other people's desires, having a further effect on the outcome of his actions.
Now consider a rational evaluation E- that is the same as E except that it omits any consideration of other people's desires.
I was concerned with the question whether your prudential evaluation was E or E-, and I took it be E. Perhaps it is actually neither of those, in which case would you please elaborate on it. Could you also please tell me, if I replaced the language of desires and actions with the language of ends and means in the account above, would it make any significant difference?
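To pin the distinction down, here is a hypothetical sketch; the job scenario, desires and predicted outcomes are invented purely to contrast E with E-:

```python
# Hypothetical sketch of E versus E-. Both score actions only by the agent's
# OWN desire fulfillment; E lets other people's desires shape the predicted
# outcome of each action, E- ignores them. Everything below is invented.

def best_action(actions, own_desires, predict_outcome):
    """Pick the action whose predicted outcome best fulfills the agent's desires."""
    def score(action):
        outcome = predict_outcome(action)   # the set of propositions made true
        return sum(w for p, w in own_desires.items() if p in outcome)
    return max(actions, key=score)

own_desires = {"keep my job": 3.0, "be liked by coworkers": 2.0}
actions = ["take credit", "share credit"]

def predict_E(action):
    # E models coworkers' desires: taking credit thwarts them, they retaliate,
    # and "be liked by coworkers" fails.
    return {"take credit": {"keep my job"},
            "share credit": {"keep my job", "be liked by coworkers"}}[action]

def predict_E_minus(action):
    # E- omits other people's desires, so it misses the retaliation.
    return {"take credit": {"keep my job", "be liked by coworkers"},
            "share credit": {"keep my job", "be liked by coworkers"}}[action]

print(best_action(actions, own_desires, predict_E))        # share credit
print(best_action(actions, own_desires, predict_E_minus))  # take credit (a tie - it just picks the first)
```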
A couple of potential issues that I realise I haven't addressed here:
1. I've treated all the agent's desires as a set. Perhaps I need to consider them individually. But I think we can assume the agent has a set of priorities which allows outcomes to be assessed in terms of maximising a weighted average over all of his desires.
2. I haven't considered the possibility that the agent's actions will lead--directly or indirectly--to changes in his own desires.
To clarify a couple of other terms: I used "ultimate ends" to mean the ones which the rational evaluation is aimed at achieving (the agent's in this case), and "ends-as-means" to refer to the other relevant ends (other people's in this case). Better terms for these concepts would be welcome, if you have any to hand.
I'll wait till we've sorted out these terminological questions before proceeding to the more contentious problem of your "unrestricted" case (which I assume is the evaluation you refer to as "extended"). I'm saving your other replies until then. They haven't been ignored. ;)
P.S. Re-reading your last couple of posts, I now think I understand you much better. I can see that I caused tremendous confusion by using the term "ends-as-means" in a way quite different from how you had previously used it. I'm sorry for that. I think if you can just confirm that E does describe your prudential evaluation, then I'll be ready to continue.
Hi RichardW
I have not read your last two posts but wanted to say that I am finishing off a debate on an entirely different topic and will get back to you next week on this.
I have two more posts on the topic below and then it depends on any responses.
If you, or anyone here, is interested, see A debate with Tom Gilson on Euthyphro and God
RichardW and faithlessgod,
I love the conversation you are having here; thank you both.
RichardW, every time you quote me, you make it sound like I said something I at least did not intend to say, which confirms your assertion that I did not make myself clear to you. I'm glad faithlessgod's ways of putting things are making more sense to you.
Keep it up, guys!
RichardW
I was using what I labelled "prudential" evaluation as an example of a certain scope. I did mean what you call E- and not E. There are still two versions of prudential evaluation: (a) compared to all other desires the agent currently has and (b) compared to their future desires too. So there are different scopes possible with extended rationality, and the largest scope I call unrestricted, which evaluates a desire under examination against everyone's (extending the scope to animals is another question and would muddy the waters now). All I wanted to establish with extended rationality is that an unrestricted scope evaluation is possible and is just as empirical - with all the usual challenges of that - hence objective as with any other scope.
Back to you.
Alonzo
I humbly request again that you switch on embedded comments. It is one switch in your Blogger dashboard settings, takes less than a minute and is quite stable. It certainly helps in the longer comment threads such as this one.
Luke,
> RichardW, every time you quote me, you make it sound like I said something I at least did not intend to say, <
I'm sorry. I don't mean to.
> ...which confirms your assertion that I did not make myself clear to you. I'm glad faithlessgod's ways of putting things are making more sense to you. <
Well, faithlessgod didn't undertake the task of defining moral terms. Given that you undertook what is (in my opinion) an impossible task, it's not surprising that we didn't make any progress. ;)
faithlessgod
OK. I think I get you now. I'm really only interested in the distinction between single-agent evaluations and multiple-agent evaluations, because that's where the question of competing interests (and therefore "morality") comes in. By single-agent I mean that the evaluation is only concerned with the fulfillment of that one agent's desires. Other people's desires are only taken into account in so far as they have any effect on the fulfillment of the agent's desires. Let's assume for the sake of argument that we're talking about the widest possible scope within each of these two categories. Now I'm ready to make my argument again more clearly, and address your previous objections.
In a multiple-agent evaluation, some formula is needed to weigh up the competing desires of different agents. If we were talking about evaluating the morality of actions we would call this formula a moral calculus, but I'll use the word formula so as not to prejudge the question of whether we are talking about morality. Single-agent evaluations do not require such a formula (I'll return to this later).
DU tells us to use a particular formula for multiple-agent evaluations, one of the features of which is that everyone's desires are treated equally. If DU's system of evaluation is to be considered rationally justified (as its proponents claim) then the choice of formula needs to be rationally justified. I've seen no such justification, and I don't believe one is possible.
> It is certainly possible to carry out such an analysis, and be done with as much empirical rigour as possible in this, with issues no different to any other empirical practice. <
Regardless of how rigorously you apply the formula and how empirical the input data are, if the formula itself is not rationally justified then neither are the results you get from applying it.
> One needs additional assumptions and justifications in order for the slave owner to evaluate his ends-as-means without including his slaves etc., however natural and acceptable it appears to him. Rationally he needs to justify the exceptions and biases. More work is required to do that. <
Since you require the slave-owner to justify his choice of formula, you seem to recognise the need for justification. But why does the slave-owner need to give one? That presupposes that he has a reason to undertake a multiple-agent evaluation at all. You need to justify your choice of formula because you are claiming that a multiple-agent evaluation is rationally required. The slave-owner is not making any such claim.
> That is, how could any restricted version with additional entities or assumptions be objective, if this is not already? <
No multiple-agent evaluation can be fully objective, in my opinion, because there are no empirical facts that give rise to a formula for weighing competing agents' desires.
You may ask how a single-agent evaluation handles the problem of weighing multiple desires within one agent. In that case no prior formula is needed. The agent's formula is inherent in the facts about his desires. The relative importance to him of his various desires is a matter of empirical fact (given that desires are matters of empirical fact).
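To make the two formulas explicit, here is a hypothetical sketch (agents, desires, strengths and weights are all invented; the equal-weight version stands in for treating everyone's desires equally, and the biased version for the slave-owner's alternative):

```python
# Hypothetical sketch, not a formula from the discussion. A state is the set of
# propositions true in it; each agent's desires map propositions to strengths.
# All agents, desires and numbers below are invented.

def sae_score(state, agent_desires):
    """Single-agent evaluation: the weights are simply the agent's own desire
    strengths, so no prior formula is needed."""
    return sum(w for p, w in agent_desires.items() if p in state)

def mae_score(state, all_agents, agent_weights=None):
    """Multiple-agent evaluation: some formula must weigh agents against each
    other; the default here weighs every agent equally (weight 1.0)."""
    weights = agent_weights or {}
    return sum(weights.get(name, 1.0) * sae_score(state, desires)
               for name, desires in all_agents.items())

all_agents = {
    "owner": {"free labour": 3.0},
    "slave": {"be free": 5.0, "stay alive": 5.0},
}
manumission = {"be free", "stay alive"}
slavery = {"free labour", "stay alive"}

print(mae_score(manumission, all_agents), mae_score(slavery, all_agents))
# 10.0 8.0 -- equal weights favour manumission

biased = {"owner": 1.0, "slave": 0.1}
print(mae_score(manumission, all_agents, biased), mae_score(slavery, all_agents, biased))
# 1.0 3.5 -- a formula that discounts the slave favours slavery
```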
Hi RichardW
Glad we are making some progress.
I would say there are two distinctions, not one, between single-agent, group-of-agents, and universe-of-agents. Your multiple-agents can oscillate between the last two categories. I will assume that your multiple-agents means universe-of-agents? Both these distinctions are relevant to competing interests. I look at morality as questions about conflict avoidance, evasion and resolution, cooperation, competition and collaboration, so I want a framework that can objectively assess these, and this unrestricted extended rationality framework provides this.
The requirement for some formula for multiple-agent evaluations is an added restriction on an unrestricted scope analysis. The default is to weight them equally - on the principle of parsimony - with no bias or exceptions, both of which need additional argument and evidence to establish.
This default is the most general framework to rationally evaluate desires, within which different biases and exceptions can also be rationally evaluated as well as the desires themselves - but all arguments for biases and exceptions are driven by desires too (or the lack of them).
Since any other rationally justified version assumes this framework and adds additional constraints, it assumes that this framework is rationally justified.
How else could any more restrained version work otherwise?
"Since you require the slave-owner to justify his choice of formula, you seem to recognise the need for justification."
We can use this framework to see what justifications the slave owner's evaluation requires - whether explicit or tacit, argued for or assumed.
Whatever justifications the slave owner gives is whatever he gives. We are not saying here that he needs to give one. Just that if he were we want the most impartial means to evaluate which this framework provides. We can use it to compare and contrast justifications from different slave owners, dependants, managers, workers, slaves, free citizens. Selecting only the slave owner's view and excluding everyone else's would be a bias, and a rational and objective approach seeks to identify and eliminate bias.
This framework is a means to see whether anyone has a justification for the biases they have in doing the evaluations.
"No multiple-agent evaluation can be fully objective, in my opinion, because there are no empirical facts that give rise to a formula for weighing competing agents' desires."
Another reason not to weigh different desires differently. Your argument makes no sense: how is this not objective, when it is constructed from seeking the most objective way of understanding these various interactions?
"You may ask how a single-agent evaluation handles the problem of weighing multiple desires within one agent. In that case no prior formula is needed. The agent's formula is inherent in the facts about his desires. The relative importance to him of his various desires is a matter of empirical fact (given that desires are matters of empirical fact)."
You seem to addressing a different question now.
If you are arguing for a world where agents only apply single-agent evaluation you need to be clear how these are done - with narrow self-interest or enlightened self-interest and other key assumptions. All these assumptions and others can be tested under such a framework as the one suggested here, but you have suggested no alternative at all. If you are arguing for an alternative, what is it?
> Glad we are making some progress. <
Me too!
> I would say there are two distinctions, not one, between single-agent, group-of-agents, and universe-of-agents. Your multiple-agents can oscillate between the last two categories. I will assume that your multiple-agents means universe-of-agents? <
Well, it doesn't really matter to me, because my arguments apply to all of your multiple-agent evaluations (MAEs). But, for the sake of being specific, let's say that it's your universe-of-agents.
And when I refer to a single-agent evaluation (SAE), I mean the widest-scoped SAE. I assume an agent is conducting a SAE because he wants to know the best way to fulfill his own desires, so he will want to take into account all the factors that he possibly can. And just to be absolutely clear, by "single-agent" I mean that only that agent's own desires are considered as final ends; other people's desires are only considered as means to his ends.
> The default is to weight them equally - on the principle of parsimony - with no bias or exceptions both of which need additional argument and evidence to establish. <
I'm going to take that as a recognition that your choice of equal weights needs to be justified, and that your justification is an appeal to parsimony.
If you don't mind, I'm going to put the question of weights on hold, and switch to the logically prior question of why anyone would carry out a multiple-agent evaluation (MAE) in the first place. I've realised that would have been a much better place to start.
> Whatever justifications the slave owner gives is whatever he gives. We are not saying here that he needs to give one. Just that if he were we want the most impartial means to evaluate which this framework provides. <
"If he were...". But why would he? Your MAE is supplying the answer to a question that no one has any motivation to ask. If there are no motivating reasons to employ your MAE, and no justificatory reasons (and you haven't given any), then there is no reason to employ it. In that case, what's the point of it?
Of course, there could be a motivating reason if you persuaded someone to adopt your system of MAEs. That act of persuasion has changed his desire set to include the desire to follow MAEs. Next time he needs guidance he may carry out a MAE (if the desire is strong enough). Let's say the MAE tells him to do action A. But he also has a desire to do B. What to do? The rational thing is to conduct a SAE which tells him whether to do A or B. Effectively, in this case, the MAE has just become one input to his SAE. He still has no motivation to conduct an MAE as his primary method of evaluation, only as part of his SAE. It's the SAE that looks at the larger picture (for him), taking into account all his desires, of which the desire to act on a MAE is just one.
In any case, since we're talking about a motivating reason here, not a justificatory reason, the question of rationality doesn't come into it. He could have adopted any MAE that suits him, such as one that gives lower weights to the desires of slaves.
> If you are arguing for a world where agents only apply single-agent evaluation, you need to be clear how these are done - with narrow self-interest or enlightened self-interest, and other key assumptions. <
I'm not arguing in favour of any world. I suppose I'm saying that the way the world is, is that a person acts only in accordance with his desires, if you include all motivating factors as desires. (Isn't that what Alonzo says too?) So desires include the desire for the welfare of others (empathy) and the desire to act in accordance with the dictates of one's conscience ("morality"). The most rational guide to follow is the best available SAE, as that enables me to best fulfill my desires.
If you are still not convinced, I'm going to say the ball is in your court, and ask you for a rational justification (a justificatory reason) for employing a MAE.
P.S. A SAE answers the question "What is the best way to fulfill my desires?"
What question does a MAE answer (if any)?
I hate to interrupt, but the question that a MAE answers is pretty straight-forward. "Given that everyone uses SAEs, what desires should I try to give people in order to best fulfill my desires?"
Hi RichardW
I think we are talking at cross purposes here.
I was talking about extended rationality and that is all. Extended rationality is a framework to assess final ends; that is it.
It shows, for a given question, what the relevant scope is and how to rationally determine what is justified or not - that is, how to assess or evaluate justificatory claims. It makes no sense to argue about whether extended rationality is justified or not.
You seem to be confusing motivations with justifications. No-one here denies, indeed most assume, that everyone seeks to substitute a more fulfilling state of affairs for a less fulfilling one. They do this by seeking to fulfil the more and stronger of their desires, and act to do this, given their beliefs. How they do this is called practical reasoning.
Extended rationality applies to any evaluation of ends, including assessment of any justification of ends. Practical reasoning - means-end or instrumental rationality - cannot and does not address these questions.
For example, an assessor can sometimes provide a better evaluation of an agent's prudential justification than the agent can - classically in the many examples of addiction, mental illness or suicidal agents. Extended rationality provides the framework to do this.
The same goes for assessments of sporting teams by players, coaches, fans and spectators; and for assessments of one group by another - between groups, between individuals and groups, between companies, between countries and so on.
Of course, whatever evaluation the assessor makes of the agent, team or group etc. makes no difference if the relevant parties do not internalise (or expunge) the resultant justified reasons so that they become (or cease to be) motivating reasons. How to do that is a separate question from extended rationality.
It makes no sense that we can do all that and simply deny that there is any such thing as a universal or unrestricted scope. All the above analyses are enabled by such an initially impartial and objective perspective, and it is through that perspective that the scope of the evaluation in question is determined.
How ethical and moral claims relate to a universal scope or a restricted scope is a point of discussion, which we can have within extended rationality once we know how it works.
Reading between the lines, your (to me) strange warping of this, if you don't mind, into SAE versus MAE, and the peculiar point you seem to be making that extended rationality is not rationally justified, look like you are coming from a position of psychological egoism - even though that has no bearing on extended rationality. If you are not arguing for an implausible psychological egoism then please enlighten me as to what your key issue is, as I otherwise have no idea.
Hi again, faithlessgod.
> I think we are talking at cross purposes here. <
Yes, I'd better ask you some questions to clarify my understanding of what you're saying.
1. You've mentioned a number of methods of evaluation with various scopes. As I understand it, these are methods I can use to tell me what action to perform in a given situation. Could you please confirm if that's correct. If it isn't, please tell me what is the purpose of conducting this type of evaluation?
2. You've used the terms "extended rationality" and "extended rationality evaluation". I had previously understood this as referring to a sub-set of your methods of evaluation, i.e. that your methods could be divided into two types: basic and extended. However, in your latest post you wrote: "It [extended rationality] shows, for a given question, what the relevant scope is." This suggests that "extended rationality" also refers to a process of selecting between your various methods of evaluation. Is that correct? Please give some examples of the questions you have in mind and the corresponding relevant scopes.
It seems like this would address the question I asked earlier:
> P.S. A SAE answers the question "What is the best way to fulfill my desires?"
What question does a MAE answer (if any)? <
3. Would you agree that your various methods of evaluation can be divided (without overlap) into SAEs and MAEs, i.e.
(a) those methods that consider the desires of only one person as final ends (ends in themselves), and
(b) those methods that consider the desires of more than one person as final ends.
Don't worry for now about whether that's a useful dichotomy. (I think it's useful to my argument.) I just want to know whether it is a valid dichotomy, i.e. that each of your methods can be assigned to one and only one of those categories.
4. You wrote: "It makes no sense to argue about whether extended rationality is justified or not." I'm only arguing that MAEs are not rationally justified. You're not equating extended rationality with MAEs, are you? As I understood it, some types of SAE also use extended rationality.
Hi Eneasz. I'm glad this discussion is of interest to onlookers.
You wrote:
> I hate to interrupt, but the question that a MAE answers is pretty straight-forward. "Given that everyone uses SAEs, what desires should I try to give people in order to best fulfill my desires?" <
That's a question about how to fulfill my own desires, so it's answered by the best possible SAE. (A SAE that doesn't answer that question isn't the best possible SAE.)
Hi RichardW
A. First of all, are you coming from a position of psychological/ethical egoism or not? If not, what position are you coming from?
1. All a rational analysis of any means or end can tell you is its value with respect to some specified ends. There is one type of method; different scopes address different ends to evaluate something against. These provide reasons to act given the question at hand, which specifies the scope. Whether the agent(s) do this or not, or agree with it or not, is another matter.
2. "Extended rationality" is the framework for evaluating means and ends. Instrumental rationality is not so much a sub-set of extended rationality as a specific set of scopes. I have already given various examples of scope.
You said:"P.S. A SAE answers the question "What is the best way to fulfill my desires?"I thought I addressed this in my last comment
You said:"What question does a MAE answer (if any)?"Any time anyone makes claims, assesses or appraises claims about others ends, groups or singular, they are specifying the scope of agents involved - say a team, a religion competing political parties or whatever they are using extended rationality. We can compare and contrast their claims using extended rationality, practical rationality cannot do this.
3. "Would you agree that your various methods of evaluation can be divided (without overlap) into SAEs and MAEs",No I do not agree as the above should make clear, you are still confusing motives and justifications.
4. "I'm only arguing that MAEs are not rationally justified. This makes no sense! If someone is making an MAE claim then what else is there to use but extended rationality? The MAE claim may or may not be rationally justified, that is what extended rationality can empirically and objectively show.
P.S. faithlessgod, I think it would help if we got more specific. Please list all your different methods of evaluation, i.e. all the possible "scopes" you can think of. If the list would be too long, just list the half-dozen or so most important ones. Then indicate which of those you refer to as "extended". Please also tell me what question each of them is intended to answer. I will respond by telling you which of them I call SAEs and which MAEs (or you could do that yourself based on my definition, and I'll tell you if I agree).
It may be helpful to summarise my current position as simply as I can, without using any special terms.
Given that we are only motivated by our desires, the rational policy to follow is one which maximises the fulfillment of one's desires. This includes desires for other people's welfare and desires to be moral. The perfectly rational policy would be one which takes full account of all available information about the world, including other people's desires. But only one's own desires are treated as final ends. Other people's desires are only treated as means to achieving one's own ends.
In practice, we can only approximate to the perfectly rational policy, given the limits of our knowledge, our inferential abilities and the time we're willing to spend analysing the situation. Apart from these sorts of limitations, deliberately choosing some other policy would not be rational.
Some of the policies (methods of evaluation) that faithlessgod is proposing involve treating other people's desires as final ends. Since this is a deliberate deviation from the rational policy, they are not rational policies.
I have been answering your questions; how about you do me the courtesy of answering mine? Is your position that of psychological egoism or not? If not, what is it?
You seem to be bizarrely fixated on the idea that one cannot evaluate one person's ends against another's, as if this is not rational, but this makes no sense. We do it all the time.
The reasons why you like one sport or another, and one team or another, are your ends, but once you have such an interest the rational evaluation of the team is independent of your ends. You are evaluating the players and the manager against the requirements of the sport - whether it wins games, leagues and cups.
And the same goes for many other endeavours such as science.
You write as if it is only possible to have agent-relative reasons and anything else such as agent-neutral and assessor-neutral reasoning is impossible, but this is absurd.
Science would not exist if this were the case. Science's epistemic objectivity requires agent-neutral and assessor-neutral reasoning on behalf of the scientist, otherwise they are not being (epistemically) objective - they are letting their ends corrupt the experiments or results. That they have certain ends - that they have become a scientist and are pursuing certain research - is a matter of their ends, but to confuse the two is to commit the genetic fallacy.
There are many names for what I am talking about; I just like the label extended rationality, which is not an original invention - many references on the internet are similar, but not all.
Hi faithlessgod,
> I have been answering your questions; how about you do me the courtesy of answering mine? Is your position that of psychological egoism or not? If not, what is it? <
I've only just seen your previous post. Our last couple of posts crossed (from my point of view). I'm not familiar with the term psychological egoism, so I'll leave it to you to judge whether the view I expressed can be so called.
> 3. "Would you agree that your various methods of evaluation can be divided (without overlap) into SAEs and MAEs", No I do not agree as the above should make clear, you are still confusing motives and justifications. <
I don't understand your objection. All I'm trying to do is divide your evaluations (scopes) into two categories. If the way I've described my division is unclear or incoherent, please list your various scopes, and I'll indicate which ones I consider to be in each category.
> You said: "What question does a MAE answer (if any)?" Any time anyone makes claims, assesses or appraises claims about others ends, groups or singular, they are specifying the scope of agents involved - say a team, a religion competing political parties or whatever they are using extended rationality. We can compare and contrast their claims using extended rationality, practical rationality cannot do this. <
I get that an MAE assesses ends. Using your earlier terminology, I assume you mean it evaluates how "justified" they are. But then what? Does it give us any guidance on how to act? If so, then there must be another question that it is answering (about how to act). If not, what use is it?
> This makes no sense! If someone is making an MAE claim then what else is there to use but extended rationality? The MAE claim may or may not be rationally justified, that is what extended rationality can empirically and objectively show. <
But what is an MAE claim? If it's a claim about whether an end is "justified", then why should I be interested in such a claim? Remember, we haven't yet introduced any concept of "morality", so "end X is justified" is not a moral claim. And an end is not a proposition or an action, so we can't talk about whether it's rationally justified in the senses we normally mean by that. What does "justified" mean in this context?
By the way, could I ask you please to check your posts for typos. They often make it hard to understand you.
Would an MAE claim be something like, for instance, "This school wants to give its students the best education it can"? If every individual that can be considered part of "this school" agrees in their own SAE with giving the students of the school the best education they can, it's probably a true MAE claim.
If not, I think I'm completely lost in your arguments . . . but nonetheless fascinated.
I forgot a couple of things.
Question: if I'm conducting a SAE which takes into account other people's ends purely as means to the agent's end, do you call that an "extended" evaluation?
> You are evaluating the players and the manager against the requirements of the sport - whether it wins games, leagues and cups. <
Sure, that's no problem. You're evaluating those things with respect to a certain end, e.g. how good the player is at winning games. In the case of an MAE, what is the end with respect to which you are evaluating?
RichardW wrote:
ReplyDelete"Given that we are only motivated by our desires...This includes desires for other people's welfare and desires to be moral...But only one's own desires are treated as final ends. Other people's desires are only treated as means to achieving one's own ends."
This is true. Each person seeks to act so as to fulfill the most and strongest of his or her own desires. The desires of another person cannot be the direct cause of our intentional actions. We may consider the desires of others, but only insofar as the desires of others are the objects of our own desires.
"...the rational policy to follow is one which maximises the fulfillment of one's desires."
I am willing to simply define practical rationality as rationality based solely upon the desires of the agent.
"The perfectly rational policy would be one which takes full account of all available information about the world, including other people's desires."
No. There can be irrelevant data. I do not need to know the number of planets that existed in the universe 5,987,234,983 years ago to be rational. Indeed, this would make practical rationality wholly impractical.
"In practice, we can only approximate to the perfectly rational policy, given the limits of our knowledge, our inferential abilities and the time we're willing to spend analysing the situation."
True. Except for the phrase "willing to spend". It may be the case that a person may not be willing to spend time that he should spend. However, it is still the case that we have a limited amount of time available.
"Apart from these sorts of limitations, deliberately choosing some other policy would not be rational."
But why should we be rational in this sense?
If we ask, "Why practical-rational-should I be rational in this sense," the question is a tautology.
But we could ask if there are other senses of "should" other than the practical-rational sense.
With respect to location, I can only speak intelligibly about how I would get to some other location by looking at directions that start (at least implicitly) at my current location.
However, I can still speak intelligibly about directions from other locations other than my own. I could give my brother, for example, instructions on how to get to a location by considering his current location.
I can even speak intelligibly about the location of an object relative to a group of people. I can be in a conversation about people living in Mexico and speak intelligently about the fact that they would have to travel north to get to the United States.
Just as it makes sense for me to consider relationships between the location of things and the location of people other than myself, it also makes sense for me to consider relationships between states of affairs and desires other than my own. It is sensible for us to build a language where we can talk about these relationships.
"Some of the policies (methods of evaluation) that faithlessgod is proposing involve treating other people's desires as final ends."
Well, I need to look at the methods of evaluation that faithlessgod is proposing more closely. However, it does make sense to treat other people's desires as their final ends. In fact, it would be a mistake not to.
Your desires cannot motivate my actions directly. However, in predicting what will result from my actions I do have to consider the fact that you will treat your desires as ends for you.
Considering the answer you gave below:
> I hate to interrupt, but the question that a MAE answers is pretty straight-forward. "Given that everyone uses SAEs, what desires should I try to give people in order to best fulfill my desires?" <
That's a question about how to fulfill my own desires, so it's answered by the best possible SAE.
And your summary of your position:
only one's own desires are treated as final ends. Other people's desires are only treated as means to achieving one's own ends.
I can't see where we differ. This is what DU states as well. It looks like we agree and are merely arguing over word definitions, which - as Alonzo pointed out - is one of the greatest wastes of effort in ethics.
You've stated that the best possible SAE will answer what desires to try to instill in others. And you seem to be asserting that the best possible SAE is never used (due to limited resources/knowledge/etc) but people strive to use the SAE most closely approximating the best possible SAE.
I submit to you that Desire Utilitarianism is simply the search for the best possible SAE (or at least, the search for the portion of the best possible SAE that answers "what desires should I give others?"). And that it is the most effective method created so far in approximating that ideal SAE.
RichardW
I really do not understand what your issue is. I think your SAE/MAE terminology is confusing you.
We all agree on what motivates anyone, there is no issue on this.
You seem to be obsessed with denying that you or anyone can rationally assess the interactions of others by evaluating one agent's ends against others'.
This is what we do all the time. We make predictions of others' behaviours in this way, assess the results of their and our predictions, and learn and update our abilities to do this as a result.
We analyse a team's performance as a team and how individual players performed, and evaluate whether they could have done better. Did the manager execute the right strategy, was he justified or not, etc.?
We do this in film, TV and literature. We do this with our friends, work colleagues, family and strangers. We use all this in predicting how others will react to us - whether they are individuals, groups or people in general - when we apply the same methods to our own intentions and actions. We use this extended rational analysis as part of our belief set in selecting and pursuing our ends. Some are better able to do this than others, depending on how much they pay attention to others and learn from their predictions, both from afar and in their own interactions.
Every time you think of an interaction between friends and think A and B are not going to like what C is doing, you are performing an extended rational analysis.
Sufferers from autism, and to a lesser degree Asperger's, are impaired in such social skills.
So, armed with this knowledge, we may modify our choices or we may not. For example, A may go ahead with some plan knowing everyone will complain and plan to deal with their complaints; B has not paid attention, goes ahead and is unprepared for others' reactions; C decides not to pursue this end; D might try to find a way of doing this without anyone knowing; E might try to convince others they are mistaken in their disapproval; and so on. All of this is informed (or not) by prior extended rational assessments and predictions of others' reactions.
A better-informed person and a better predictor would know more accurately whether their actions will meet with approval or disapproval from the relevant other agents - that is, whether others will think their actions justified or not.
I think you are creating a false dichotomy between your SAE and MAE.
Eneasz
ReplyDelete"I submit to you that Desire Utilitarianism is simply the search for the best possible SAE (or at least, the search for the portion of the best possible SAE that answers "what desires should I give others?"). And that it is the most effective method created so far in approximating that ideal SAE."
Exactly!
Now, I have come to dislike the name Desire Utilitarianism (sorry, Alonzo). Apart from the effort of typing it, it makes DU look like another competing moral theory when, in my view, it is not.
It is what remains once one realises one does not need any of the inadequacies of moral objectivity (relying on non-existent facts), moral subjectivity (relying on opinions), moral relativism (saying nothing really) or moral non-cognitivism (irrational semantic contortions).
I am suggesting a new name, one easier to type and less prone to confusion with other utilitarianisms - "Desirism".
RichardW
I would like to build on my earlier post discussing the great deal of similarity and the small differences between our positions, and use your sense of rational 'ought' to build a sense of moral 'ought'.
Following the model of the physicist (who likes to talk about frictionless pulleys and massless strings in illustrating points regarding force), let us take a simple society.
There are three individuals.
Agent1 and Agent2 both have a desire to scatter stones.
Agent3 has a desire to gather stones together.
Agent3 cannot gather stones as fast as Agent1 and Agent2 can scatter them, so the two agents must devote a portion of their time to gathering their own stones.
Agent4 is about to join their community. Agent1, Agent2, and Agent3 have the ability to choose what Agent4 will desire.
Agent1 and Agent2 both have reason to act so as to give Agent4 a desire to gather stones. Agent3 has no reason to act whatsoever. Regardless of whether Agent4 has a desire to gather or to scatter stones, Agent3 will still have the opportunity to spend his time gathering stones.
So, now, it is objectively true that the bulk of the reasons for action favor giving Agent4 a desire to gather stones.
Even Agent3 can recognize this fact. This does not mean that Agent3 is motivated to give Agent4 a desire to gather stones. In fact, he has no such motivation. Yet, he can recognize the truth of the claim that there is more reason to give Agent4 a desire to gather stones than there is to give Agent4 a desire to scatter stones.
And he can talk intelligently about that fact.
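If it helps to see that this is just bookkeeping over facts about desires, here is the same toy society written out as a small tally. The only assumption added is that we count one "reason for action" for an existing agent whenever giving Agent4 a particular desire would tend to fulfill that agent's own desire; this is only a sketch of the example above, not a formal theory:

```python
# Toy tally of the stone-scattering society described above.
# Which desire each existing agent has (labels are just for illustration).
existing_agents = {
    "Agent1": "scatter",
    "Agent2": "scatter",
    "Agent3": "gather",
}

def generates_reason(existing_desire, desire_for_agent4):
    """Does giving Agent4 this desire tend to fulfill an agent with `existing_desire`?

    Scatterers benefit from another gatherer: more gathered stones to scatter and
    less of their own time spent gathering. Agent3, the gatherer, can spend his
    time gathering either way, so neither option makes a difference to him.
    """
    return existing_desire == "scatter" and desire_for_agent4 == "gather"

# Count the reasons for action that each option generates among the existing agents.
reasons = {
    option: sum(generates_reason(d, option) for d in existing_agents.values())
    for option in ("gather", "scatter")
}

print(reasons)  # {'gather': 2, 'scatter': 0}
# The bulk of the reasons for action favor giving Agent4 a desire to gather stones,
# even though Agent3 himself has no motivating reason to bring that about.
```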
Thanks to everyone who has helped finally make this clear to me!
ReplyDelete> "I submit to you that Desire Utilitarianism is simply the search for the best possible SAE (or at least, the search for the portion of the best possible SAE that answers "what desires should I give others?"). And that it is the most effective method created so far in approximating that ideal SAE." <
Well, I wish someone had told me that when I first encountered DU. It would have saved me a huge amount of time, and also saved you guys a lot of time spent in fruitless arguments with me. Looking back now, I can see that that was what you were telling me from the start, faithlessgod, but there were elements of your terminology and explanations which were rather misleading. (I won't deny that my own posts were often far from perfectly clear too!) Also, I was misled by my prior expectation that DU was something more than that.
Now that's settled, then the next question is what does this have to do with morality? It seems DU is not a moral calculus at all. Instead, it tells me how to create the moral calculus that will best fulfill my own desires (including my altruistic desires and the urges of my conscience). If I'm a slave-owner with a desire to continue reaping the benefits of slavery (and no contrary desires), DU will probably tell me to espouse a moral calculus that disregards the desires of slaves.
RichardW
ReplyDelete"Now that's settled, then the next question is what does this have to do with morality? It seems DU is not a moral calculus at all. Instead, it tells me how to create the moral calculus that will best fulfill my own desires (including my altruistic desires and the urges of my conscience). If I'm a slave-owner with a desire to continue reaping the benefits of slavery (and no contrary desires), DU will probably tell me to espouse a moral calculus that disregards the desires of slaves."
After this, either you have not understood anything we are talking about, or I am beginning to doubt you are really interested in constructive conversation - or both.
If I'm a slave-owner with a desire to continue reaping the benefits of slavery (and no contrary desires), DU will probably tell me to espouse a moral calculus that disregards the desires of slaves.
Actually, that is incorrect. If you picture DU as the most effective method to find the best possible SAE (or at least the portion of the best possible SAE that answers "what desires should I give others?"), you will realize that DU will tell you to espouse a moral calculus that will lead you to the conclusion that slavery should be discouraged/abolished (yes, even as a slave owner that profits from them).
There are many reasons for this. One of them, naturally, is that the slaves, and friends/family of the slaves, may attempt to do you harm. However I view this as a nearly-inconsequential reason.
The more important reason that DU will lead the slave owner away from slavery is because one of the desires identified by DU that everyone always has many reasons to promote is a respect for individual liberty. Strengthening this desire in those around you greatly increases the odds that you will be free to pursue your desires as you please, as they will have aversions to restricting what you do.
Another one of these universally useful desires is empathy for others. The stronger you can make other people's aversion to seeing their fellow man in pain, the greater the odds are that they will go out of their way to make sure they do not hurt you, and even expend their resources to help you when they see you are in pain.
Given that you have strong reasons to promote these desires in others, it follows that they have strong reasons to promote these desires in you. Therefore if those around you are at all successful in applying DU, you will be instilled with these desires as well, and will come to abhor slavery on your own. It will become a desire of yours to not be part of such a deplorable system. This will be in opposition to your desire to exploit the cheap labor of slaves. If the morality of your society is advanced enough, your desires for liberty and empathy will eventually grow stronger than your desire for cheap labor.
Also of note is that it's entirely possible that I could be wrong when I apply DU like this. Perhaps it is not true that a desire for liberty, and an aversion to seeing others in pain, are things to be strongly encouraged. However these are empirical questions that can be answered objectively. And, looking at human history, I think we can draw the conclusion that yes, societies without slavery provide much better lives to everyone (even the potential slave owners) than those with slavery, and thus we can conclude that we were very likely correct in identifying those desires as ones that should be strongly promoted.
Hi Eneasz
RichardW does have a point, except I wonder whether he is using it in the wrong fashion - by mistake or intent, I do not know.
The slave owners, like the Nazis, Stalinists and Inquisitors of this world, most likely did think that they were justified, that they were doing nothing morally wrong, and that what they were doing was seeking to fulfil the more and stronger of their desires.
All were surrounded by peers who agreed with, pursued and supported these same ends. It is quite possible, unlikely as it seems to us, that they were in other regards seen by their peers as decent, morally good people. This includes all the values you noted, except they only applied to their in-group.
The out-group did not count, whoever they were; depending on the in-group: for slave owners and their peers (other citizens), slaves; for Nazis: Jews, Gypsies, gays, atheists and so on; for Stalinists, the bourgeoisie and quite a few others; for Inquisitors, heretics and witches, etc.
The question is were they justified in doing this? Were they justified in having an in- and out-group specified this way? It certainly was to their benefit given the situation at those times.
The slave owners did benefit from their slaves - even if they treated them well. If they had stopped owning slaves they may not have been able to compete with those who did.
The question becomes whether their justifications - whether assumed, considered, held unwittingly and so on - stand up. These are all examples of cultural relativism; when considered objectively instead of relatively, there were no grounds for making these distinctions except as to their advantage. For whatever reason they gave, it was their advantage in realising their ends that was the real justification. Their morals were constructed to support and justify these practices, as happens in many other societies with double standards.
We have repeatedly found, when looking to see if such double standards are rationally justified by the empirical evidence, that they are not. If there are no grounds for double standards then everyone would generally be better off if these standards were removed. Of course, the ones preventing this are the ones that are benefiting.
The strong and the wealthy like to keep strong and wealthy by keeping the poor and the weak poor and weak.
This all revolves around power and persuasion. They have the power and control the means of persuasion through religion and other ideologies.
The problem becomes how to classify abuses of such power and persuasion, and then what remedial action can be taken. We are only looking at the first question for now.
The dispute arises because what we might classify as abuses of power and persuasion are rejected as abuses by those with the relevant power and persuasion. We need objective means as free of bias as possible. This is not to assume - as those in power do - that the status quo is correct, and it is also not to assume that any alternative is better - such as a violent uprising.
We need a means of evaluating and classifying double standards without introducing others. Too many debates and analyses of these issues seek only to change the parameters of the double standard rather than remove it - to change the parameters from being against them to being in favour of them.
So the only likely tack to take is not to assume a double standard - one way or another - and see what makes sense then. This requires assuming as little as possible and seeking rational and empirical justification for any apparent bias or exception. DU is such a framework.
Enough for now.
P.S. Accidentally posted before I finished spell checking. Still it is legible enough as it is.
Faithlessgod, the reason we keep running into problems is that you are putting forward two mutually inconsistent positions: that DU is both a method for finding the most rational SAE and some sort of moral calculus. This is illustrated by your use of the word "justified" as a moral judgement, even though we agreed at the start of the discussion that we were not talking about morality (yet). And at no point did you subsequently bridge the gap from one to the other. I drew attention to your problematic use of the word "justified" very early in our discussion, but didn't pursue it then. Perhaps I should have done. Anyway, you are now getting impatient (as am I) and I don't think it would be useful to continue. Thank you for an interesting (if frustrating) discussion.
Eneasz, you and I seem to be on the same page, so I believe we can continue productively. Unfortunately, it's now doubtful that we are talking about the same DU as Luke, Alonzo and Faithlessgod.
As you say, the disagreement between us is over the empirical question of whether espousing a moral calculus that disregards the desires of slaves would actually serve to fulfill slave-owners' desires.
First of all, let me clarify that, when I said a slave-owner probably should (rationally) espouse a moral calculus that disregards the desires of slaves, I didn't mean that slave-owners should simply disregard the desires of slaves. Obviously it's in the interests of slave-owners to keep their slaves healthy and sufficiently content not to rebel. But they can do that out of practical considerations, not because a moral calculus tells them to do so. Also, if I've made my claim too strong by saying "disregards the desires of slaves", try substituting "counts the desires of slaves as less important than those of other people".
There have been many slave-owning societies in the past, which presumably had a moral calculus that accepted the ownership of slaves, and they seemed to have functioned pretty well and to the benefit of the slave-owning class. Of course, we can't say for sure that slavery contributed overall to the fulfillment of their desires, but I don't think it's unreasonable to hold the view that they did. And, if such empirical questions are too difficult to answer, then what practical use is DU?
"that DU is both a method for finding the most rational SAE"No DU is based on what people do anyway, it is just considering this systematically.
ReplyDelete"and some sort of moral calculus."I do not argue that there is some special form of reasoning utilising as a moral calculus. Morality is no special or distinct category of reason. Moral claims can be evaluated just like any other claim on the bases of rational and empirical enquiry. That is what I am saying. On what rational or empirical basis can you imply that that moral claims can must be excluded? This is what I have been waiting for from you.
"This is illustrated by your use of the word "justified" as a moral judgement,Justification is a perfectly good word to use when evaluation justificatory claims of any sort.
Epistemic, biological, psychological, sociological, economic etc.
" even though we agreed at the start of the discussion that we were not talking about morality (yet)."I see no need to talk about morality at all, only to the degree that people are using what they call moral justifications, there no ground for those. However for those it is useful to use moral terms but only as a reaction to their usage.
And at no point did you subsequently bridge the gap from one to the other."There is no gap to bridge because nothing to build a bridge to. You need to justify - that is provide evidence and argument - why you think there is.
I drew attention to your problematic use of the word "justified" very early in our discussion, but didn't pursue it then. Perhaps I should have done."If you want to just play semantic games this can go on forever. I am not interested in that, only in constructive conversation.
There have been many slave-owning societies in the past...they seemed to have functioned pretty well and to the benefit of the slave-owning class... we can't say for sure that slavery contributed overall to the fulfillment of their desires, but I don't think it's unreasonable to hold the view that they did. And, if such empirical questions are too difficult to answer, then what practical use is DU?
I actually don't think it's that difficult to answer. Those societies did not function pretty well, they only functioned well enough to survive. And slavery was a disadvantage to the slave-owner as well, relative to how he would have fared in a slavery-free society. I concede that this is counter-intuitive. Having a lot of slaves certainly feels like an advantage, and seems on the surface to be an advantage. That's why humanity embraced slavery for millennia and only a few centuries ago finally threw it off.
However I dare say that as a species we have advanced at a phenomenal rate in the last few centuries, compared to our rate of progress before. I realize there are MANY contributing factors to this, not just the decline of slavery, I am not delusional. However the values that made such progress possible are the same values that made people realize slavery is abhorrent. I posit that a society that gladly embraces slavery could not advance at the rate we've been advancing due to the distinctly different values and attitudes that would require. As such, the person today who is not a slave-owner is better off than he would have been even as a slave-owner if society had never rejected slavery, because such a society would be far less advanced. Let's not forget that the average American has a much higher quality of life and average life span than any pre-modern king or emperor ever enjoyed.
Again, I'm not saying this is because slavery was abolished, but rather because the change in values allowed for much stronger progress, and those values are ones that will naturally lead to a rejection of slavery.
This is why "good" things are good. Recall that good = all people generally have many and strong reasons to promote this. All people have reasons to promote these desires for liberty, empathy, etc, because everyone is better off when these are strong desires, as recent history has shown.
Eneasz, I think we will just have to accept that we have different judgements as to the likely outcome of DU evaluations conducted on behalf of slave-owners.
ReplyDeleteFaithlessgod,
I wasn't going to post again, but your last post leads me to believe that we are mostly in agreement on the substance. We just haven't been understanding each other.
You seem to be objecting that the moral claim I suggested for the slave-owner (that slaves should be treated unequally) is not rationally justified. Right?
Well, I agree with that, if properly understood. I wasn't saying that the claim was rationally justified as to truth. My point was that it was rational for the slave owner to make such a claim, since it helps fulfill his desires. If he understands what he is doing, the slave-owner's moral claim is a deception, designed to achieve his desires. But deception may be the rational course of action.
To remind you, I wrote:
> If I'm a slave-owner with a desire to continue reaping the benefits of slavery (and no contrary desires), DU will probably tell me to espouse a moral calculus that disregards the desires of slaves. <
Perhaps to be clearer I should have said "a DU evaluation" instead of just "DU".
As I see it, DU says nothing at all about morality. A DU evaluation just tells me the actions to take to fulfill my desires. The making of moral claims is just one of many types of actions that may be evaluated. And DU does not confer any meaning on such claims. As far as DU is concerned they could just be meaningless utterances. All that matters is the effect they have on the listener.
You wrote:
> Moral claims can be evaluated just like any other claim on the basis of rational and empirical enquiry. That is what I am saying. On what rational or empirical basis can you imply that moral claims must be excluded? This is what I have been waiting for from you. <
I see now that our language is ambiguous. It's not clear whether "moral claims can be evaluated" means
(a) the content of a moral claim can be evaluated as to its truth; or
(b) the action of making a moral claim can be evaluated as to its fitness for purpose (to fulfill the agent's desires).
I say that (a) is false and (b) is true. The content of moral claims cannot be evaluated as to truth, because they are not truth-apt propositions. To be able to evaluate them, you would first need to define the meaning of the moral terms used. That was my challenge to Luke and Alonzo, which they attempted and failed. I noticed on your blog a challenge to define moral terms. I thought your challenge was highly pertinent and well put. I assume you yourself do not claim to be able to define moral terms.
I have written a response to your slavery issue here:
For Love of Freedom
"You seem to be objecting that the moral claim I suggested for the slave-owner (that slaves should be treated unequally) is not rationally justified. Right?"
Correct, it is your, not the slave-owner's, "moral" claim that I am seeking a rational justification for.
"Well, I agree with that, if properly understood." So you are making an unjustified "moral" claim? So you have no justification in asserting it; why assert it? Without justification, why should anyone listen to you? Let's see if you indicate an answer to this in the rest of your reply:
"My point was that it was rational for the slave owner to make such a claim, since it helps fulfill his desires".
And this is addressing the wrong point, as I think should be quite clear by now.
"If he understands what he is doing, the slave-owner's moral claim is a deception, designed to achieve his desires. But deception may be the rational course of action."
In order for there to be a "deception", there has to be a state of affairs over which one can be deceived. This is what we have been trying to discuss and you have been evading, yet you tacitly admit to our point here.
" If I'm a slave-owner with a desire to continue reaping the benefits of slavery (and no contrary desires), DU will probably tell me to espouse a moral calculus that disregards the desires of slaves."
To repeat, this is not DU, nor an inference you can draw from DU. Where is the justification for this claim?
"Perhaps to be clearer I should have said "a DU evaluation" instead of just "DU"."
You have a strange concept of clarity. This is quite irrelevant.
"As I see it, DU says nothing at all about morality."
Then how about you tell us what you mean by "morality" - and it had better not be a special discipline with special types of reasoning, or we will demand a rational justification for asserting a special status for "morality".
" A DU evaluation just tells me the actions to take to fulfill my desires."
Yawn. This is getting boring, this is called practical reasoning.
"The making of moral claims is just one of many types of actions that may be evaluated."
Like any other action - in terms of the desires that brought it about and those it affects.
" And DU does not confer any meaning on such claims."
False. What would it be about a "moral" claim such that no meaning is conferred on it? A claim is a claim; you check its meaning - the asserted proposition - against reality to see if it is justified or not. Just like any other claim.
"As far as DU is concerned they could just be meaningless utterances."
Nothing I have said could imply this, you are getting absurd.
I read your quote from me next, only to realise I have to repeat my points, as you are failing to understand. You say, in reply to my quote and presumably much of my response above:
" All that matters is the effect they have on the listener."
False. One can use facts and arguments over beliefs to change beliefs, but desires are immune to such methods. One needs to use social power and persuasion to change desires; the question is whether the use of such power and persuasion is justified, and that depends on the desires being addressed.
(a) the content of a moral claim can be evaluated as to its truth; or
(b) the action of making a moral claim can be evaluated as to its fitness for purpose (to fulfill the agent's desires).
"I say that (a) is false and (b) is true." Both (a) (in a way) and (b) are true, but only by not treating moral claims as a special form of reasoning, which is what you are doing and why you gave a different answer. Your argument for making this distinction is:
"The content of moral claims cannot be evaluated as to truth, because they are not truth-apt propositions. To be able to evaluate them, you would first need to define the meaning of the moral terms used."
"Moral" claims are just as truth-apt as any other claims, not in virtue of making "special" claims but in spite of that. (A)That is if one thinks there is something called intrinsic goodness then one is in error but the statement are still truth apt.
(B)Similarly if one thinks that there is some special semantic convolution not applicable to any other claims, then one is in error but the statements are still truth-apt.
Both A and B are error theories but with different subjects, the first of the agent, the second of the assessor - you here.
Your argument appears to based on committing a type B error, based, presumably, on a hasty generalisation of wanting to avoid type A errors (as an agent). Your mistake is to ascribe a special status to moral claims when there is none required. Unless you can justify this of course.