My answer is, "Obviously not. The agent has a false beliefs that he has reason to drink from the glass."
This seems to be an item of contention among philosophers, and I am trying to figure out why.
Bernard Williams discusses this in his highly influential article on internal and external reasons (Williams, B., 1979. “Internal and External Reasons,” reprinted in Moral Luck, Cambridge: Cambridge University Press, 1981, 101–13).
He began with:
A has a reason to φ iff A has some desire the satisfaction of which will be served by his φ-ing.
He then replaced 'desires' with 'elements of an agent's subjective motivational set S' for reasons to be discussed in a future post.
He then wrote:
An internal reason statement is falsified by the absence of some appropriate element from S. That is to say, if it is not the case that A has an element in his motivational set that is served by his φ-ing, then he has no (internal) reason to φ.
Williams then discusses a potential objection to this based on the problem of false belief. The example he uses is very similar to the example I often used and mentioned above. I will not put it beyond the realm of possibility that I picked up this example by reading Williams’ article several years ago and forgetting about it.
Anyway, in Williams' example:
The agent believes that this stuff is gin, when it is in fact petrol. He wants a gin and tonic. Has he reason, or a reason, to mix this stuff with tonic and drink it?
Answer: No. He falsely believes that he has a reason to mix this stuff with tonic. He does not actually have a reason to do so.
Williams wants to suggest that there is a problem with this answer. The problem comes from the fact that we still explain the agent’s action in terms of desires (or ‘elements of an agent’s subjective motivational set’) and beliefs. The only difference is that a belief is false.
However, that does not alter the nature of the explanation.
The difference between false and true beliefs on an agent’s part cannot alter the form of the explanation which will be appropriate to his actions.
This is true. However, we still need to distinguish between a successful action – one that reaches its goal – and one that does not.
When an airplane crashes, investigators use the same terms to explain the crash that they use for a successful flight - such things as altitude, air speed, thrust, lift, and drag. This gives us no reason to bury the fact that the airplane crashed and to write about the event as if it were a successful flight. It still crashed.
In the case where the agent drinks the petrol, we need to talk about the act as an intentional act - one that came from the agent's beliefs and desires. However, one of the most important facts about this intentional action was its failure. The cause of that failure is false belief. We mark it as a failed act and explain the failure by saying that the agent did not have the reason to mix the stuff with the tonic that he thought he had.
I predict that blurring the distinction between successful and unsuccessful actions – between what the agent was aiming at and what the agent actually got, between what the agent actually had reason to do and what the agent falsely believed he had reason to do – will only generate confusion.
Williams comes to the same conclusion. Consequently, he writes:
A member of S, D, will not give A a reason for φ-ing if either the existence of D is dependent on false belief, or A’s belief in the relevance of φ-ing is false.
Of course we must distinguish between success and failure in attempts to reach a goal. But this doesn't yet address the question of whether a thirsty person had a reason to reach for an empty glass which he thought contained water. If there are two empty glasses within reach, one of which a second person told him (incorrectly) had water, it might seem natural for a third person (not hearing about the incorrect telling) to ask the first "what was your reason for reaching for that glass?" After seeing his mistake, the first person might quite naturally reply "I was told it had water in it." Does this reply seem *mistaken*, as an offering of the *reason* for reaching for the glass? If not, then the person did have a reason. Perhaps not a good reason (in some sense), or a sufficiently good reason, or the best possible reason; but a reason nonetheless.
A justified but still false belief that one has a reason to perform an action still fails to generate a real reason to perform an action. It is still nothing more than a false (though justified) belief that one has a reason.
Our thirsty agent simply has no reason to pick up an empty glass.
Well, I just disagree.
But let me try to say more. First, I wouldn't characterize the false belief as "there is a reason for me to pick up a glass." The false belief is "there is water in the glass." Whether that state of affairs which is the object of the belief is the only possible referent of "a reason to pick up the glass" in this context is precisely the point at issue, and should not be assumed. Obviously if the belief that "there is a reason for me to pick up the glass" is false, then there is no reason to do so, but I never thought otherwise, so insisting upon this point is irrelevant.
Certainly, agents with desires (assuming it is morally acceptable, a qualification we can currently ignore since it neither applies here, nor affects the point at hand) should value their performance of whatever activity leads to that result, and for that matter should value any other events out of their control just insofar as they satisfy that desire (all such claims are ceteris paribus, of course). But we can't guarantee that this will happen by magic. How are we going to increase the likelihood that our desires get satisfied, overall? Presumably by responding to the best evidence that we have about how to do so in ways which maximize subjective utility. So we have reasons to so respond.
I also disagree that facts sans evidence thereof constitute reasons, in general; at least, it is often odd to say so. Socrates did not have reasons to believe that water is H2O, or that North America has a south-pointing peninsula at each end; certainly not the same reasons that I have to so believe.
BTW, I found your blog while trying to find a copy of your book _A Better World_; it doesn't seem to be in any libraries. I'm always happy to read more on utilitarianism but rarely buy books these days. From what I gather from here & some other blogs I think I would agree with you on much of ethics, generally. However I am a subjective consequentialist (also called expected value C, prospective C, etc.), for reasons very closely related to the current point.
Are we disagreeing on a matter of substance, or merely in what we want to name things?
Your notes on the character of the false belief are accepted without amendment.
Your notes on the irrelevance of morality at this point are also accepted without amendment.
The way that I understand your comments here, you are saying that a person must have a reason to do X even if he cannot guarantee that doing X will fulfill his desire.
So, a person has three buttons in front of him. One button will dispense a glass of water. He is thirsty. Does he have a reason to press a button?
We may say that the answer to the question is "yes". He might get a glass of water that will quench his thirst.
But, does he have a reason to press the first button?
The answer to that question is still, "It depends on whether pressing the first button will dispense a glass of water."
I would argue that this best describes our agent's predicament. "I have reason to press one of these three buttons, but I do not know which one I have a reason to push." This does not imply that he has a reason to press either button that will not dispense water. He only has a reason to press the button that will dispense water. But he still has reason to press a button.
Part of the issue is that, in order to say that the agent "has a reason" to pick up the empty glass or to press one of the buttons that will not dispense water, you must assume ignorance. The ignorant person has a reason to press the button; the fully informed person (who knows which button dispenses water) does not. How does acquiring a belief cause a reason to act to disappear? My answer is that it was never there - and acquiring the additional information makes the agent aware of that fact.
I actually don't see the disagreement as one of substance. In my experience, objective consequentialists will still end up saying that we should praise and blame people for, and should form habits of, responding to subjective evidence exactly as subjective consequentialists say that we should (and will say the same things about instrumental actions, mutatis mutandis). But then they want to say that, nevertheless, it is often "wrong" to do those things if they don't turn out well. I find that a confusing way of talking, at variance with ordinary judgments. I grant that some ordinary locutions support the objective perspective, of course, but I think they are less decisive.
I also realized at some point that my perspective here is based on a more fundamental idea (which might be one of substance) about what the proper object of a moral judgment is, and indeed what agential activity consists of. The objective judgment makes sense if these are *motor activities*, or *space-time events* which just happen to involve my body. The subjective judgment makes sense if these are *dispositions to respond to evidence with intentional activities*. It seems to me that a judgment of the former is simply not a judgment about what I, as a person, am doing, which is more properly the latter. But a judgment of the latter is simply a judgment about whether my action made sense *as a response to available evidence*--in light of subjective probabilities of outcomes, to be sure, but not simply as a causal factor in whatever actually ends up happening, beyond my capacity to predict.
Many people on both sides discuss this issue as if the decisive point was that a theory must be "action guiding" or provide a "manual for agents," as the subjective view does, while the objective view distinguishes between what should guide an agent's action and their reasons for action (and what is right/wrong). But I think the more basic point is that guiding intentional actions via evidence-responsive dispositions is what I, as an agent, am fundamentally *doing*. If you evaluate me on the basis of some relational predicates--whether this be how close I am to the planet Jupiter, how many red shirts are worn in my city the day after my choice, or whether my choice, in ways I could not have predicted in advance, led to an optimal result, you're evaluating some more or less interesting aspect of my environment. Not *me*, as an agent.
" a person must have a reason to do X even if he cannot guarantee that doing X will fulfill his desire. "
Did you mean *might*? Then yes; more precisely, what counts as a reason to do X is evidence that X is more likely than non-X to satisfy the desire.
I see no puzzle at all in the idea that acquiring information changes what you have a reason to do. Indeed, if it didn't, there would be little point in seeking it out. It can give us new reasons, and change the reasons we used to have. Once I learn that button B will not dispense water, I lose any reason I had to, say, guess at B; but I did have such a reason earlier when it had some probability of success.
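To put illustrative numbers on this, here is a minimal sketch in Python. The uniform prior over the three buttons is my own assumption for illustration, not anything stated above:

    # A uniform prior: before any evidence, each button is equally
    # likely to be the one that dispenses water.
    buttons = {"A": 1/3, "B": 1/3, "C": 1/3}

    def eliminate(prior, ruled_out):
        # Condition on the evidence that `ruled_out` does not dispense water.
        remaining = {b: p for b, p in prior.items() if b != ruled_out}
        total = sum(remaining.values())
        return {b: p / total for b, p in remaining.items()}

    print(eliminate(buttons, "B"))  # {'A': 0.5, 'C': 0.5}
    # The 1/3 chance that grounded the reason to guess B drops to zero;
    # the reasons to guess A or C are correspondingly strengthened.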
Comment continued.
Suppose I don't know which button will dispense water, but no harm comes from guessing, it's not a one-shot thing and I can always retry. Do you deny I have a reason to try all three buttons, in whatever order is convenient? Am I irrational, acting unreasonably, if I do so, instead of just pressing, say, C (the only water-dispensing button)?
Granted, once I learn it's C, I might think "I now learned that I should have started with C." But actually I think that's incorrect. The new thing I learned, rather, is that if I started with C then I would have saved a few seconds of time. I did not learn that always pressing C in such situations is a good strategy, is rational, etc. I have no reason to change my future dispositions in cases where I again lack information about which button dispenses water.
And if in advance I say "I wish I knew which button I had reason to press," I think this is strictly inaccurate. What I now have reason to do is learn more information, which is perhaps best done by pressing each button in a systematic order. What I wish I knew in advance is not what I have most reason to do; that is perfectly clear: press each one to see which one gives water. Rather, what I wish I knew in advance is which button dispenses water. Again, by speaking as if this is identical to "which button I had reason to press" you are presupposing that the objective view is correct.
This is even more obvious in cases where there is a penalty for wrong guesses, as in the standard drug (Jackson) or mineshafts (Parfit) cases: if option A is relatively safe, known to impose only modest harm, while one of B or C will cause enormous harm and the other removes all harm, but you don't know which, you have decisive reason to pick A; certainly not to pick B or C at random, hoping for the best. If we later learn C was the best, then you might wish you had chosen C instead, in the sense that you now know that the motor activities/space-time events involving your body's C-ing would have been optimal. You have not learned that the agential disposition to choose C in these or analogous cases was a good one to have, or that the facts given constituted reasons for you to have chosen C in analogous evidentiary cases.
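The arithmetic behind that decisive reason can be made explicit. Here is a minimal sketch in Python; the utility numbers are placeholders of my own choosing, not figures from Jackson's or Parfit's originals:

    def expected_utility(outcomes):
        # Sum of probability-weighted utilities for one option.
        return sum(p * u for p, u in outcomes)

    # Option A: known to impose a modest harm with certainty.
    safe = expected_utility([(1.0, -10)])

    # Options B and C: one removes all harm (0), the other causes enormous
    # harm (-1000), and the agent cannot tell which is which.
    random_guess = expected_utility([(0.5, 0), (0.5, -1000)])

    print(safe)          # -10.0
    print(random_guess)  # -500.0
    # Relative to the evidence, A dominates; relative to the facts, one of
    # B or C was the objectively best choice all along.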
At least, these are my considerations for thinking so and hence adopting the subjective view of reasons, which you earlier said you found puzzling that anyone debated. Hope this is shedding some light; I enjoy discussing the topic.
At this point, we are not talking about blaming or judging people. We are talking about practical reasoning. It may be useful to imagine one (thirsty) person alone in the universe and what she does and does not have reason to do.
In addition, I am not yet clear about the relevance of this discussion to the objective/subjective distinction you are making - particularly if that distinction itself is linked to blaming or judging.
On the subject of practical reasoning, when you talk about testing the buttons, you have returned to the realm of means-ends rationality where the talk of reasons makes sense. The reason to test the buttons is the same as the reason to query a computer or to even read the labels on each button to determine which dispenses water. This, in turn, has the same means-ends rationality as turning the machine on so that the machine will dispense water when the correct button is pushed.
Still, the reasons that exist are independent of belief.
Imagine that our agent has a friend who knows which button to push and is willing to tell the agent if asked. The agent has a reason to ask this person which button to push. He has this reason even if he does not know that this person has the right answer and would be willing to tell. Discovery of these facts would reveal a reason to call this person that he was not previously aware of.
Alternatively, if the agent falsely believes that this person has the correct information and is willing to tell him, he still has no reason to call this person – a fact that he will become aware of when he learns the truth of the matter.
It is true that these types of cases are dependent on belief in a sense. Or, more precisely, they are dependent on ignorance. If a person is ignorant of certain relevant facts then they have a reason to pursue that knowledge – a reason dependent on their ignorance and that goes away once the ignorance itself vanishes. However, this is still a case where beliefs reveal standard means-ends reasons. It is not a case where beliefs create reasons that did not exist or destroy reasons that did exist – at least, not in the sense that I am writing about in these posts.
Again, the relevance of this to objective versus subjective utilitarianism is not clear to me at this point. I would have praise and condemnation directed at molding desires - and the desires that an agent has (and, thus, the direction they are to be molded) would be determined in part by what we knew about the beliefs an agent actually has. A change in beliefs can, in fact, result in a change in moral judgment.
Well, as noted above, I don’t think the discussion is relevant to blaming or praising people; both objectivists and subjectivists about (moral or instrumental) reasons agree that this is a function of their objective evidence. One of my points was, however, about evaluation (moral or instrumental) of an action; either this is, or is not, a function of how well they responded to their reasons for action. If it is, then you end up saying that someone is wrong to not press a button which (unbeknownst to them) gives the optimal result, which I think is weird; if it is not, then "reasons" are not very interesting, and I think we should instead direct our interest to the factors for evaluating agents as right/wrong, good/bad, correct/incorrect, (ir)rational, etc. You could maintain the latter position, but only at the cost of making reasons boring; and also I think with some change to how we use the word.
But we can set aside the evaluation question if you like and return to the main point of contention (my apologies for drifting slightly into evaluation). I still think you have a problem. You said that “The agent has a reason to ask this person which button to push.” Why? In your view, desires, and desires alone (I think), give reasons. Then I grant that *if the agent wants to know which button to push*, then given his ignorance, this desire is currently unsatisfied, so he has a reason to ask the person who knows. But what if he doesn’t want to know this, but just wants a drink of water? Say he hasn’t the slightest independent desire to find out how to get the water, but a strong desire to drink the water. I don’t see how, under your system, this can give him any reason whatsoever to ask for this information. It's not an instrumental reason; the best way to satisfy his desire is to press button C, not to ask anyone which button to press. A desire to know about the buttons, or a reason to acquire such a desire, would have to be sui generis, completely independent of the desire to quench his thirst and the reasons this gives him. He apparently only has reason to, say, press button C. Anything else is a complete waste of time, an irrelevant interference on the way to pressing C.
But that seems wrong.
Here's yet another way to see my point. Suppose that our agent doesn't know which button to press, but knows that just one of three other agents does know this: X, Y, or Z. Agent S knows *which* of these three agents knows which button to press (S does not himself know which button delivers water). Suppose Z is the one who knows. Does our original agent have reason to ask S which other agent to ask about the buttons? According to you: yes...and no. Yes, insofar as you think the agent has reason to acquire missing instrumental facts by asking someone who knows (S), which you said he did. But no, insofar as the agent should just ask the person who knows, whether or not he knows that that agent knows; he should then skip S and go right to Z--as you also said. Or skip Z as well and just press button C, which delivers the water. We can of course extend the higher-order move about agents who know which agents know that... indefinitely, but I trust the point is clear. Either 1) we just have reason to do whatever motor actions will best satisfy our desires, period, in which case we have no reason to gather more instrumental facts (whether from other agents or in any other way), and should just press C; or 2) we do have reasons to gather such information, merely in light of our ignorance (and not of any other incapacity to select the optimal motor movements); but then we sometimes have reason to gather second-, third-, etc. order information, again either from agents or non-agents. In no case will you have, in my last scenario, a reason to go right to, say, agent Z (you should instead go to S); and if you don't know that S knows which agent to ask, you should do whatever will tell you to ask S, etc. Applied consistently, the latter move (I think) leads to subjectivism about reasons, i.e., makes them a function of your evidence. Once you admit that they are sometimes a factor, I don't see how you can avoid applying this more thoroughly.
I think you are currently applying objective/subjective considerations inconsistently; you think subjective ignorance is a reason to seek the person who objectively knows the answer. But why limit this to seeking out other agents? It's also an objective fact that just pressing C will both give our agent the water he seeks, *and* relieve his ignorance about which button to press. So he can kill two birds with one stone by pressing C, and has no reason to ask Z, S, etc. (Even if asking is easier than pressing, or all buttons stop working if he presses the wrong one so this becomes costly, etc.) Reasons to ask other agents, or gather other evidence, instead of just doing the one instrumentally optimal action, can only be based on a subjective conception of reasons, I think.
You write, "He apparently only has reason to, say, press button C."
Actually, your objection, taken to its logical conclusion, would deny even a reason to press button C - for that does not quench the agent's thirst. It is a means only. The agent would not even have a reason to raise the cup to his lips, draw the water into his mouth, or swallow - all means to an end.
Of course, that's not my view. If the agent is thirsty, then she has reason to drink a glass of water. If she has reason to drink a glass of water, then she has reason to press Button C (assuming that Button C dispenses a glass of water). In that she has reason to press Button C, she has reason to learn that Button C is the button that dispenses water.
Desires plus facts create reasons. True beliefs reveal reasons, they do not create reasons.
Technically, you are not talking about beliefs creating reasons, you are talking about ignorance creating reasons - which I have not disputed. Ignorance creates standard means-ends reasons to remove ignorance (in the same way that a door between the agent and the button creates a reason to open the door).
If the agent can only press one button once in an attempt to get a glass of water, does he have a reason to press Button A? No. He has a reason to take a guess but, if he presses Button A, this will reveal the fact that he had no reason to press Button A. The newly acquired information reveals reasons that existed (or did not exist).
On the objective/subjective distinction . . . I have not used those terms other than to say that I do not know how they apply. I believe that the terms are ambiguous and cause more problems than they are worth.
Take, for example, the phrase "I prefer butterscotch to chocolate." Subjective or objective? It expresses a preference, so it must be subjective. Yet, the proposition is objectively true. It refers to a fact in the world that is used to explain and predict actual observable events - e.g., my disposition to select a bowl of butterscotch pudding when offered a choice between butterscotch and chocolate. My preference for butterscotch is just as real as my height, weight, age, blood pressure, and body temperature. It is an objective, knowable fact about me.
This is why I am uncertain as to the implications of this to your "objective/subjective" concerns. The terms are too poorly defined and used in too many different ways to allow for a clear answer.
NOTE: There are quite a few people who would call this account of reasons "subjective". It states that all reasons are grounded ultimately on desires, and denies the existence of any desire-independent "objective" reasons for action - reasons that would exist independent of any mental states.
In fact, now that I think on it a bit, it may be most accurate to say that, in our dispute, I am defending a desire-subjective theory of reasons that holds that all reasons for action are desire-subjective and none are belief-subjective. Whereas you wish to argue that at least some reasons are belief-subjective.
Neither of us is arguing for any type of truly objective reasons - reasons that are built into the very fabric of the universe and which exist completely independent of mental states. The claim that every being has a reason to survive quite independent of anything they believe or desire would be an example of a truly "objective" reason - and I would argue that there are no truly objective reasons.
“your objection, taken to its logical conclusion, would deny even a reason to press button C”
No no no; of course, I don’t dispute that your view is that we have reason to do whatever is an effective means to our ends; but you haven’t explained why asking for advice counts as the latter over just pressing C.
“Desires plus facts create reasons. True beliefs reveal reasons, they do not create reasons.”
Why would G have a reason to reveal his other reasons, when he already has reasons to just press C? Incidentally the beliefs vs. ignorance issue is a red herring; my position is that evidentiary states (involving both evidence and lack thereof) plus desires (or something like them; pro-attitudes, values, etc.) create reasons; changing evidentiary states in any direction (more or less ignorance/belief) could change one’s reasons. For that matter, ignorance is a sort of belief: G believes that A (and B, and C) /might/ deliver water; indeed, any belief which is not completely correct is a kind of partial ignorance, belief and ignorance are not contraries. This is what gives him reason to try each button in turn, if advice is not available; your view inconsistently says some of his evidentiary states create such reasons, while others do not.
“Ignorance creates standard means-ends reasons to remove ignorance (in the same way that a door between the agent and the button creates a reason to open the door).”
No--these are not at all alike! You could simply insist upon this additional condition for generating reasons by arbitrary fiat, but it makes your view strange, and you certainly can’t claim instrumental sanction for it. Again, if G is not interested in buttons, relieving ignorance in general, or learning about reasons as such, he only has reason to C, not to ask for advice. And even if he somehow did want these things as well, he could get them too by just pressing C. Opening a door is a necessary instrumental means to reaching the button to get the water, as is pressing C; asking for advice is not, any more than is setting up a complex Rube Goldberg machine to press the button.
“If the agent can only press one button once in an attempt to get a glass of water, does he have a reason to press Button A? No. He has a reason to take a guess…”
Why on earth does he have a reason to guess? Guessing gives a 1/3 chance of getting the water; pressing C guarantees it. Guessing means pressing A, B, or C; but you think he has no reasons to press A, or presumably B; why does he have reason to do anything but just press C, simpliciter? This again is a point at which you’ve perhaps inadvertently revealed the inconsistency of your position. If you really think he has reason to guess, this reveals that you think his false belief that A might deliver water does after all give him some reason to press A.
“On the objective/Subjective distinction . . . I have not used those terms [and] do not know how they apply.”
These terms are widely used to qualify the basis of one’s reasoning/reasons (or evaluation of your actions) for satisfying the ends you already have on some other basis. Subjective means that what is right (your duty, rational, gives you reason to do something, etc.) is a function of your beliefs or evidentiary state about what will satisfy certain ends; objective means they are functions of the actual facts about what will do so. This is orthogonal to the internalist/externalist distinction about ends; reasons to act given by one’s preference for butterscotch are internal. But whether or not you have a reason to then get butterscotch still leaves open the question of whether you should, ought, have reason to, etc., walk to the ice cream stand that says “butterscotch for sale”—when in fact they just ran out and there’s another one you can’t see around the corner that has some; this is a S/O question.
I take you to be an objectivist about reasons for action, which, granted, does not mean you must be an objectivist about moral ought, duty, rationality, etc…though as noted, I think such mixed views are strange and confusing. But again, these are derivative concerns which I mentioned just to frame my interest in the issue at hand, and I apologize for raising them insofar as they got you/us off track a little bit.
Just read your last follow-up; I think your assessment is correct, but in standard philosophical terminology this would be described as saying we are both internalists about which ends give us reasons for action, but I believe that subjective evidence about instrumental relationships gives us reasons for how to try to satisfy those ends, while you believe that only objective facts about instrumental relationships do so. (Incidentally, I am close enough to Kantian rationalism about morality to believe that internalism--even when combined with subjectivism about evidence--generates some objective ultimate moral principles, which are derivable from any starting ends whatsoever. But this is a complete red herring with regards to the current issue, except to note that I think that neither position leads to the relativism or ethical subjectivism critics of either might sometimes think they lead to. But again, we should set this aside while pursuing the issue about reasons as such.)
I just realized that in my editing of one comment I deleted a stipulation which was necessary for making full sense of my remarks: I suggested that we call the agent in question "G" [for a-G-ent], just for convenience. Sorry for any confusion.
And your confusion over the use of "subjective" is understandable; I frankly can't account for why the term is used (with some contention, but it's far from idiosyncratic) to refer to a view about reasons for/valuation of selecting means to ends, but "internal/external" is used for a parallel distinction about reasons for/valuation of ends. Just flukes of philosophical history, I suppose. I don't entirely like the term subjective, especially as it might be confused with ethical subjectivism, but it is in use, especially in distinguishing two forms of consequentialism. Worse, subjectivism in moral or instrumental reasoning is used to describe both the view that correctness is a function of your beliefs, and a function of your evidence (I strongly disagree w/ the former). Perhaps one could instead simply speak of in/ex-ternalism about means, as distinguished from I/E about ends; then my position is I-means + I-ends, yours is E-means + I-ends; one would still have to further distinguish belief-I from evidence-I about means. That might even be slightly clearer, except that I'm not sure if anyone uses the terms this way. This is one of many areas where contemporary terminology could be more standardized and perspicuous than it is.
Thinking it over further, /if/ one were to separate reasons for action from what one ought to do (which I think you have neither said you do, nor denied), then my earlier examples are slightly less compelling. (I apologized earlier for raising this question, but maybe it is very relevant after all.) For you /could/ say that G has reason to press C (and some reason to ask Z which button to press, and S which agent to ask, etc., even if the first is the strongest reason), while denying that this has anything to do with what G should do, making the latter a function of his actual evidence. Then you could say if he knows only that Z knows about the buttons, he should ask Z; if he only knows that S knows about the agents, he should ask S (then Z, then press C). I am primarily concerned with what agents ought to do (= what it is most rational to do, how we should want well-functioning agents to behave, etc.) I find it convenient to express this in terms of reasons; but if someone's view is that reasons are something else, but also unrelated to oughts, and agrees with me roughly about the latter, then I still get 95% of what I want, and just have to talk more carefully to be understood by such a person.
But I still think that it is strange to say that Socrates had reasons to believe that water is H2O, stranger still to say that he had the same reasons for this that I do. This is not the ordinary way to use this term, and it takes the grip of a theory to tell us to use the term as you & other objectivists about reasons would have us use them. Good theories can sometimes compel us to change usage, but I haven't seen this demonstrated yet; and I would like to hope that such examples could at least get you to see why this issue is controversial. Furthermore, if reasons for action are not related to what we ought to do, then they aren't very interesting. I prefer a view of reasons more consonant with ordinary language, and more relevant to moral and instrumental evaluation of how well agents are doing their jobs. Again, you haven't stated your views on these relationships; but I'm suggesting a charitable interpretation. Under a less charitable interpretation, you think G is irrational for asking for advice when he has most reason to simply press C; and wrong to do so, if there is some moral reason to get the water (or deliver a drug, or save the miners) as quickly as possible. But surely that's not correct.
Given what you said about G's ignorance giving him reason to alleviate the same, in addition to pre-existing reasons to press C, you must still concede the following: his reason to ask S which agent to ask about the buttons is *weaker* than his reason to just ask Z about the buttons, which in turn is *weaker* than his reason to just press C. I would reverse this: he has most reason to ask S (assuming he knows S knows about XYZ), which will give him a reason to ask Z, which then gives him a reason to press C. To the extent he had any reasons to do the latter two, as modes of guessing, I say these are weaker than the reason to ask S, at least if guessing is costly.
But if you say, "look, given his ignorance, he ought not go around pressing buttons or asking strangers at random, he *should*, and would be most rational to, ask S and proceed from there; what he has reason to do has precious little to do with his obligations as a partially-ignorant but morally or instrumentally rational agent" then we are in complete agreement on obligations/rationality. Further, we would agree that what people ought/are rational to do is a function of what I call, and you refuse to call, their reasons. Which seems to make my reasons more significant than yours in directing action. Granted, what I call reasons tell us to do the best job we can to identify what you call reasons, but it is the former and not the latter that make us do things and should make us do things. Frankly, if we agree on that, the rest is just terminological disputation. If we don't agree on that, though--if, say, you think that it is wrong/irrational to ever do anything except insofar as you have (most) reason to do so, then your view has more serious problems, and I would like to know if anything can be said in favor of gathering more instrumental information except that agents sometimes have weak reasons to do so, reasons which are always weaker than their reasons to do something else instead.
On the model that "desires + facts create reasons", I have already agreed that it is true that, in some circumstances, the relevant "facts" are belief states. Thus, if an agent does not know that pressing C will dispense a glass of water, he has a reason to find out - a reason that vanishes when the agent does find out.
You assert that there is a difference, but I see no reason for that claim. A person who does not know that pressing button C will dispense a glass of water has a reason to find out (so that he can then press button C to get a glass of water).
One of the ways to find out, as you mentioned, is by pressing buttons to see which ones will dispense a glass of water. In this case, as I have said, a person has a standard means-ends reason to press buttons A and B - as a way of discovering the fact that it is by pressing C one can get a glass of water. This is no different from the means-ends reason to ask others - another way to learn which button to push so that she can get a glass of water.
She may not KNOW that she has a reason to ask others, but this is no different from the fact that she may not KNOW that she has a reason to press Button C.
Of course, whether an agent knows what reasons he has (or not) is dependent on belief. This is axiomatic. I am certainly not going to question that an agent's beliefs about his reasons vary with his beliefs.
The reasons that an agent has and the reasons an agent believes he has are different things - in the same way that the biological descendants one has may be different from the set of biological descendants one thinks one has.
Some of your comments seem to be suggesting that you are interpreting my claims as saying that beliefs are not a necessary cause of action. When our agent presses Button C, this will be because of some combination of beliefs and desires. Change the beliefs, and we change what the agent will do.
Beliefs are reasons for action in this sense - but it is the same sense in which earthquakes are reasons for tsunamis. This causal role for beliefs means that our thirsty agent has a reason to acquire the beliefs that will cause the actions that will quench her thirst. Still, the causal relationship between the belief and pressing the button is the same as the causal relationship between pressing the button and dispensing the glass of water. Desires plus facts generate reasons - and the causal power of belief is one of those relevant facts.
In this case, beliefs are creating reasons in the same way that the causal relationship between pressing Button C and getting a glass of water creates reasons. Acquiring a belief about pressing button C is as necessary as opening the door. Having more than one way of acquiring a true belief does not imply that the agent is "stuck" any more than having more than one way to open the door (or having more than one door to the room with the button) means that the agent is stuck.
You write: "But I still think that it is strange to say that Socrates had reasons to believe that water is H2O, stranger still to say that he had the same reasons for this that I do."
So do I . . . but epistemic reasons to believe are different from practical reasons to believe.
Imagine a group of people in a survival situation. If they believe that they have a 5% chance of success, then they have a 5% chance of success. If, on the other hand, they believe they have a 50% chance of success, then they have a 10% chance of success.
In this case, the agents have a reason to believe that they have a 50% chance of success. However, this "reason to believe that they have a 50% chance of success" is a practical reason to believe - not an epistemic reason to believe. Practical reasons to believe look at the usefulness of a belief - and false beliefs can be useful just as true beliefs can be dangerous. Epistemic reasons to believe look at the truth of a belief.
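A minimal sketch in Python, simply restating the numbers from the survival example above, makes the contrast vivid:

    # actual_success[believed_chance] = the chance of survival that holding
    # that belief actually produces, per the example above.
    actual_success = {0.05: 0.05, 0.50: 0.10}

    best_belief = max(actual_success, key=actual_success.get)
    print(best_belief)                  # 0.5 -- the false but useful belief
    print(actual_success[best_belief])  # 0.1 -- double the real chance
    # The practical reason to believe tracks usefulness (hold the 50% belief);
    # the epistemic reason tracks truth (the 50% belief is false, since
    # holding it yields only a 10% chance).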
If we are going to start talking about reasons to believe (as distinct from reasons to do or reasons for action), we are changing the subject.
You state that you like a system that is more in line with common usage. I think, then, that such a system is clearly going to have to distinguish between practical ought and epistemic ought and allow that some things can be true of one that are not true of the other.
ReplyDelete"but epistemic reasons to believe are different from practical reasons to believe"
Neither of us was ever talking about practical reasons to believe. Obviously everything you've said is consistent with the view that Socrates lacks the practical reasons I have to believe that water is H2O (it won't do him much good in his context), or that the Jamesean group has practical reasons for overconfidence. What I object to is the idea that Socrates has the same epistemic reasons I have to believe that water is H2O.
Just realized my added "clarification" was wrong, and reverted to the original version. :-(
“A person who does not know that pressing button C will dispense a glass of water has a reason to find out (so that he can then press button C to get a glass of water). … Some of your comments seem to be suggesting that you are interpreting my claims as saying that beliefs are not a necessary cause of action. When our agent presses Button C, this will be because of some combination of beliefs and desires. Change the beliefs, and we change what the agent will do. … Acquiring a belief about pressing button C is as necessary as opening the door.”
Ah, now I’m starting to see a more fundamental disagreement—and, I think, a more fundamental confusion on your part, regarding what counts as an action. If “pressing C” is a complete agential action, then you don’t need a belief + desire to so act; or at any rate, none connected with the desire for water or beliefs relevant thereto. The rather trivial belief that there is a button C that G can press, and the trivial desire to press it, are sufficient to motivate G to do so.
Of course, this will not be done under the description of “pressing a water-delivering button,” or with the intention to get water. But you don’t need that intention, or self-description of your action, to get the water. You just need to try to press C; the button doesn’t care why you did so.
So when you say that C’s capacity to deliver water constitutes G’s reason for action, that can only mean: G has a reason to press C. Not to press C thinking/believing, etc. that it will deliver water, nor intending thereby to get water. Just to press C, period—because *that* is what will deliver water. Neither the belief that C will deliver water, nor the intention to press it in order to get water, is in any sense a necessary cause of its delivering water. Once you define reasons as simply those possibly unknown instrumental facts about which world-states W will deliver the goods, then you reduce the actions we have reasons to do down to motor activities producing W, which don’t require correct instrumental beliefs about our ends. Only if you say that a fully-understood human, agential action requires intention does the belief “C will deliver water” become instrumentally necessary. But then you’d have to say that G doesn’t have reason to just “press C” simpliciter, but reason to learn which button (C, as it happens) will deliver water, and then press C /with the intention of thereby getting water/. I will completely agree that G has reason to do the latter, once he can. But this is a different action than just “pressing C,” which does not require the same kinds of beliefs at all.
Again, perhaps you could say that we have reasons to produce world-states W that will satisfy our ends, and in addition to those reasons we also have further reasons to do so /intentionally/, with some understanding of the instrumental relationships involved. But the reasons for the latter have no instrumental justification in terms of reasons to act, if adequate (indeed, decisive) reasons to act are already given by the objective facts of the situation. Such reasons to act intentionally, knowledgeably, must always be sui generis, and far weaker than the mere W-producing ones. Acting intentionally is most certainly not a necessary means for acting for W-producing reasons, which are sufficient for satisfying the original ends.
Well, we WERE talking about reasons for action and, by extension, practical reasons to believe. We were not JUST talking about pushing buttons.
But we were not (until now) talking about epistemic ought.
You say that you would find it surprising to hold that Socrates had the same reasons to believe that water was H2O as you did. I would find that surprising as well. But, it's a different topic.
In fact, to start talking about what Socrates had epistemic reasons to believe would be far off topic.
Now, you mentioned a "fundamental disagreement" and a "fundamental confusion on my part." However, in what you wrote that followed, I failed to identify either.
I asked at the start as to whether our disagreement was semantic or substantive. It seems your point here is to claim that our disagreement was semantic. Whereas I take "reason for action" to be reducible to "that which makes the action worth doing", you take "reason for action" to be the equivalent to "action for (a given) reason". Our dispute has reached the conclusion that what is true about the purpose for action is not true of action for a particular purpose.
My claim is that beliefs contribute nothing to the worth of the action - that beliefs do not create reasons to perform an action but merely reveal the reasons that already exist.
Your claim is that beliefs are still an integral part of an agent performing an action for that reason - that the agent cannot perform the action for that reason if the agent does not have particular beliefs.
I am certainly going to have to agree that the agent cannot press the button FOR THE PURPOSE OF GETTING A GLASS OF WATER without having certain beliefs about the efficacy of getting a glass of water by pressing the button.
But it is still the case that those beliefs contribute nothing to the value of pressing the button. They do not create additional reasons to press the button beyond that of getting a glass of water.
It will be an interesting exercise on my part to go through what I have read recently and try to understand it as a discourse on what it means to perform an action for a particular reason - as opposed to my standard interpretation of what it is that gives the action value.
However, I think you might be a bit rash to rush to the conclusion that I am the one that is confused. Under your interpretation, I would expect to find frequent passages to the effect of, "This is certainly a case in which a person performed act A, but not one in which he performed act A for reason R". At this point, I do not recall this even being mentioned.
Hm, you're right that the Socrates point was a bad example; it would be apt if you had said that all facts are, simply in virtue of being true, equally reasons to believe in those facts. But you did not say that, and that is not required to maintain your position. I understand the latter to be that agents always have reasons to act in ways that actually satisfy their desires, and (for some derivative reason) to have correct beliefs about which actions will satisfy their desires. This is not strictly speaking a purely theoretical reason for belief; it is a kind of practical reason for belief, but not in the narrow sense of a Jamesean practical reason, where possession of some (possibly false) belief in itself changes certain probabilities of desired outcomes. Rather, we are discussing a broader class of practical reasons for belief, where the belief is about instrumental facts which do not change because the belief is held. Does that classification make more sense? I was confused in my earlier reply, and now see that you were distinguishing not simply between T and P beliefs, but between the general practical beliefs we were talking about, and both pure T and Jamesean-P beliefs.
A better example would be, Archimedes had the same reason I have to believe that steam engines with governors are reliable sources of steady power. Such a belief could have greatly satisfied many of his desires, as the belief could have been fairly quickly translated into practical devices. The fact that he had no evidence available that could have easily generated this belief is a non-factor for you. I therefore say he had no such reason (or only very weak reasons); you would say he did. Likewise you would say I have reasons to write down the shortest possible description of how to create a working fusion reactor, and mail it to CERN, because that would create great outcomes.
Back to the buttons. You say "it is still the case that those beliefs contribute nothing to the value of pressing the button." Well, sure; I agree they don't change the value of the outcome. But then, why would you think that G has a reason to ask Z which button to press, or to ask S whom to ask? Given just the end of getting water, you must think that G has more reason to press C, simpliciter, than to ask Z which button to press. What favors asking Z, then, over, say, building a Rube Goldberg machine to press C? Either one just gets in the way of satisfying the desire more directly. You could just say agents always have reason to learn how to satisfy their goals (whether they think so or not, or need to learn this to satisfy them), but then you'll have become one of the despised externalists. You've said nothing to suggest that you want to go that way; but what else could possibly be said in favor of asking Z, as opposed to just pressing C?
I would not expect to find "frequent passages" referencing the confusion earlier than your remarks, since even there it is implicit, revealing hidden and certainly under-explored assumptions. The argument there implicitly equivocates between treating the relevant action as "press C" and "press C with the knowledge/intention regarding its producing water." When you say that only the actually efficacious means of satisfying a desire (and not beliefs thereof) give reasons for action, this only makes sense under the first understanding; when you say that G has a reason to learn through inquiry about C's power, because acquiring the correct belief about this is a necessary means for producing the right action, this only makes sense under the second understanding. Whether this equivocation underlies any remarks you made prior to this time, I don't know; I rather suspect not, and that this has only just now come to the surface, though I could be wrong. But I'm more interested in exploring the argument at hand than doing archaeological digs through past remarks. (And I do find the discussion very interesting, as I'm hoping you do as well.)
"Our dispute has reached the conclusion that what is true about the purpose for action is not true of action for a particular purpose."
That sounds right. I think reasons for action involve the latter; am I right to say that you think that both generate reasons for action, for at least semi-independent reasons? Then you can drop the confused idea that correct beliefs are instrumentally needed to do whatever actually satisfies your desires (as opposed to satisfying them intentionally). Then you have two distinct ideas about what generates reasons for action; e.g., G has reason both (1) to press C, and (2) to learn which button delivers water so as to press the correct button intentionally, where (2) is not instrumentally related to (1) but is an independent basis for having reasons to act. That's not blatantly wrong or self-contradictory, at least.
And then I would even be happy to add: I consider the two bases of reasons to distinguish two different kinds of reasons, or things which have been called 'reasons': objective and subjective ones, respectively. Evaluations of agents as morally virtuous, dutiful, etc., or even as instrumentally well-functioning, i.e. rational, are functions of the latter, which are thus more interesting and relevant to our practical lives. As long as that distinction is recognized, I'm not quite as concerned about what we call them.
Incidentally, I still think you're vulnerable on the Socrates/Archimedes point, as forming a belief, or being disposed to believe certain things, are things we can choose, and hence actions of a sort. Or at least they can be caused by other actions we undertake. So if Archimedes had the desire to believe true things about his world, as presumably he did, he could satisfy this in part by believing that water is H2O, and that North America has south-pointing peninsulas (along w/ whatever else he needs to believe to believe those things--beliefs about atoms, continents, etc.) So in your view, he has practical reasons to have these theoretical beliefs, which I find an odd thing to say.
But this is all quite incidental to the other points I've made about the two different bases you seem to have admitted to having for our having reasons (e.g., to press C, and to ask Z about the buttons), which gets more to the heart of our disagreement, and which again I hope helps you see why the matter is at least controversial.
Well . . .
I would not say that Archimedes has the same reasons as you . . . since that would imply having the same desires . . . which is not likely to be the case.
There is also an issue of impossibility that I have not puzzled completely through.
“Ought” implies “can”, which means “cannot” implies “it is not the case that one ought.” Impossibility is obviously a relevant consideration in “ought” judgments. Is it also a consideration in “reason to act” judgments? A relevant difference is that “ought” tends to be an all-things-considered judgment, and “reasons to act” are not. The impossibility standard may show up as a part of “all things considered”.
Speaking from purely practical considerations, it certainly makes no sense to talk about a person having a reason to do that which is impossible. Which means that the best possible answer to the question, “Does a person have a reason to do that which is impossible” is neither “yes” nor “no” but “What possible difference could it make?” Language is an invention – a tool – and there is no practical reason to invent a language for having reasons to do the impossible.
In fact, this suggests a practical reason to invent a language where "has a reason" implies "can" and "cannot" implies "does not have a reason". This would tell the native speaker of the language, "Do not turn your attention to the impossible - there is nothing for you to find there."
However, it is still going to be the case that a person will have reasons to do things that they do not know about, because they are not aware of the relationship between doing those things and their desires.
I have avoided the invitation to discuss better and worse reasons to this point because I feared it would muddy the waters. In general, we are operating on a number of desires (including an aversion to work or effort) that give us reason to fulfill our desires efficiently (so that we can turn our remaining resources to the next desire). Some of our desires also have a time component. It is not only the case that I wish this pain to stop – I prefer that it stop sooner rather than later.
Given these facts, some options fulfill our desires better than others. Faced with the task of determining which button to push to get a glass of water, it is probably better to simply press the three buttons and find out, as opposed to going through the effort of asking somebody. Probabilities of success also influence our reasons for action. 250,000,000 to 1 odds against picking the correct lottery numbers significantly decreases the strength of the reasons one has to purchase a lottery ticket.
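One simple way to model how probability discounts the strength of a reason is expected net benefit. A minimal sketch in Python; the jackpot value and ticket price are placeholder assumptions of my own, not figures from the discussion:

    def reason_strength(p_success, payoff, cost):
        # Expected net benefit of acting: probability-weighted payoff minus
        # the cost of acting -- one rough measure of a reason's strength.
        return p_success * payoff - cost

    lottery = reason_strength(1 / 250_000_000, payoff=100_000_000, cost=2)
    sure_thing = reason_strength(1.0, payoff=5, cost=2)

    print(round(lottery, 2))     # -1.6: the long odds swamp the huge prize
    print(round(sure_thing, 2))  # 3.0: a modest but certain payoff wins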
Well, insofar as Archimedes and I both wanted to believe true things, we have that same reason to believe that water is H2O ( = this desire + the truth of the matter). We may of course have had other desires and hence reasons on top of that one.
The issue of impossibility, which we touched on in noting the distinction between "pressing C" and "pressing C with the belief/intention of its thereby delivering water" is indeed a key basis of the discussion, in the philosophical literature, on evidence-subjective reasons for action. If you pursue this thought further--and the relevant literature--you may see why the subjective view is taken seriously, even if you yourself aren't convinced.
The subjective view makes sense if you consider agents to be essentially systems which respond to evidence and desires with intentional activity. I think most of our language and thought treats us as such systems. Certainly some locutions describe us just as event-producing systems, but these are almost certainly abbreviations of the fuller description. We often omit references to our antecedent beliefs/evidence and desires when describing what we are doing or could do, not because they aren't important, even essential, but simply because they are so obvious that they go without saying. I fear that some people take such locutions too seriously, and this feeds intuitions about the objective conception of reasons.

We say "I picked the right numbers/button/etc." with pride, as if this were something *we* chose, but this is just self-indulgent feel-good fantasy when our antecedent evidence was limited. That was not what we did; what we did was take a more or less risky guess, which at best increased the probability of the result. The winning is attributable to fortuitous events in the world, not to us. I no more had reason to "pick the right number/button" than I had a reason for the money to fall on my head at random; in neither case was the fortuitous result my *action*, something I *did* as an agent, valuable though the outcome is in either case. Likewise I may think "I pressed A; no water, wrong choice I guess." Well, wrong choice if described as "pressing button A to get water"--but that was never what I did; indeed, it was impossible to do that (or to "press C /to get water/", for that matter) given my limited knowledge. But it was the right choice as: "gather more information about the buttons, by trying each in turn." Given limited information, it is impossible to do the former; but it is not only possible but advisable (indeed, in the long term, *valuable*) to do the latter, and to generally be the kind of intentional system that does the latter.
Earlier you spoke as if the value of an outcome alone (in terms of desire-satisfaction) gives reason to do whatever causes it to happen. But if you think of the comparative value of being a system which responds to evidence in certain ways, as opposed to one which ignores the evidence, acts randomly, systematically ignores certain kinds of relevant data, etc., you may see that, at least over the long run (which is the only basis on which we can evaluate a standing disposition), it is valuable to be--it is rational to value oneself being--the first kind of system. This is true even if particular instances of acting on that evidence-responsive disposition don't work out. Obviously a pattern of disappointments can itself constitute or deliver new evidence that helps identify an even more reliable disposition, which we would then be rational to value instead. But the meta-disposition of adopting whatever other dispositions are, as far as we can see, more likely than others to satisfy our desires, has no viable alternative; no other disposition can do better than it across all the possible situations we might find ourselves in.
I am sorry for the delay. Our conversation got me behind on some other concerns I needed to catch up on. I have not fully caught up, but I wanted to get back to this.
(1) True, insofar as you and Archimedes both have a desire to believe that which is true, you both have a motivational state (desire) that can be fulfilled by believing that water is made of H2O. Assuming, of course, that water is, in fact, made of H2O.
(2) I need you to define precisely what you mean by "subjective view". As I mentioned earlier, I have encountered so many different people using these terms in so many different ways that I am not confident in assuming any specific use in any given case.
For example, one definition of "objective" implies "intrinsic value" or "value independent of mental states". I deny the existence of such things. Accusing me of objectivity in this sense would be false.
Another definition of objectivity is "true independent of whether an agent believes or wants to believe that it is true". Note how things that are subjective in the first sense can be objective in the second sense.
Your earlier comments suggested a distinction between subjective and objective utilitarianism. That distinction concerns whether the right action is the action that actually maximizes utility, or the action that the agent (responsibly) believes maximizes utility. Since both views take action to be the primary object of morality, and I deny this, I reject both. It would be a mistake to identify me as somebody who defends objective act utilitarianism when I do not accept act utilitarianism in any form.
More importantly, the topic of my posts is practical reasons, not moral reasons. This leaves me to puzzle out how you are using the terms "objective" and "subjective" outside of their utilitarian moral context. I have had little success puzzling this out.
(3) I agree with the idea that intentional actions are the result of beliefs and desires. Desires select the ends or goals, while beliefs identify the means - accurately, if the beliefs are true and complete. There are also habits, and a reason to distinguish beliefs held in long-term memory from beliefs held in working memory, but these complications can often be set aside in these discussions.
(4) When you begin to write about reasons to believe that this button is the button to press for getting a glass of water, I sense that you are once again blurring the distinction between epistemic justification and value. As mentioned above, an intentional action is a combination of beliefs and desires. Insofar as beliefs are important, there is a place for examining the epistemic justification for the belief. However, the result of this investigation will tell us nothing about the value of the action. It will only tell us about the perceived value of the action.
Consider an agent who enters a contest – to build a robot. She purchases some computer hardware that turns out to have a defect. On the day of the contest, the defect causes the robot to crash and she loses the competition. She says in frustration, “That was a frippen waste of my time!” This is a statement of the value of the action. Her supportive partner says, “You had no way of knowing that the hardware was defective.” That is a statement about the epistemic justification of the belief. Both statements can be true at the same time. I am looking at the truth value of “That was a frippen waste of my time!” I am not invalidating or even questioning the assessment “You had no way of knowing the hardware was defective.”
When it comes to the first question – consolation does not come in the form of “You had no way of knowing….” Consolation comes in the form of, “Winning isn’t the only thing that mattered. Just participating in the competition counts for something. Besides, think of all you have learned along the way, and the new friends that you made.” These factors have nothing to do with the epistemic justification for the belief that the hardware lacked defects.
Facts about epistemic justification, for their part, are entirely irrelevant to whether the activity was a waste of time.
The agent might conclude, “I should have performed these other tests that would have revealed the flaw.” Again, we have two questions to answer: What is the value of the test? What is the epistemic justification for the belief component of the intentional actions for performing the test?
I simply have not had an interest in answering the second question – the question of epistemic justification. I do not deny that it is an important question worthy of discussion. I have also not had an interest in pursuing medical research – though it would be foolish of me to condemn those who do so. It’s just not my question.
I am concerned with the actual value of the action - what it is about the action that makes it worth doing. I am not interested in its perceived value. I want to know its actual value. This is an ontological question – a question in the category of “What is there?” Belief is not relevant to its actual value.
I take J.L. Mackie’s question, “Are there objective values?” and Bernard Williams’ question, “Are there external reasons?” to also be ontological questions – questions about what exists. Furthermore, I hold that the question of objective values as Mackie uses the term and the question of external reasons are the same question asked in two different ways. They both ask whether the value of an action can come from anything other than motivational brain states.
And, in fact, I agree with them. There are no objective values (as Mackie uses the term). And there are no external reasons of the type that Williams was concerned with. These things do not exist.
NP on the delay, a break was good for me too!
“(1) True, insofar as you and Archimedes both have a desire to believe that which is true, you both have a motivational state (desire) that can be fulfilled by believing that water is made of H2O. Assuming, of course, that water is, in fact, made of H2O.”
Well, we both agree with this; the question is whether this is (and is all that can or should be meant by) “having a reason to believe that water is made of H2O.” Of course, if for some bizarre reason all our science is radically wrong, and water is not made of H2O, then according to this (I think your) principle, none of the massive, if misleading, evidence provided by science, my teachers, etc., gives me any reason to believe this. Which sounds odd.
“(2) I need you to define precisely what you mean by "subjective view". As I mentioned earlier, I have encountered so many different people using these terms in so many different ways…”
Yes, but I did explain it earlier; in this context (reasons), it is the view that what we have reason to do is a function of some (our) ends, and our belief or evidence about how to satisfy said ends. You described subjective utilitarianism excellently; subjectivism about reasons can be described mutatis mutandis, e.g. replace “the right action” with “a reason to act,” etc. As also noted earlier, I subscribe to the view that it is actually *evidence*, not *beliefs*, which is the relevant factor, but (unfortunately) the term is commonly used to cover either or both, sometimes indifferently, so this further qualification must be made explicitly.
I never suggested you were a utilitarian; the S/O distinction is used very heavily there, and informs my interest, but the same distinction can be made w/r/t reasons quite independently of consequentialism, though for some reason it rarely informs those who discuss and analyze deontological theories.
Subjectivism about *values* is quite another thing, and we seem to agree completely on the two distinctions you made about those terms. But again, this never had anything to do with the point in contention here.
Again, I’ve made these points before too. Is it possible that you missed some of my posts? I’ve noticed that when you write more than one post within a short space of time, the visible link in the email alert just sends me to the latest one, and sometimes I only later noticed that it was the second of two and had to move up to find the other. I am guilty of writing multiple posts more often, so perhaps you missed some of mine. Sometimes I thought of something new to say; more often my intended post exceeded the 4096-character limit and needed to be split.
Let me press you on an example I briefly mentioned before. You are a doctor who can give a patient in great pain any of pills A, B, or C. You know that A will definitely reduce his pain by 90%. One of B or C will completely alleviate his pain; the other will kill him; you don’t know which (let’s say, unbeknownst to you, it’s C that cures). Down the hall is Dr. Z, who knows more about the patient’s condition and the pills; he can tell you which of B or C will cure the patient. Do you have a reason to go down the hall and ask Dr. Z for this information? (Assuming you correctly believe that Dr. Z knows these things; although from your past remarks you apparently think that your reasons would be unchanged even if you didn’t know this, or thought Dr. Z was a patient-killing idiot, or wasn’t there today or didn’t even exist.)
Your theory seems to require that you have no such reason, since all your desires (to cure the patient, etc.) will be maximally satisfied by giving one of the pills; even your desire to learn which pill cures will be satisfied by just giving C to the patient. Does this really sound correct to you? What am I missing here?
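To make the oddness vivid with rough numbers (these utilities are my illustrative assumptions, not part of the example as originally stated): suppose a full cure is worth 100, a 90% pain reduction is worth 90, the patient’s death is worth $-1000$, and $\varepsilon$ is the trivial cost of walking down the hall. Then:

$$E[\text{give B or C at random}] = 0.5(100) + 0.5(-1000) = -450, \qquad E[\text{give A}] = 90, \qquad E[\text{ask Z, then give the cure}] = 100 - \varepsilon$$

On any such numbers, asking Dr. Z comes out best, which is why denying that you have a reason to ask seems so strange.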
“(4) … I sense that you are once again blurring the distinction between epistemic justification and value. …Insofar as beliefs are important, there is a place for examining the epistemic justification for the belief. However, the result of this investigation will tell us nothing about the value of the action. It will only tell us about the perceived value of the action.”
Sure; the question is whether reasons are just a function of actual values, or of perceived values. I agree that the value of the space-time event 1: (my hand, perhaps robotically or controlled by god-knows-or-cares-what-forces, moves to button C and presses it), of 2: (my hand presses C, because I always select the third of three unknown choices without bothering to look for more information), and of 3: (I intentionally press C in response to evidence E that it delivers water, or at least to learn if it does) is all exactly the same: the value of the water it delivers. But as a self-evaluating agent, do you not value yourself adopting the *disposition* to do 3 more than adopting the disposition to do things like 1 or 2? Apart from the value of any particular instantiation of such dispositions?
Do you think that “epistemic justification” is not among the things that we do? Indeed, is it not an essential feature of most intentional activity, without which we might as well be (rather crude) robots ourselves?
You are correct about what you initially say regarding the two people and the robot, but we aren’t discussing whether certain actions turn out to have wasted time, so this is irrelevant. But the first might say “I guess I had no reason to buy that hardware.” The second might (I think correctly) reply, “You most certainly did; the seller was reputable, and their parts usually work. You were under time pressure and, knowing this, you paid a premium for a part from a more reliable supplier. You had (acted on, responded to) the most excellent reasons to buy that part. You just got unlucky; good reasons for action don’t guarantee success in your ends, and sadly this was the case here.”
“The agent might conclude, “I should have performed these other tests that would have revealed the flaw.” Again, we have two questions to answer: What is the value of the test? What is the epistemic justification for the belief component of the intentional actions for performing the test?”
I presume you think the instrumental “should” is a function of her reasons for action. In which case, I think she’s just incorrect (or could well be, in the circumstances suggested above). Of course, her regret is entirely justifiable; and it could commendably motivate her to arrange things so that she does have time to test the hardware next time around. Likewise, if the doctor (above) gives the patient pill A (Dr. Z being unavailable), and later learns from Z that pill C was the panacea, it would be quite wrong—indeed, monstrously wrong, deeply evil and careless—to think “I should have given the patient pill C.” Or “I had reasons to give him C after all.” NO! “It would have turned out best if I gave him C”—sure, but that’s not the same thing at all. With a 50% chance of killing the patient, reaching for one of the uncertain pills at random, identified only under the description “pill C”, was most certainly not what the doctor should have done, not what he had reason to do, and not what would have counted as reasonable action, even on a purely instrumental basis (he wants to keep his job, etc.). We could not possibly approve of any doctor’s disposition to respond to this kind of situation by giving pill C; we would strongly disvalue such a disposition, even if a particular implementation of it happened to cure the patient (and hence was valuable as a space-time event, albeit not as an intentional activity with an instrumental goal).
I just realized that the first paragraph on my first message yesterday was badly garbled. Let's try that one again:
Well, we both agree with this; the question is whether this is (and is all that can or should be meant by) “having a reason to believe that water is made of H2O.” Of course, if for some bizarre reason all our science is radically wrong, and water is not made of H2O, then according to this (I think your) principle, none of the massive, if misleading, evidence provided by science, my teachers, etc., provides any reason to believe this. Indeed, they don't provide any such reason even if water is made of H2O, because only facts provide reasons, and evidence is not facts (at least, is not = to the facts they are evidence for). Which sounds odd.
I was so wrapped up in the reasons debate that I forgot that the original reason (well, reason in my sense!) I came to your site was that I heard of your self-published work on utilitarianism, so I mistakenly thought that in a recent comment you were distancing yourself from utilitarianism generally. But as you said, it is only act-U that you reject. Again though, I never suggested that you followed this, and was only pointing out that the O/S distinction applies to utilitarian morality, and that the O-version in particular has weird implications. Indeed, my subjective utilitarianism is a version of rule-U, since the disposition to respond to evidence E with subjective-utility-maximizing action A is, of course, a rule. Act utilitarianism seems to fit more naturally with the objective conception of reasons, although it needn't do so if one splits apart "reason to A" and "ought to A" judgments. That's why I occasionally speculated about whether you would do this, and why; it is a question you've avoided so far in our conversation.
While I don't have your book, you have put your ideas on the web, for example at http://everydayutilitarian.com/essays/fyfe-hurford-debate-alonzo-round-one/ where you say:
"Desire utilitarianism says that the utility of desires is to be determined by looking at the situations in which they are likely to play a causal role. "
Now this is a little puzzling given your repeated insistence that only actual outcomes of actual acts can give value to the act, and hence constitute a reason to perform the act. Again, you're not the only person to apply different standards to reasons versus moral judgments. But here you're not even quite making a moral judgment yet (though I presume it directly implies one, since you earlier said that we ought to "do that act that conforms to the best rule set, where the best rule set maximizes utility.") You're just talking about utility; which is a kind of value, no? But you are saying here that utility is *not* merely determined by the actual causal outcome of an action, but by its average or typical effects. Which makes me wonder why you are so repeatedly insistent in our conversation that the value of an act cannot possibly be changed by anything about the intentions, motivations, or any other contextual antecedents of the act, but is a function of its actual direct outcomes only.
That is, you *could* say, as others have sometimes said, that if a certain rule/strategy for responding to evidence worked 99% of the time to satisfy your goal, then in the 1 in 100 times it fails, you still *ought* to act according to this rule, even though you have no *reason* to do so, as doing so generates no value. That would be strange, as I've suggested before. But the problem here is even odder than that; you're saying that such a rule can have utility, even though implementing it has no value. Are you distinguishing here between its *general* utility/value and its *actual* utility/value in a given case (or, between the reason to "write a rule into our brains" and the reason for following it in a given instance, both based on this differential value?), saying that the former generates oughts, the latter reasons? This again would have some consistency, though I see some problems with this approach (if it is yours). But in any case, if this is the distinction you're making, then I don't see why you are having difficulty understanding my view, since this is precisely the distinction I've been repeatedly insisting upon, differing only in that I say that the general value of a rule can give one a reason to follow it, even in the particular cases where it doesn't lead to optimal results.
I'm just guessing here about how to make sense out of this, as I'm not immediately finding any document where you directly explain the relationship between a reason to act and the normativity of a rule--please correct & clarify as needed.
Just to be clear: your quoted line of course talks about the utility of *desires*, not of rules/dispositions. But since you go on to say that you'll determine their utility "by looking at" (i.e., by looking at their average effects in?) the situations in which "they are likely to play a causal role" (i.e., those in which they tend to cause, in combination with some evidence, certain actions of the agent to try to satisfy said desires?), I trust that translating this statement into the language of rules/dispositions is a reasonable interpretation of what you meant. Again, clarifications are very much encouraged if I'm reading this wrongly.