You said in one podcast interview that it's problematic because there is no impartial observer or social contract. I argue, though, that it can be grounded in the propositions that would be true if there were such things.
I am going to respond to this objection in two forms - the form in which it appears in Gage's comment, and the form in which it appears in the post Gage links to in that comment.
I will take the current form first.
Any argument that has premises that "would be true if there were such things" can only support conclusions that "would be true if there were such things". It cannot support conclusions that are true - unless "if there were such things" is changed to "such things are real."
"I am not an only child" would be true in a universe in which I had a brother or a sister. But that does not make it true in which such things do not exist. In the real world, we still have to look at whether I do or do not, in fact, have any brothers or sisters.
This is my main objection to impartial observer and social contract theories. They start with admittedly false premises. "There is an impartial observer," and "There is a social contract" are false.
Now, on to the second form of the argument. Mike provided me with a link to his post on this matter, where he offers a different argument.
Under a contractarian framework, I think we get truth value from reference to a proposition. For example, to say we have reasons to prevent and condemn action x is to say that the following proposition is true: "A perfectly rational being in the original position would have reasons to prevent and condemn action x." What we are really grounding our morality in is rationality itself and we can point to these propositional truths in order to be describing an objectively true fact of the matter.
(See Atheism and Evil Part 2.)
The "Original Position" here is behind John Rawls' veil of ignorance, in which the agent is unaware of the position he will hold in the society whose rules he is evaluating.
This is a different argument, and it invites us to ask the question, "What reasons does this perfectly rational agent have to prevent or condemn action x?"
What is her answer?
"I condemn action X because a perfectly rational agent would condemn action X, and I am a perfectly rational agent, and I condemn action X."
That's not a very satisfying answer.
Look at it this way:
To say that the sum of the squares of the two sides is equal to the square of the hypotenuse is to say that the following proposition is true: "A perfectly rational being in the original position would have reasons to believe that, for a right triangle, the sum of the squares of the two sides is equal to the square of the hypotenuse." What we are really grounding our math in is rationality itself and we can point to these propositional truths in order to be describing an objectively true fact of the matter.
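(To fix ideas, the claim at issue is just the Pythagorean theorem: for a right triangle with legs a and b and hypotenuse c, a² + b² = c². Nothing in the analogy turns on the particular formula.)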
Now let's ask this perfectly rational mathematician why she believes the sum of the squares of the two sides is equal to the square of the hypotenuse.
I trust that we would not be satisfied with the answer: "I believe it because a perfectly rational agent would believe it and I am a perfectly rational agent, and I believe it." We would want her to provide her reasons for believing it. Once she does, we can then adopt those reasons as our own, and do away with the perfectly rational mathematician. She was merely a placeholder for whatever reasons she would offer in support of her belief.
So, the perfectly rational agent in Gage's example is also merely a placeholder for whatever reasons she should give for condemning X - reasons that, once we knew, we could adopt as our own.
However, in the case of condemning X, we run into another problem. Is it the case that the reasons she has for condemning X are necessarily reasons we can adopt as our own?
She may make herself a peanut butter sandwich because she likes peanut butter sandwiches, or refuse a sandwich because she is allergic to peanuts. The fact that she takes a particular action does not justify the conclusion that I should act the same way - not if the reasons she uses are not reasons that I should adopt as my own. The fact that she likes peanut butter sandwiches does not imply that I should adopt a liking for peanut butter sandwiches as my own.
The "veil of ignorance" may be an attempt to deal with this. It makes each decision-maker ignorant of their own desires so that they cannot use them in making a decision. But they are supposed to be aware of the fact that such preferences exist and she might have them.
However, agents only act on the desires they have - not the desires that they know to exist. I may know of your aversion to pain, but whether that will motivate me to avoid states of affairs in which you are in pain or cause them depends on whether I have a current aversion to you being in pain or a desire to see you suffer. Without desires of my own on which to base a decision, I am indifferent to your pain. So, a perfectly rational agent ignorant of his own desires would choose nothing.
However, a more important problem is the unjustified logical leap from what an imaginary agent in an imaginary situation would do to what we should do.
It might be perfectly rational for perfectly rational agents to adopt a particular set of rules behind a veil of ignorance. However, once the veil is lifted, and a flood of new information becomes available, the perfectly rational agent does not simply ignore this information. She uses it to reassess and revise the conclusions she drew while ignorant, and to adopt new conclusions based on new and better information. What is rational in a state of ignorance is often quite irrational in a state of having information.
The fact that it would be rational for me to leave the building in a state in which the fire alarm is going off does not imply that it is rational for me to leave the building at this moment, when the fire alarm is not going off. Even if I were a perfectly rational being with good reason to leave the building when the fire alarm goes off, this does not imply that everybody should leave the building at this moment. These types of inferences just do not have any logical validity.
So, not only is it the case that the perfectly rational agent is a mere placeholder for the reasons she has for believing something, in the case of an action (and condemnation is an action), the reasons she has are not necessarily reasons that we have any reason to adopt as our own. And the actions that an imaginary agent would take in an imaginary world in a state of ignorance do not imply anything about the actions real agents should take in the real world in a state of non-ignorance.
For these reasons, I reject social contract theory.
Now, I want to stress, there are moral facts. The failure of social contract theory does not imply a failure of moral realism. It's just that this particular route to that destination has far too many logical roadblocks. We have to look for another route.
An objective morality requires premises that are true in the real world, and does not try to draw inferences from what is imaginary (perfectly rational agents behind a veil of ignorance) to what is real.
Briefly - the conclusions that I would defend say that we really need to ask our hypothetical perfectly rational person a different question. Without assuming any ignorance, ask her, "What malleable desires do people generally have the most and strongest reason to promote using social forces such as praise and condemnation? And what actions would a person with those desires perform?" When we ask the hypothetical perfectly rational and fully informed agent this question, the agent is, in fact, a mere placeholder for a set of objective facts. And the reasons she gives for whatever answer she offers are made up entirely of reasons we can then adopt as our own reasons for adopting the same conclusions. This meets the criteria for an objective morality.
Alonzo,
Thank you for taking the time to respond. I want to take time to digest your criticism and reread it a few times before giving a fuller response (or perhaps agreeing with you). But I did notice one thing that I think we can clarify quickly.
I am not proposing that the veil of ignorance means being ignorant of the relevant circumstances, surrounding facts, etc. In the comments, I responded to someone to say that of course we have to consider circumstances. I don't think we could derive proper reasons for action otherwise. The rational agent would have access to all relevant facts under my framework. I considered including that in the post, but I was trying to cut it down some to be more easily digested. So, I just take the veil of ignorance to represent a lack of undue favoritism for one party.
I don't intend it to be exactly analogous to Rawls, but I do think his thought experiment is a useful tool to draw these things out. So, I'm definitely influenced by him, as I'm influenced by your work, which you may have noticed in the writing. I think the answer might lie somewhere in between.
I like the clarity and thoughtfulness of your writing, Alonzo. Too bad more people can't write about moral theory this way.
Ok, I've tried to go through and provide my thoughts.
“This is my main objection to impartial observer and social contract theories. They start with admittedly false premises. "There is an impartial observer," and "There is a social contract" are false.”
I think we’re dealing with would counterfactuals, so it doesn’t seem problematic that there is not really an impartial observer. If I were to say, “the impartial observer requires x,” then that seems problematic. But if I say, “an impartial observer would require x,” then I think we really can get a value of true from that proposition. But I want to focus mainly on the form of the argument in my article, as any differences were inadvertent and due to writing out a quick note to you in a comment.
“This is a different argument, and it invites us to ask the question, "What reasons does this perfectly rational agent have to prevent or condemn action x?"
What is her answer?
"I condemn action X because a perfectly rational agent would condemn action X, and I am a perfectly rational agent, and I condemn action X."
That's not a very satisfying answer.”
I don’t think that would be the answer. Let’s say you wonder whether a rational agent would condemn action x. If the agent would, then you can ask what reasons she has. Then, reasons can be given. So, we can answer the “what reasons” question. She certainly would provide reasons. Similarly, in your mathematician example, the answer is not that she is rational. The answer would be the reasoning informing the principle.
The trouble comes in when we have the next question: “Why should we act based on those reasons?” Or you might ask, “What reasons are there for those reasons?” At that point, we seem to reach the dead end where we cannot give any further justification for rationality than rationality itself. I think every theory reaches that point, so I don’t see how that counts against the theory. I would assume desirism also reaches such a basic stopping point. Is this correct?
“So, the perfectly rational agent in Gage's example is also merely a placeholder for whatever reasons she should give for condemning X - reasons that, once we knew, we could adopt as our own.”
So, we agree that the reasons should be given as the answer to the question. I don’t see how that would not be the answer. I don’t know that I would call the answer a placeholder as much as a useful tool for discovering the truths. We could avoid reference to any agent and just say “it is rational to do x” or “it is rational to believe x.” I just happen to like using Rawls’ framework as a method to achieve this and the question seems to really hinge on whether we should behave rationally. I can’t imagine an answer to this question that doesn’t appeal to reasons (which delivers the problem I noted above).
“The "veil of ignorance" may be an attempt to deal with this. It makes each decision-maker ignorant of their own desires so that they cannot use them in making a decision. But they are supposed to be aware of the fact that such preferences exist and she might have them.”
Like I said in my first comment, I want to include in my framework that such desires are known; it's just that nothing is being decided in a biased manner. So, the rational agent, while she considers whether I have reasons to do x, weighs as part of that my desires and the desires of others affected. And when she turns to consider whether you have reasons to do x or y, she does the same for you and those affected by your x-ing or y-ing.
Continued...
“However, agents only act on the desires they have - not the desires that they know to exist. I may know of your aversion to pain, but whether that will motivate me to avoid states of affairs in which you are in pain or cause them depends on whether I have a current aversion to you being in pain or a desire to see you suffer. Without desires of my own on which to base a decision, I am indifferent to your pain. So, a perfectly rational agent ignorant of his own desires would choose nothing.”
The agent in my framework is informed of such things. So, for example, she understands pain and that you will generally want to avoid it. She knows and considers your desires.
So, the reasons for action include your desires. We could flip the example to the reverse and say that you wouldn't be able to act on your desires and beliefs without using reason. They are intertwined somehow. I'm not sure how to properly describe their relationship, but that's why I feel like there is something between desirism and contractarianism.
“However, a more important problem is the unjustified logical leap from what an imaginary agent in an imaginary situation would do to what we should do.”
I think I’ve covered this in my previous comment.
“An objective morality requires premises that are true in the real world, and does not try to draw inferences from what is imaginary (perfectly rational agents behind a veil of ignorance) to what is real.”
I think the premises have a truth value in the real world (and at least some of them are true) in the same way that would counterfactuals have truth value. It does have to refer to a possible world, which isn’t ideal as far as appealing to common sense, but it can still be true in the real world.
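(For what it's worth, one standard way to cash this out is the Lewis-Stalnaker semantics for counterfactuals - my gloss here, not anything my argument depends on. Roughly:

    "If A were the case, C would be the case" is true at the actual world w iff C holds at the A-worlds most similar to w.

So "an impartial observer would require x" can be true at our world even though no impartial observer exists here.)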
So, what do you think given some of these clarifications?
I think another problem with the contract theory is that the rational person behind the veil would need a theory of ethics to use in deciding upon the best course of action.
It is an interesting thought experiment about removing bias from the decision-making process, but it fails as a theory of ethics because the rational person would need an ethical code in order to judge which choice was better (more ethical) than any other choice.
Without an ethical code constraining the rational person, they could evaluate how much pleasure could be gained from owning slaves and work out at what percentage of the population the likelihood of being one of those slaves is small enough that it is in their best interest to allow slavery. Rational self-interest could find the point where it is worth the risk to allow slavery.
Without the rational person applying some sort of ethical theory from behind the veil, it is not guaranteed that an ethical state of affairs would develop.
Unbiased and the ethical "good" are not the same thing.
I would say that being unbiased is useful for being ethical, but it is not enough - you need more. Necessary but not sufficient.
Kristopher,
How would you form the ethical theory without using reason? Isn't employing some sort of reasoning temporally prior to having any coherent system of ethics?
Peter Singer makes a case for a rational basis of ethics in his book The Expanding Circle. Essentially, the question is one of justification: can I justify this action to those it would affect in a way that they would accept? For example, if my tribe splits food evenly, but I want to take more than an even share, I need to justify myself to the tribe in order to avoid rousing their anger. I may, for example, say, "I'm a warrior, and my capacity to defend you will be greater if I eat more than you do." As society grows more complicated and our actions affect more people, though, we will need to have progressively better reasons.
Singer expands this reasoning into preference utilitarianism: the best way to make sure my actions are justifiable to society is to ensure that my actions satisfy the preferences of others to the best extent possible. Using this, you can arrive at something rather like Rawls' theory of justice, but without needing to posit a veil of ignorance or perfectly rational beings.
Hi Guys
This is both a fascinating debate and a very rational one to boot. Keep it up!
Here are my 2 cents worth, noting, in case you do not know this Mike, that I am an advocate of desirism, regarding it, to date, as the best working hypothesis to explain morality. This is, of course, a provisional and defeasible position, revisable should I find something rationally and evidentially better.
You raise the issue of counterfactuals and, yes, it is true that any model of morality has to be a model that can handle counterfactuals, as without those it would not be possible to criticize moral systems nor make predictions. This is, and indeed must be, a common feature of any and all moral models and, so, it is not a basis on which to defend any particular one. That is, this is not the issue w.r.t. Rawls' Contractarianism or your defense of something similar.
The two features that Alonzo, eloquently and concisely as usual, delineates are (a) the misrepresentation of what a rational person would do going from the Original Position to reality - they would, if they are rational, update their beliefs in the light of this new information - which renders the insights (and reasons) from the hypothetical Original Position dubious, if not irrelevant, and (b) the artificiality of the reasons determined in the Original Position.
Now there is a shared intuition behind both the Veil of Ignorance/Original Position and Alonzo's fundamental question in the desirist framework of what "people generally have reasons to promote and inhibit... etc." and this is, in my words, to identify the common and near-universal shared grounds that are independent of any specific cultural biases and distortions, and of the pragmatic rationality that employs informal fallacies such as tradition, antiquity, popularity, might, and ideology to defend those (and deny that they are) biases and distortions. Such a process leads to the objective grounds for moral analysis.
Further this intuition is not just an intuition but is the basis of what anyone could reasonably mean when they talk about morality.
Now the difference is that Alonzo's approach, again in my words, strips away these biases and distortions - as I sometimes put it requires transcending one's time and place - and shows what is left, that already did and still does actually exist. (Hume said it first btw but I cannot remember any good quotes to that effect).
There is still a process of Reflective Equilibrium and/or of provisional and defeasible claims - that is, claims open to criticism and updating on the basis of having failed to successfully identify and remove some biases and distortions in such an analysis - so it is not Reflective Equilibrium that distinguishes these two (and other) approaches. It is what the Reflective Equilibrium is performed upon that is the issue.
And there is the difference: the Original Position creates an artificial hypothetical scenario to obtain reasons that both may well fail in reality and also provide no motivation for anyone to follow, versus the "people generally" analysis, which uncovers existing reasons in reality that are already motivating for many (for sure not all) once those distortions have been removed.
I think that there is a social contract, but I am a little more skeptical of the attempt to draw moral codes and norms from it.
For instance, I accept that I have an obligation to pay taxes for the benefits that I incur from being a citizen of my community (clean air, clean water, peace, educated neighbors, and so on), all of which cost money and joint effort to maintain.
Desires obviously give the social contract a moral foundation, indicating which objectives and norms would be correct to promote, standardizing correct uses of force, and so on. But there is still a tacit set of agreements that we enter into with our neighbors in order to foster cooperation and social cohesion, absent which there would be anarchy and unregulated force (e.g. Somalia), which would be bad for all persons involved.
faithlessgod,
Consider this. In Morality in the Real World, Alonzo and Luke discuss Alph and Betty as an introductory example to desirism. We understood that Alph wanted to gather stones and Betty wanted to scatter them. From what type of position were we doing this reasoning and drawing conclusions? Was it from either Alph's or Betty's? No, it was from the type of position I propose - an impartial, fully informed, and perfectly rational position.
I am not necessarily rejecting the importance of desires or beliefs, but we would never make sense of these things or what we ought to do with them if we didn't have a foundation of reason.
So, two options seem available: either reason is a more basic foundation or it is a silent implied partner to desires and beliefs. I am inclined to think it is more basic and is the ultimate foundation we will be able to find. For one thing, I'm not sure you can even have beliefs without some method of reasoning. Desires might require the same thing, but some carnal types of desires make me less certain.
I should refer you to where I have written my objections to act-utilitarian theories, such as preference satisfaction act utilitarianism (preference utilitarianism).
(See Desire utilitarianism vs. act utilitarianism.)
Mike Gage
I don’t know that I would call the [perfectly rational agent] a placeholder as much as a useful tool for discovering the truths.
Perhaps it could be - but I tend to think it would be as useful as the perfectly rational mathematician. That is - not at all. A lot of math questions get answered without anybody thinking in terms of the perfectly rational mathematician. Mathematicians give us the answers that, to the best of our ability to determine, a perfectly rational mathematician would give, without ever thinking in those terms.
In the case of ethics, I think the perfectly rational agent is a distraction. A lot of moral philosophers - Rawls included - stop at the hypothetical agent and don't go any further. They assert that the perfectly rational agent would give us certain conclusions, but do not give us any reason for those conclusions that can stand independent of the rational agent.
It is best to set the hypothetical agent aside and look at actual reasons in the actual world.
The trouble comes in when we have the next question: “Why should we act based on those reasons?” Or you might ask, “What reasons are there for those reasons?” At that point, we seem to reach the dead end where we cannot give any further justification for rationality than rationality itself. I think every theory reaches that point, so I don’t see how that counts against the theory. I would assume desirism also reaches such a basic stopping point. Is this correct?
Desirism does not reach that point.
You have described the problem with foundational ethics. It suggests that there must be some foundational moral principles that are self-evidently true and that cannot, themselves, be justified. You are claiming that all moral theories must be foundational and have at their base one or more of these self-justifying principles.
Actually, desirism is a coherentist position. A good desire is a desire that tends to fulfill other desires. Those desires are evaluated according to how well they fulfill other desires, and so on, and so on.
In this case, it is much like Rawls' "reflective equilibrium". Rawls' theory itself is coherentist - rather than foundational. Only, the problem with Rawls is that some of the elements in his web are fiction. They exist only in the realm of "let's pretend" and not in the real world.
Desirism has no place for "let's pretend" principles. There are desires, states of affairs, relationships between desires and states of affairs (whether there is a desire that P, a state of affairs S, and whether P is true in S), and facts about the degree to which desires can be molded through social forces such as praise and condemnation. There is no room for propositions that simply are not true, and no need for self-evident truths or self-justifying moral claims.
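If it helps to see the structure, the relations just listed can be put in a toy sketch - my own illustration, in Python, with invented names; nothing in desirism depends on it:

    from dataclasses import dataclass
    from typing import List

    @dataclass(frozen=True)
    class StateOfAffairs:
        facts: frozenset  # the propositions true in this state of affairs

    @dataclass(frozen=True)
    class Desire:
        proposition: str  # the P in "a desire that P"

        def fulfilled_in(self, s: StateOfAffairs) -> bool:
            # A desire that P is fulfilled in a state of affairs S iff P is true in S.
            return self.proposition in s.facts

    def desire_score(d: Desire, all_desires: List[Desire],
                     resulting_states: List[StateOfAffairs]) -> int:
        # Evaluate a desire by how many other desires are fulfilled in the
        # states of affairs that acting on it tends to bring about.
        return sum(1 for s in resulting_states
                   for other in all_desires
                   if other is not d and other.fulfilled_in(s))

Every term here - desires, states of affairs, and the fulfillment relation between them - refers to something real; there is no "let's pretend" element for the evaluation to depend on.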
I'm not familiar with the literature on coherentism, but I know roughly what it defends.
It seems possible that it is just pretending it does not require a foundation. I can't imagine that there is not a point where we could ask a why question. Why, for example, should we favor coherentism? Can we answer such a question in a non-circular way? It seems like it creates a de facto foundation, but calls it something else. Even if that isn't the question that gets us there, I think if we dig deep enough we'll find it.
Perhaps someone has come up with a good answer to this criticism, but I'm not familiar with it.
Mike
ReplyDelete"Consider this. In Morality in the Real World, Alonzo and Luke discuss Alph and Betty as an introductory example to desirism. We understood that Alph wanted to gather stones and Betty wanted to scatter them. From what type of position were we doing this reasoning and drawing conclusions? Was it from either Alph's or Betty's? No, it was from the type of position I propose - an impartial, fully informed, and perfectly rational position."
This point appears to be a non sequitur in response to my contrast between the "Original Position" and "People Generally" and also redundant and irrelevant.
It is redundant because it is taken as a given, or is implicit, in rational-empirical debate that there would be a convergence of conclusions if participants are aspiring to epistemic objectivity, whether the topic is physics, law, history, paleontology, biology, etc., and the same goes here. The burden is on you to show that there is some difference that makes this explication more than just that. It serves no purpose to make this explicit.
It is irrelevant because it fails to address the distinction that Alonzo and I have made over these different positions.
There is also an error if, reading this point charitably, I take it to apply not to how we know what to conclude but as somehow part of the subject matter at hand. Two or more agents might be fully informed, etc., and still come to different conclusions based on differing desires. That is, what you are trying to impose is a necessary but not a sufficient condition.
This is highlighted when you say:
"I am not necessarily rejecting the importance of desires or beliefs, but we would never make sense of these things or what we ought to do with them if we didn't have a foundation of reason."
Where is the evidence to support this? Cognitive and social psychology coupled with logic, especially the "Biases and Heuristics Program", has a wealth of evidence that contradicts this.
Your argument here continues to fail to address the distinctions between our positions and the problems of the Original Position.
"So, two options seem available: Either reason is a more basic foundation or it is a silent implied partner to desires and beliefs. I am inclined to think it is more basic and is the most ultimate foundation we will be able to find. For one thing, I'm not sure you can even have beliefs without some method of reasoning. Desires might require the same thing, but some carnal types of desires make me less certain."
You seem to have switched to a Griffin, Brandt, or Railton style Informed or Rational Desires or Social Rationality type of ethical naturalism, which is a quite different debate. (More interesting IMV, but still criticisable based on the psychological evidence.)
So nothing in your response addresses the two previous issues: (a) the artificiality and hence irrelevance of reasons (in terms of our motivation) in the Original Position, and (b) a properly rational agent updating their reasons in the light of new information provided by the actual world, leaving the Original Position behind and rendering conclusions made there irrelevant.
faithlessgod,
Most of what you said seems to completely misunderstand my position. Perhaps I'm guilty of not being clear enough. I'll try to be completely explicit here.
Forget Rawls because I'm not doing a find and replace of the Theory of Justice and replacing it with morality. I'm just borrowing some of his general ideas.
So, let's address your two issues.
(a) the artificiality and hence irrelevance of reasons (in terms of our motivation) in the Original Position, and (b) a properly rational agent updating their reasons in the light of new information provided by the actual world, leaving the Original Position behind and rendering conclusions made there irrelevant.
The rational agent in my example does not enter the real world. There is no such shift. If you're bothered by what you would know upon entering the world, then just add that to what the agent knows. Anything you think the agent needs to know that hasn't been considered yet can be added. That's why I've said this agent would need to be perfectly rational and know all of the relevant facts.
There it is in its simplest form - whatever you think needs to be added to the description of the agent, go ahead and add it.
Also, if you want to support your claim about psychology, please give one example of intentional action that involved no reasoning. Any example should either include reasoning (it could be bad reasoning, but you have to at least think there are connections in order to draw a conclusion) or it will not be intentional action.
Mike
W.r.t. a perfectly rational agent entering the real world, it is not a question of them just updating what they know; it is a question of the relevance and motivational force of what was determined in the Original Position. You have not addressed that at all.
W.r.t. the irrational behaviour of real human beings, it is not that they do not use reason to come to their conclusions but that their reasoning is biased and distorted - just google for this; it is quite uncontroversial.
Finally, your response has failed to address the key concerns that an approach such as yours needs to answer. In particular you appear, now, to be taking the approach of rational desires, not contractarianism, and you have responded with nothing about that question at all. Nor, by extension, have you addressed the issue of epistemic objectivity, which renders your posit of an explicit perfectly rational agent either trivial and irrelevant or requires you to substantiate this posit, which you, again, have not yet done.
Maybe the problem is that I have not read the original post that Alonzo was addressing. I will look to see if that is the case. However, I cannot see this absolving you from responding to the questions I have already presented.
Mike
Ok, I have (re)read your post and this does not alter any of the issues that I have raised with you. For example, a key piece was:
"For example, to say we have reasons to prevent and condemn action x is to say that the following proposition is true: “A perfectly rational being in the original position would have reasons to prevent and condemn action x.” What we are really grounding our morality in is rationality itself and we can point to these propositional truths in order to be describing an objectively true fact of the matter."
All of my (and Alonzo's) questions address this, and so, no, it does not appear that I have misunderstood your position.
faithlessgod,
I'm not saying that everything in my original post is going to make the cut or describe what I want to say perfectly. That's why I sought criticism. I feel my description in the post was too contractarian because it is familiar to me, but I think my position is actually a bit different from that the more I think about it.
I was going to respond to your posts, but I really don't think it will be helpful at this point. You're criticizing a straw man. I recognize I may be partially at fault for that, and I apologize for causing confusion.
I'll just leave this as my final comment/question. Unless there is some great response to what I said about coherentism, desirism needs to be based on something. The things that are assumed to be important for desirism, like using sound reasoning and having the correct facts, are actually accounted for by a proposal like mine.
Mike
I think we are getting close to the core of our disagreement and/or misunderstanding.
"Unless there is some great response to what I said about coherentism, then desirism needs to be based on something. The things that are assumed to be important for desirism, like using sound reasoning and having the correct facts, are actually accounted for by a proposal like mine."
First, what you are proposing also requires a form of coherentism; the issue at hand is over what.
Principally it appears that you are equivocating over what is meant by "rational". There are two distinct concepts you are failing to distinguish and this is the point that I have been making.
It is taken as a given in rational and empirical debate and analysis to operate from and aspire to epistemic objectivity; that is, anyone transcending their preferences, prejudices, and biases would converge to the same conclusions on such a basis. This is so regardless of the subject matter, whether it is history, forensics, physics, or any other, and, here, I regard morality as just another such topic unless an argument can be provided as to why it is an exception, which has not been done to date in this thread.
(Further this also includes conclusions such as that there is insufficient data to have a tentative result or that the data leads to indeterminate results and so on, being, of course, dependent on the data and tools available).
Now you are arguing for a quite different notion of rationality based on an idealised fully informed perfectly rational agent operating under a veil of ignorance and this is a quite different issue making such a concept part of the subject matter at hand.
Like you, I am not going to repeat my criticisms of this, but it does no good to state, as you did in the above quote, that our arguments (mine and Alonzo's) lead to or need such a scheme as yours. It is a non sequitur and, as previously noted, an equivocation, and I have been asking you to provide an argument as to why you think that is so, which you still have failed to provide.
faithlessgod,
I didn't present an argument exactly, but did ask a question earlier about coherentism, as follows:
"I'm not familiar with the literature on coherentism, but I know roughly what it defends.
It seems possible that it is just pretending it does not require a foundation. I can't imagine that there is not a point where we could ask a why question. Why, for example, should we favor coherentism? Can we answer such a question in a non-circular way? It seems like it creates a de facto foundation, but calls it something else. Even if that isn't the question that gets us there, I think if we dig deep enough we'll find it.
Perhaps someone has come up with a good answer to this criticism, but I'm not familiar with it."
I don't think anyone responded. If I'm wrong and it doesn't need a foundation, that's fine, but I'd like to hear why.
I'm not going to respond to the rest, as I said in my previous comment but I am interested in this coherentism issue.
Mike
Coherentism is Alonzo's argument, not mine. I subscribe to the less controversial correspondence theory of truth.
The issue between the two theories of truth is not substantive to the debate here, which is over your still unanswered obfuscation over two distinct conceptions of rationality. Until you can answer this, your conclusion can be dismissed as being based on a non sequitur trading on an equivocation over "rationality".
Coherentism is not a theory of truth - and is fully compatible with a correspondence theory of truth.
It is a theory of knowledge or justification. It denies the existence of fundamental or foundational self-evident truths that need no defense. Instead, it says that justification requires having a place in a large web of propositions, none of which are foundational, and each of which is justified by the quality of the connections it has to other propositions (linked, themselves, to yet still other propositions, and so on).
I have never seen an argument that coherentism ultimately requires some fundamental proposition.
The major criticism of coherentism is that you can build a huge, complex web of propositions that, still, are not anchored to the real world. Coherentists answer this by including propositions acquired through observation as part of the network of propositions to be connected.
I thought you were referring to The Coherence Theory of Truth.
The formulation you just indicated is, indeed, compatible with The Correspondence Theory of Truth, and I quite agree with that formulation as a theory of knowledge, not truth.
With that cleared up, the discussion has moved on to a new post. Let's continue there, depending on Mike's or others' responses.