One of my objections to the idea that we have some sort of ‘motivational belief’ is the difficulty in squaring such an entity with our evolutionary history.
Let us assume that an entity in nature evolved a disposition to ‘do what one (believed that) one ought to do’. How would such a trait affect our evolutionary history?
Well, let’s compare two creatures: one with a belief that X1 ought to be done, and another with a belief that X2 ought to be done. Of these two creatures, the one who is going to have the most offspring is the one whose belief that X(n) ought to be done leads to the greatest genetic replication. Nature is not going to favor the creature that can most accurately identify whether X(n) really ought to be done, unless there is some necessary connection between ‘X(n) ought to be done’ and ‘A disposition to do X(n) will result in more genetic replication’.
This is what we see with desires. We did not evolve dispositions to desire that which is ‘intrinsically’ good in any sense of the word. We evolved dispositions to desire those things that were more likely to keep our biological ancestors alive long enough to create viable offspring. Those ancestors who did not have these desires . . . well, they are not our ancestors.
So, we have a desire for sex. Note: we do not have a desire for reproduction. Some people might have this desire but, for the most part, reproduction is an unintended and often unwanted side effect of sex. We want sex even when we do not want reproduction; thus, we look for ways of having sex while avoiding reproduction.
We have a desire to eat – and we have a taste for those foods that kept our ancestors alive long enough to raise their children to the point where those children could have children of their own. Thus, we have a preference for high-calorie foods – fats, sugars. We do not eat so that we can stay alive. We stay alive so that we can eat.
If we had motivating beliefs, then evolution would have picked out the objects of those mental states using the same forces it used to pick out the objects of desires – according to whether pursuing that object promoted or inhibited evolutionary fitness. In other words, the objects of our motivating beliefs would be substantially the same type of things as the objects of our desires. Neither set of objects warrants being called ‘inherently valuable’ or ‘worthwhile’ more than the other.
If there are ‘intrinsic goods’ or ‘inherent value’ or ‘worthwhileness’ in the real world, evolution would have thwarted our ability to perceive them (or to perceive them accurately) unless they happened to coincide with what promoted our genetic replication. If this is the case, we have no need for a concept of ‘that which has inherent value’. ‘That which, when desired, tended to promote the genetic fitness of those who desired it’ is good enough for all real-world purposes.
For all we know, there might be an inherent goodness in killing and eating one’s own children. For all we know, some of our ancestors developed a faculty for perceiving this value and responding appropriately to it. That is to say, they killed and ate all of their children. One thing we do know is that if this ever was the case, those who could perceive this inherent value correctly would not be our ancestors. We have a better chance of being descended from ancestors who had a perverse reaction to the inherent value of eating one’s own children and, as a result of this perversion, shunned the practice and protected their children instead.
So, my question for those who hold that we have somehow evolved the capacity to have motivational beliefs is this: how did this capacity evolve, and how did it remain uncorrupted by evolutionary forces, given the effect that different motivational beliefs would have on genetic fitness?
Beliefs are mental states that aim to fit the mind to the world, such that if a belief does not correspond to the world then the belief should change. We can tell an evolutionary story about the value of matching beliefs to the world. The lion that does not believe there are antelopes where there are antelopes will starve. The antelope that does not believe there are lions where there are lions will become dinner. There are consequences when our beliefs about the world around us are untrue, which suggests that there are forces that tend to make our beliefs increasingly reliable – at least in those areas relevant to our genetic replication.
We also have a story to tell about desires. We have evolutionary stories to tell about the evolution of a desire to have sex, a desire to care for one’s children, a desire to eat high-calorie foods, a desire for an environment that is neither too warm nor too cold, an aversion to pain, and a disposition to feel pain when confronted with states of affairs that threaten our genetic fitness.
We even have an evolutionary story to tell about the malleability of our brains. If you hard-wire a brain for a particular environment, then the being with that brain is going to have a terrible time of it when that environment changes. The being that will survive environmental changes is the being whose brain changes to generate behavior that is appropriate in the new environment. The brain must not only change, but it must make the right types of changes. This means that it must be a brain that determines its shape not through genetic hardwiring, but through interaction with the environment. In other words, it learns how to behave.
It is difficult (impossible) to come up with a similar story for motivational beliefs about worthwhileness.
The first question is whether there is anything in the real world, essential to our survival, that has worthwhileness for these beliefs to fix onto. If the beliefs do not have anything to fix onto, then the fact that they are supposed to be motivational beliefs creates a problem. Without an anchor relevant to survival, evolution will fix these motivational beliefs onto the same types of things that it fixes desires to – things that tend to promote the genetic replication of the agent. Agents may view these things as having ‘worthwhileness’, but the objects are, nonetheless, the same class of objects that desires point to, and for the same reasons (because those motivated to pursue these ends survive and have offspring).
Or, better yet, stick with the standard types of beliefs and desires, and treat ‘worthwhileness’ as a false belief in intrinsic values, blended with the realization that people generally have reason to promote those desires that tend to fulfill other desires, and inhibit desires that tend to thwart other desires.
Now, I fully recognize that evolution is not guided by any sort of foresight. Evolution can promote traits that serve no purpose, and it can even promote harmful traits. This may be the case with motivational beliefs. However, in those types of cases, we are talking about traits that we can observe. We know that they exist, and our job is to explain them. When we talk about motivational beliefs – things for which there is no strong evidence – an argument like the one presented here suggests that a search for them would probably turn up nothing.
If somebody comes to me and says that he saw a ghost, I do not need to come up with a theory that explains what he thinks he saw without making mention of a ghost. All I have to do is point out how utterly bizarre it would be for ghosts to exist. From this I can infer that there is probably a logical explanation for what the agent thinks was a ghost – one that makes no mention of ghosts.
That is, unless I were a character in a Hollywood script or a book. If that were the case, then having a character come up to me claiming that he saw a ghost should be taken as good evidence that he really has seen a ghost – because the author has almost certainly written ghosts into the world where the story takes place. However, since I am not a character in a work of fiction (as far as I can determine), I discount ghost claims without going to the effort of providing an alternative explanation for every ghost claim ever made.
The same is true of my skepticism of ‘motivational belief’ claims.
4 comments:
See my post Explaining Beliefs (replace 'moral' with 'worthwhile', to avoid terminological confusion). Note especially that I don't think our evaluative beliefs are about any special "entities" out in the world that are waiting to be "perceived". (That's just silly.)
A general complaint: you seem to be treating our motivations as given, which completely neglects the fact that people engage in practical reasoning, from which their decision of 'what to do' is an outcome. We are not straightforwardly driven by our antecedent cravings (e.g. for sex, sugar, etc.) - that would be to deny all agency. So this brings us back to my old complaint that you're conflating drives and evaluations.
As I understand it, practical reasoning in BDI theory is a two-stage process (although in reality the stages are most likely intermingled). These are, first, the deliberative generation of an intention - what to do - and then a means-end analysis - the formulation of a plan to fulfill that intention - how to do it.
1. Deliberation: the generation of options (desires), filtering, and then selection of an option. This is equivalent to discovering the more and stronger of one's desires, given the current state of affairs as represented by one's initial beliefs, all in order to generate an intention, or desire-as-end.
2. Means-end analysis: the examination of which beliefs are relevant, and the formulation of a plan - the relevant combination of beliefs and desires-as-means - to fulfil the desire-as-end.
In both phases, reason is applied to checking the soundness, relevance and completeness of beliefs and the validity of conclusions derived.
We differentiate between theoretical and practical reasoning because practical reasoning does more than this. This 'more' is the evaluation of desires qua desires - which is not the same type of process at all.
This brings in the idea of agency. How are the more and stronger of desires determined? Well, determined they are: we are fully caused, there is no contra-causal free will, so the label 'agency' just is the name of this evaluative process. The process is neither rational nor irrational but arational, since the evaluation requires identifying the various weights of the desires, making sure none are missed, and then the strongest one (or set) wins and becomes the intention. Now, weights can be changed due to missing or mistaken beliefs, but that is it. Specifically, one cannot change the weights directly by reasoning; this is what I mean by 'arational' here.
For our purposes here, agency just is performing this evaluative process and then carrying out the resulting intentions. There is nothing more to agency. We are not simply driven to fulfil our cravings, because there are conflicting and incommensurate desires, which is why the deliberative and means-end phases are required prior to intentional action. But that is all there is to agency here.
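To make the two-stage picture concrete, here is a minimal Python sketch of the process described above. It is only an illustration under my own assumptions: the Desire and Belief classes, the weight-based selection rule, and the placeholder plan-building step are not part of any standard BDI implementation.

```python
from dataclasses import dataclass


@dataclass
class Desire:
    """A desired end together with a weight representing its strength."""
    name: str
    weight: float


@dataclass
class Belief:
    """A proposition the agent takes to be true of the current state of affairs."""
    proposition: str


def deliberate(desires, beliefs):
    """Stage 1 (deliberation): generate and filter options, then select the
    strongest desire as the intention (desire-as-end). Beliefs are taken as
    given here; in a fuller model, mistaken or missing beliefs would shift the weights."""
    options = [d for d in desires if d.weight > 0]   # generate / filter options
    return max(options, key=lambda d: d.weight)      # the strongest desire wins (the arational step)


def means_end_analysis(intention, beliefs):
    """Stage 2 (means-end analysis): pick out the beliefs relevant to the
    intention and formulate a (placeholder) plan to fulfil it."""
    relevant = [b for b in beliefs if intention.name in b.proposition]
    return ["act on: " + b.proposition for b in relevant]


# Example: two conflicting desires are resolved by weight, then a plan is built
# from whatever beliefs are relevant to the winning end.
desires = [Desire("eat", 0.8), Desire("sleep", 0.5)]
beliefs = [Belief("there is food to eat in the kitchen")]
intention = deliberate(desires, beliefs)
print(intention.name, means_end_analysis(intention, beliefs))
```

In practice, as noted above, the two stages co-occur and run automatically most of the time; the separation is a formal one.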
I am afraid that there is no 'two-stage process' in BDI theory.
If we were required to deliberate on our desires, then our action would be grounded on beliefs about our desires, rather than the desires themselves.
Desires motivate action regardless of our beliefs about them or even awareness of them. Note that an animal also acts so as to fulfill its desires given its beliefs, without even having the capacity to have beliefs about its desires.
One of the aspects of desire utilitarianism is that I do not take humans to be so far removed from our animal ancestors. Our beliefs and desires can take a greater variety of propositions as objects, but they do not function any differently.
In fact, if you look at it, deliberative actions make up a small subset of our normal actions. When I sit down and write a post, there is some deliberation behind writing it and picking a topic. However, I do not sit with each individual letter, and each individual sentence, and contemplate ends and means. Theories that have us put too much effort into each act can be rejected because each act simply does not take all that much effort.
Anyway, we no more need to be aware of the desires that are motivating our action than we need to be aware of the forces that act on our physical bodies. Those forces will push or pull us whether we are paying attention to them or not. The only thing we deliberate over is the best course of action for getting where we want to go. Even here, the cost of thinking is such that we will deliberate only about the few most important items. For minute-to-minute operations, we let the brain work on autopilot. Micromanaging one's life is grossly inefficient.
The rest of the post seems accurate. I simply wanted to point out that desires motivate our actions, not beliefs-about-desires.
As for the means-ends phase, additional energy is devoted to this only to the degree that the investment is seen as worthwhile. A large percentage of the time, these means-ends deliberations are almost instantaneous. However, when particularly strong desires are seen to be potentially in conflict, the mind devotes more energy to ensuring the fulfillment of the more and stronger desires. It is now time to think things over more carefully.
Even here, it is desires that motivate, not beliefs-about-desires. So, even here, deliberation selects the means of action, but 'feeling' or 'emotion' identifies the various ends.
Hi Alonzo.
What confuses me is that you often make reference to BDI theory, yet most of the time you talk more as if you were using Frege-Russell propositional-attitude philosophical psychology.
The two-stage process I mentioned is a formal distinction and one used to implement BDI agents.
The deliberative phase there means the identification and selection of outcomes (desires) and fixing on an outcome - forming an intention - according to some evaluative algorithm.
Here I am thinking of this as ranking the weights of desires, which we, most of the time, do automatically. We only really need to deliberate - in this sense - when there is a clash of desires or a similar dilemma.
And then one can examine one's existing belief set and deal with inconsistent, incoherent, and incomplete beliefs, motivated by the desire-as-means to resolve the dilemma. This desire-as-means helps simplify the desire-as-end selection process by eliminating erroneous beliefs, minimising mistaken ones, removing redundant ones, and adding absent ones. All of this is solely to obtain more realistic weights with which to rank the more and stronger of our desires. That is, one gets clearer about how one's occurrent desires apply in this state of affairs.
The means-end analysis is the reasoning stage: identifying and selecting relevant beliefs to formulate a plan to fulfil the desire-as-end - the intention.
Of course, the above is a formal view of things; in reality we just do it automatically most of the time, and the two stages co-occur, as I said in my previous comment.
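As a rough illustration of that belief-revision step, here is a small Python sketch under simplified assumptions: revision only removes erroneous beliefs and adds absent ones, and the rule that "reweighs" desires against the revised beliefs is purely hypothetical. The point is only that cleaning up the belief set does not change which desires one has; it only yields more realistic weights for ranking them.

```python
def revise_beliefs(beliefs, erroneous, absent):
    """Eliminate erroneous beliefs, drop duplicates (the set does this for us),
    and add absent beliefs."""
    return (set(beliefs) - set(erroneous)) | set(absent)


def reweigh_desires(desires, beliefs):
    """Adjust each desire's effective weight by whether the (revised) belief set
    supports acting on it in the current state of affairs (toy rule only)."""
    return {
        name: weight if name + " is attainable" in beliefs else weight * 0.5
        for name, weight in desires.items()
    }


# Example: after revision, a false belief is removed and an absent belief is
# added, so the ranking of the agent's occurrent desires changes.
revised = revise_beliefs(
    beliefs={"eat is attainable", "the moon is made of cheese"},
    erroneous={"the moon is made of cheese"},
    absent={"sleep is attainable"},
)
weights = reweigh_desires({"eat": 0.6, "sleep": 0.7}, revised)
intention = max(weights, key=weights.get)   # the strongest desire becomes the intention
print(intention, weights)
```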
I was not arguing for anything other than desires motivating action. And indeed, like you (although for other reasons), I am waiting for a good argument against this position.