Thursday, October 25, 2007

Agency and the Will

Here's a simple model of human behaviour: we have beliefs and desires, and we act so as to fulfill the most and strongest of our desires, given our beliefs. (I think Alonzo Fyfe holds something along these lines.) On this view, call it 'BDI', our intentions (the psychological precursors to action) are wholly determined by our prior states of belief and desire. But if that were so, there would be no need for practical reasoning or deliberation. The mechanism for converting beliefs and desires into intentions might as well be sub-personal, like an automatic reflex. Instead, the phenomena suggest that there is a further element to agency, not reducible to beliefs and desires, which we may call the 'will'. (N.B. It may be reducible to some other aspect of neurological function; I do not claim it is non-physical.)
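
To make the picture concrete, here is a minimal toy sketch of the pure BDI view - the function, the desire strengths, and the likelihoods are all invented for illustration, and nothing here is Fyfe's own formulation. The point is just that, once beliefs and desires are fixed, the 'intention' falls out mechanically, with no further deliberative step:

```python
# A purely illustrative model of the 'BDI' picture described above.
# All names and numbers are hypothetical.

def bdi_intention(beliefs, desires, options):
    """Pick the option believed to best satisfy the strongest desires.

    beliefs: dict mapping (option, outcome) pairs to believed likelihood (0-1)
    desires: dict mapping outcomes to desire strength
    options: list of available actions
    """
    def expected_satisfaction(option):
        return sum(beliefs.get((option, outcome), 0) * strength
                   for outcome, strength in desires.items())
    # The 'intention' is wholly fixed by the prior beliefs and desires:
    return max(options, key=expected_satisfaction)

beliefs = {("eat", "hunger satisfied"): 0.9, ("work", "essay finished"): 0.7}
desires = {"hunger satisfied": 5, "essay finished": 8}
print(bdi_intention(beliefs, desires, ["eat", "work"]))  # -> 'work'
```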

Start with theoretical (epistemic) reasoning. Does anyone think that our conclusions (new beliefs) are wholly determined by our prior states of belief? They're a huge factor, no doubt; what we find plausible will depend on what we already accept as true. But what new conclusions I draw - if any - will also depend on how much attention I pay to various reasons, on how carefully I consider the issue, and so (probably) on what I had for breakfast, among other things. There is room here to identify a causally-embedded 'will', which weighs and assesses the various arguments and reasons that fall under the spotlight of its attention. The way it functions is presumably determined by the totality of my brain states, but not - I think - by my beliefs alone.

Similarly with practical reasoning. Desires may now enter the picture, but they seem to make little essential difference. Much still depends, for example, on which desires (or other practical reasons) we attend to, and on how we weigh and assess them - this is not predetermined by our beliefs and desires alone. There is a distinct psychological faculty in play here.
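
To illustrate, the earlier toy sketch can be extended with an 'attention' weighting - a rough stand-in for the 'will', with the numbers again invented - so that two agents with the very same beliefs and desires may form different intentions:

```python
# Extending the earlier toy model: an 'attention' profile scales how much
# weight each desire actually receives in deliberation. Names and numbers
# are hypothetical and for illustration only.

def deliberated_intention(beliefs, desires, options, attention):
    """Like the plain BDI sketch, but desires count only insofar as they are attended to."""
    def weighed_satisfaction(option):
        return sum(beliefs.get((option, outcome), 0) * strength * attention.get(outcome, 0)
                   for outcome, strength in desires.items())
    return max(options, key=weighed_satisfaction)

beliefs = {("eat", "hunger satisfied"): 0.9, ("work", "essay finished"): 0.7}
desires = {"hunger satisfied": 5, "essay finished": 8}

# Same beliefs and desires, different intentions, depending on what gets attended to:
print(deliberated_intention(beliefs, desires, ["eat", "work"],
                            attention={"hunger satisfied": 1.0, "essay finished": 1.0}))  # -> 'work'
print(deliberated_intention(beliefs, desires, ["eat", "work"],
                            attention={"hunger satisfied": 1.0, "essay finished": 0.2}))  # -> 'eat'
```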

(This also explains how we can come to bad conclusions, failing to do or believe what we have most internal reason for. Sometimes we just overlook things.)

6 comments:

  1. I think we often give ourselves much more credit for our decisions than we actually deserve. Much of our reasoning is post hoc, and our decision-making is driven by impulses that we understand very little, if we are even aware of them at all.

  2. I agree with mathew.

    In fact, one of my friends told me of a study which demonstrated that it is generally not possible to think through most actions in the time we take to make the decisions - i.e. that the reasoning must be post hoc regardless of self-reporting (the issue was that the signal cannot travel around the brain fast enough).

    And that's aside from all the other reasons for coming to that conclusion.

  3. Why does the BDI model make reasoning unnecessary? There's a lot of literature arguing to the contrary: even on the most basic BDI model, reasoning is useful in arranging our desires, in accordance with our beliefs, in such a way that they do not conflict with each other.

  4. Roman - couldn't that be done just as well by automatic, sub-conscious processing?

    Others - sure, but reasoning can have a genuine impact on what we do, at least sometimes.

  5. I think BDI is prone to start this sort of debate (e.g. noting that BDI doesn't fully explain your actions). It's a bit like how the assumption that animals don't feel results in thousands of amazing experiments proving, one species at a time, that a huge number of them do.

  6. Richard - I'm not a huge fan of the Humean approach, but I don't think the BDI theory is quite so easy to dismiss: yes, automatic, sub-conscious processes could sort our desires, etc. But we have some fairly complex desires, particularly desires for the distant future, desires to make strangers happy, etc. Automatic processes are simply not as efficient as rational ones in weighing present desires against distant ones. But one could argue that the rational processes simply widen our capacity to satisfy our desires and broaden the range of our beliefs; they do not show that we don't simply act on our strongest desires in light of the beliefs we have.

    Here's an example: I see a banana in a tree and want it. Rational deliberation allows me to come up with a way of getting to that banana (for example: by building a ladder). This is immediate means-end reasoning. But I might also--because I am rational--think that it would take me too long to get the banana, and really I should hurry to get to class instead, because that satisfies all sorts of other desires. This strikes me, at least, as a BDI theory that shows the usefulness of rationality: rationality is not there to fundamentally change the theory by adding a different source of motivation; rather, it enables us to identify and satisfy our strongest desires more efficiently than automatic processes would (i.e., both to yield satisfaction of deeper desires, and to increase the likelihood of satisfying desires).

