Monday, August 22, 2005

Implicit beliefs

Sometimes we gain information (of a sort) without having conscious access to it. This is especially common with procedural knowledge, or "know how". You know how to tie your shoelaces, or ride a bike, but it's very difficult to explicate in words how you do it. Further, psychologists have shown that our conscious beliefs about how we do it are often downright mistaken! Still, there is a sense in which the knowledge of how to do it must be stored inside us somehow -- since we are able to act successfully. What is the intentional status of this 'tacit' or 'implicit' knowledge? Is it representational? Does it exhibit 'aboutness'?

There seems a sense in which my implicit knowledge of how to ride a bike is clearly about riding a bike. That's how I obtained the skill, and also what I use it for. The knowledge has a particular function, which we might consider to be a kind of intentionality.

But perhaps it's a category error to hold skills to be repositories of information. They're not genuine representations -- "know how" is distinct from "knowledge that". Procedural knowledge lacks the 'aspectuality' of intentional states. That is, the skill doesn't involve seeing something as a bike. The skill is purely extensional -- it will apply to anything that has the relevant (bike-like) properties, no matter how I conceive of it. Moreover, the skill might generalize, enabling me to balance better in a wide range of situations, let us suppose. Would that mean that the skill is really 'about' balancing?

These problems suggest that procedural knowledge is not properly intentional. But this seems to clash with instrumentalist conceptions of intentionality. Instrumentalists conceive of mental states as behavioural dispositions: I believe that p if I generally act in ways that would achieve my goals if p were true. But we are disposed to act on our procedural knowledge. To say that I know how to ride a bike is just to say that, when attempting to ride a bike, I will tend to behave in ways that are apt to be successful (e.g. shifting my weight appropriately to retain balance, and so forth).

Suppose the following two facts are true:
1) Cyclists retain balance by shifting their weight.
2) When asked how they retain balance, most cyclists reply, mistakenly, that they twist the handlebars.

Now, what should we say cyclists know about retaining balance? Do they know how to retain balance, or not? In the implicit, behaviour-dispositional sense, sure they do. But they lack explicit awareness of this knowledge/skill, and indeed have false beliefs about it on the conscious level. It seems odd to say that the cyclists believe that shifting their weight will retain balance, since they avowedly deny having any such belief. But I guess one could always respond that we don't have perfect knowledge of our own minds. The cyclists have a false belief about what they believe. Their false belief is a second-order one (it's about another belief); they don't actually have a false belief about how to retain balance.

I guess that makes sense. Though I'm still worried about the apparent lack of aspectuality, as mentioned above. That seems a problem for dispositionalist accounts of belief.

On the other hand, acquired automatic skills do seem to exhibit a sort of purposive intelligence. Does the wicketkeeper intend to make his reflex catch? (Note that common sense 'intentions' involve philosophical 'intentionality', but not necessarily vice versa.) I'm not sure if this question has any clear answer. He's surely responsible for it, and warrants praise, etc. But perhaps he deserves this because of his earlier (voluntary and purposeful) training, through which he developed good reflexes, rather than for the reflex itself. It's not as if he engaged in practical reasoning about how to catch the ball. He just did it. So his present mind lacks intentionality here, because it wasn't involved in connecting the behaviour to its goal. The intentionality is instead found in the mind of his past-self, who undertook the training.

We can now distinguish three levels of intentionality in goal-achieving behaviour:

1) The behaviour is purely instinctive. The animal is unaware of the goal (possibly set by natural selection) that its behaviour is directed towards.

2) The behaviour is automatic/reflexive ('instinctive' only in the sense that its proximal cause is thoughtless), but was acquired through past learning which was itself goal-directed.

3) The behaviour is goal-directed: the animal has an internal representation of the goal G, and voluntarily (i.e. non-automatically) produces behaviour B as a means to achieve that goal.


  1. I think the problem you pose is a false one, and arises because of Philosophy's 2500-year-old emphasis on belief, rather than on action. It has taken AI -- a practical, in-this-world discipline -- to shift this emphasis.

    If we assume we have a representation of actions, then there is no difficulty in understanding "know-how" as a plan (ie, a sequence of actions), possibly conditional (eg, if the bike moves to one side or the other, execute the action of moving my weight to the other side). This is conceptually no different to representing beliefs as propositions and proofs as chains of reasoning.
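    To make the idea concrete, here is a minimal sketch (in Python) of a conditional plan in the commenter's sense: a rule that maps a perceived condition to an action. The names and the tilt-based trigger are purely illustrative assumptions, not anything from the original comment.

```python
# A hypothetical conditional plan for the bike-balancing example:
# if the bike tilts one way, execute the action of shifting weight
# the other way. The "know-how" is encoded as a condition-action rule.

def balance_plan(tilt):
    """Return the next action given the bike's tilt.

    tilt > 0 means the bike leans right; tilt < 0 means it leans left.
    """
    if tilt > 0:
        return "shift weight left"
    elif tilt < 0:
        return "shift weight right"
    return "hold steady"

# Executing the plan over a sequence of perceived states yields
# a sequence of actions -- the "plan" in the AI sense.
actions = [balance_plan(t) for t in [1, -2, 0]]
```

    Nothing here requires that the agent be conscious of the rule, or able to articulate it; the rule need only be stored and executed, which is the commenter's point.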

    Of course, we humans may or may not hold such a representation of actions and plans in our heads. Even if we do, we may not be aware that we do. Even if we do and we are aware that we do, we may or may not be able to articulate them to others. But there is no difficulty in reconstructing our actions with such a scheme (action-representations + plans).

    This is the basis of large parts of AI.

  2. Yeah, that could be a more helpful way of looking at it. I'll need to give it some thought. Thanks.

  3. It seems your three choices are a rather poor treatment of the unconscious. (Not surprising, since it is a bias that afflicts a lot of philosophy.) The only "unconscious" knowledge you allow is instinctive and habitual. But why not an unconscious that functions in a fashion rather akin to how we perceive our conscious mind working?

  4. So would that count as goal-directed, still? I suppose if it were based on internal representations and such. Is it automatic? You say it isn't just habitual, so maybe it's flexible enough for us to call it 'voluntary' in a sense (even if it isn't our conscious self that does the choosing). In that case, it would fall under my third option: full-blown intentionality.

    If you'd prefer to create a fourth level, I'd be interested to hear the details of how you would specify it.

