What Behaviour is About: ascribing intentionality to animals
Animal behaviour can appear strikingly intelligent and purposive at times, but what lies beneath this appearance? What are the reasons behind an animal’s actions? We commonly explain human behaviour by way of intentional mental states, which represent the world as being a certain way. Humans act so as to achieve their goals. Can the same be said of non-human animals? Do they conceive of the world and act purposefully to achieve their goals, or are they mindless stimulus-response automatons? How can we tell? After clarifying what intentionality involves, I will examine experimental methods that allow us to make inferences about animal minds on the basis of their behaviour.
I will take means-ends reasoning to be the cornerstone of practical rationality. Thus the hallmark of agency is recognizing the link between some goal G and the behaviour B apt to achieve it. Behaviour can be goal-achieving even in the absence of such recognition. For example, ants remove decaying corpses from their nest, which achieves the biological goal of good hygiene. But the ants themselves have no conception of this goal. They merely respond to oleic acid, a chemical produced in the decay process. Bits of paper or even live nest-mates are similarly disposed of, if daubed with the chemical. In this case, the ant is not the agent – it does not recognize the purpose of its own behaviour. Nevertheless, there is a genuine sense in which the ant behaviour is about removing dead nest-mates. After all, it’s no coincidence that ants remove corpses from their nests. Evolution selected for a behaviour to fill this functional role. The ‘agent’ here is natural selection, which ‘chose’ the behaviour precisely because it fulfils adaptive goals.
Intentional explanations, like other causal explanations, can be illuminated via their counterfactual implications. Intentional content can be fixed by determining what properties are ‘tracked’ by the organism’s behaviour through relevant possible worlds. Whereas an ‘actual-sequence’ explanation details how a particular event actually came about, a ‘robust-process’ explanation situates events in modal space. It tells us what is common to all the close possible worlds in which the event occurs – explaining the occurrence of this act type rather than token. Taking a broad view, we may note that if oleic acid had not been a reliable corpse-indicator, ants would likely have evolved sensitivity to some other appropriate cue instead. Reliable detection and removal of decaying corpses is what remains constant across the various evolutionary scenarios we might imagine. So it isn’t merely metaphorical to say that this biological goal is the reason for the ant behaviour – it really does offer a robust explanation, at the phylogenetic level. We can thus use it to fix intentional content (of a sort): when ants detect oleic acid, the function of this registration is to detect corpses; that is what it is about. But this ‘biosemantic’ theory offers only a very weak form of intentionality, which doesn’t even require the organism to have a mind. So let’s move on now to considering intentional mental states.
First we must specify existence conditions for mental states. I will adopt Sterelny’s suggestion that we think of cognition in terms of “flow of control”. Mindless organisms exhibit a “straight-through” flow, with particular stimuli provoking invariant responses, unmediated by internal processing or feedback. Mental ascriptions are only warranted when organisms exhibit behavioural plasticity. Applying this rule to an organism’s registration of information, we may conclude that single-cued tracking does not involve representing the world. Genuine possession of a concept X requires that one “abstracts away from the perceptual features that enable one to identify X’s.” Single-cued tracking leaves no room for the identification of error – the organism has no other basis on which to identify the X-property. No matter how the acid-coated live ant might struggle as it is dragged from the nest, the other ants have no possible basis on which to revise their conception of the situation, because their ‘removal behaviour’ is triggered solely by the oleic acid cue. Indeed, given the inflexibility of their behaviour – the tight connection between stimulus and response – we have no reason to attribute mentality here at all.
So, to represent a property X, the organism must be capable of robust, multiple-cue tracking of this property. Otherwise it is incapable of identifying an X as such, distinct from the perceptual cue by which it detects it. This guideline may be fleshed out by appeal to the notion of a unified explanation. If we find that a particular behaviour occurs in a range of situations, we should want to unify these events under a single explanation. To achieve this, we look for something that all the situations have in common. In the ant case we have oleic acid. But at other times there may be “no one sensory kind of stimulus” common to all the situations. In such cases, we may need to look for higher-level explanations. Perhaps warning cries are elicited in situations across which the ‘lowest common denominator’ is that they contain evidence that a predator is nearby. If we can find no simpler way to unify these disparate environments, then this is probably how the cases are united for the animal itself.
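The contrast between single-cued and multiple-cue tracking can be made vivid with a toy simulation. The following sketch is purely illustrative – the cue names, the evidence threshold, and the decision rule are all invented for the example, not drawn from the ant literature. The point is structural: a straight-through, single-cue system is fooled by the acid-daubed live nest-mate, whereas a system that weighs several independent corpse-indicators is not.

```python
# Toy contrast between 'straight-through' single-cued tracking and
# robust, multiple-cue tracking. Cues and threshold are invented.

def single_cue_tracker(cues):
    """Straight-through flow: oleic acid alone triggers removal."""
    return cues["oleic_acid"]

def multi_cue_tracker(cues, threshold=2):
    """Robust tracking: several independent corpse-indicators are
    weighed together; no single cue is decisive."""
    evidence = sum(1 for present in cues.values() if present)
    return evidence >= threshold

corpse = {"oleic_acid": True, "immobile": True, "no_response_to_touch": True}
daubed_live_ant = {"oleic_acid": True, "immobile": False, "no_response_to_touch": False}

print(single_cue_tracker(daubed_live_ant))  # True: live nest-mate 'removed'
print(multi_cue_tracker(daubed_live_ant))   # False: the error is avoided
print(multi_cue_tracker(corpse))            # True: corpses still detected
```

Only the multi-cue system leaves room for the identification of error, and hence, on the view developed above, only it is a candidate for genuinely representing corpses.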
So far we have concentrated on animals’ discriminatory powers. Robust, multiple-cue tracking gives rise to representations, the content of which can be inferred from what is common to all the eliciting contexts. But this leaves open the crucial question of how the registered information contributes to the animal’s behaviour.
Genuinely purposive behaviour requires separation of the ‘indicative’ and ‘imperative’ functions of the intentional system. This binary distinction lies at the core of folk psychology. Beliefs represent how the world is; desires, how we want it to be. Either alone is impotent: beliefs are motivationally inert; desires, blind. Our actions arise through utilizing information in pursuit of our goals. But many animals have no such distinction, instead showing a strict connection between information-registration and action. The ant does not distinguish between the indicative ‘here is a corpse’ and the imperative ‘get rid of it!’. This leaves no room for practical reasoning – which, recall, we are taking to be the hallmark of agency. An animal cannot think about how to achieve its goals unless it first represents these goals as distinct from the means by which it may achieve them. Fully-fledged beliefs are ‘decoupled’ from particular actions, providing a general-purpose “fuel for success” that can potentially influence a wide range of different behaviours.
We have now developed our theory to the point where it can be put into practice. Two upshots from the above discussion bear highlighting. First, beliefs should be sensitive to a range of evidence, unlike brittle single-cued tracking systems. Thus, in addition to the ‘unified explanation’ strategy described earlier, we might also conduct experiments based on the idea that intelligent animals might learn to give less weight to cues they’ve recently found to be unreliable. If, in robustly tracking some property X, an animal can pick up on some of its own discrimination errors and thereby learn to better discriminate Xs, then we can be fairly confident in ascribing the concept of X to the animal. Suppose the ants had become suspicious of oleic acid after experimenters started splashing it over their live nest-mates. That is, the next time the ants came across a corpse, they instead checked for other signs of death before engaging in ‘removal behaviour’. Such flexibility would warrant attributing to the ants the concept of death.
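The experimental idea just described – downgrading cues recently found unreliable – can be sketched as a simple error-driven learning rule. This is a hypothetical model, not a claim about ant psychology: the starting weights, learning rate, and update rule are invented for illustration. After experimenters repeatedly daub live nest-mates with oleic acid, the weight attached to that cue decays, while untested cues retain their standing.

```python
# Toy sketch of the 'discount unreliable cues' experiment: a
# delta-rule-style update lowers a cue's weight whenever it fires on
# a non-corpse. All numbers are invented for illustration.

weights = {"oleic_acid": 1.0, "immobile": 1.0}
LEARNING_RATE = 0.5

def update(weights, cues_present, really_a_corpse):
    """Nudge each active cue's weight toward 1 if the 'corpse' verdict
    was correct, and toward 0 if it was mistaken."""
    target = 1.0 if really_a_corpse else 0.0
    for cue in cues_present:
        weights[cue] += LEARNING_RATE * (target - weights[cue])

# Four trials in which oleic acid is splashed on live nest-mates:
for _ in range(4):
    update(weights, ["oleic_acid"], really_a_corpse=False)

print(weights["oleic_acid"])  # decays toward 0 (here: 0.0625)
print(weights["immobile"])    # the untested cue keeps its weight (1.0)
```

An animal whose discriminations changed in roughly this way would be revising its basis for identifying corpses, which is just the kind of flexibility that warrants concept ascription on the account above.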
The second point to note is that beliefs should be utilized in pursuit of a wide range of goals, rather than being tightly coupled to some particular action only. So we might test whether animals can make use of previously acquired information in novel situations. Thus Allen and Hauser suggest that, if an animal A has previously recognized animal B as dead, then when A is later presented with a stimulus that would normally elicit a response directed at B, A should modify its response in light of its knowledge. More generally, if an animal’s behaviour is sufficiently flexible and responsive to past experience, this constitutes evidence of representation.
The above provides a groundwork for ascribing beliefs. But desires too have their interpretative difficulties. Does the monkey give its warning call because it wants to inform the other monkeys of the leopard, or because it wants them to run into the trees, or from sheer instinct? Again, behavioural plasticity is key – first to establishing the voluntary status of a behaviour, and then its intentional content. We can determine the content by setting up situations in which the various hypothesized goals are in conflict, and seeing which one the animal most reliably produces. For example, if we found that a monkey failed to give a warning call when it noticed that the only endangered animal was a personal rival, then this would help illuminate the animal’s motivation in producing warning calls. It is also worth noting that the earlier suggestions regarding concept-fixation can similarly inform our goal-ascriptions, since one cannot represent a goal about X as such unless one has the concept of X.
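The 'competing goals' test can likewise be given a schematic form: rival goal-ascriptions agree in ordinary cases, so we score them only on scenarios where their predictions diverge. The scenarios and hypotheses below are invented stand-ins for the monkey example, not real data.

```python
# Toy sketch of testing rival goal-ascriptions by putting them into
# competition across scenarios where they predict differently.
# Hypotheses and scenarios are invented for illustration.

def wants_to_warn_kin(scenario):
    """Hypothesis 1: calls are produced to protect non-rivals."""
    return scenario["predator"] and scenario["bystander"] != "rival"

def sheer_reflex(scenario):
    """Hypothesis 2: calls are an invariant response to predators."""
    return scenario["predator"]

scenarios = [
    {"predator": True,  "bystander": "kin",   "observed_call": True},
    {"predator": True,  "bystander": "rival", "observed_call": False},
    {"predator": False, "bystander": "kin",   "observed_call": False},
]

def score(hypothesis):
    """Count the scenarios in which the hypothesis predicts the
    observed behaviour."""
    return sum(hypothesis(s) == s["observed_call"] for s in scenarios)

print(score(wants_to_warn_kin), score(sheer_reflex))  # 3 2
```

The second scenario is the crucial one: it is the case where the rival ascriptions come apart, and so it is the one that carries the evidential weight.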
As a general rule, cognition is reflected in behavioural plasticity. We can test this by varying the stimuli presented to an animal, and observing its subsequent responses. If the range of behaviour can be simply accounted for in terms of responses to sensory kinds of stimuli, then higher-level intentional explanations are unnecessary. On the other hand, if the animal engages in robust tracking which (1) can most simply be unified under an intentional explanation; and (2) is ‘decoupled’ and used to inform a wide range of behaviours; then we have grounds to consider it an intentional agent. Finally, we can test rival ascriptions by putting them into competition, and seeing which goal the animal seeks to realize. Thus a plausible methodology has been outlined for what might otherwise have seemed an impossible task: namely, making warranted inferences about animal minds based solely on their behaviour.
Allen, C. (1999) ‘Animal Concepts Revisited: The Use of Self Monitoring as an Empirical Approach’ http://grimpeur.tamu.edu/~colin/Papers/erk.html
Allen, C. & Hauser, M. (1991) ‘Concept attribution in non-human animals’ Philosophy of Science 58.
Bennett, J. (1983) ‘Cognitive ethology: Theory or poetry?’ Behavioral and Brain Sciences 6.
Bennett, J. (1991) ‘How to Read Minds in Behaviour’ in A. Whiten (ed.), Natural Theories of Mind. Oxford: B. Blackwell.
Dennett, D. (1987) The Intentional Stance. Cambridge, Mass.: MIT Press.
Heyes, C. & Dickinson, A. (1990) ‘The intentionality of animal action’ Mind & Language 5.
Millikan, R. (1989) ‘Biosemantics’ Journal of Philosophy 86.
Roitblat, H. (1983) ‘Intentions and adaptations’ Behavioral and Brain Sciences 6.
Sterelny, K. (2001) ‘Basic Minds’, in The Evolution of Agency and Other Essays. Cambridge: Cambridge University Press.
Sterelny, K. (2003) Thought in a Hostile World. Malden, MA: Blackwell.
Terrace, H. (1983) ‘Nonhuman intentional systems’ Behavioral and Brain Sciences 6.
 Throughout the essay I use the term ‘intentionality’ in its technical sense of ‘aboutness’. Intentional states need not be voluntary – comprehending this sentence will cause you to think about elephants, whether you want to or not. I will use the term ‘purposive’ in place of the common sense of ‘intentional’.
 Allen & Hauser, p.229.
 Dennett, pp.259, 299.
 Heyes & Dickinson, p.88.
 Sterelny, ‘Basic Minds’, p.207.
 The scope of the counterfactual is crucial here. Of course, on the individual level (and given their actual evolutionary history) ant ‘removal behaviour’ merely tracks oleic acid, not corpses. That is why we cannot attribute such rich intentional content to the individual ant, as we will see in the next paragraph.
 Cf. Millikan, p.290. I’m speaking rather loosely here. It might be more appropriate to say that what we are fixing here is the intentional content of the ant’s genetic information. But this provides an intentional explanation of the resulting behaviour in much the same way as mental explanations do, if on a somewhat different scale – cf. note 6 above.
 Sterelny, ‘Basic Minds’, p.206. I don’t mean to take this as a strict analysis of mentality. All we need for present purposes is a rough indication.
 Ibid., pp.210-211. Note that ‘single-cued tracking’ is when an organism detects an object or property (e.g. corpses) via a single cue only (e.g. oleic acid).
 Allen & Hauser, p.227.
 Bennett, ‘How to Read Minds in Behaviour’, p.102.
 Ibid. Instrumentalists about intentional explanation need not commit themselves to such a strong claim, however. In ‘Cognitive ethology: Theory or poetry?’, p.356, Bennett suggests that “if a teleological generalization does work for us – giving us classifications, comparisons, contrasts, patterns of prediction that mechanism does not easily provide – then that justifies us in employing it.”
 Millikan, pp.295-296.
 Sterelny, Thought in a Hostile World, pp.29-31.
 Allen & Hauser, p.232.
 Allen, ‘Animal Concepts Revisited’. But see note 17 below.
 Or some related concept, at least. Fixing the details is not so easy – perhaps they are instead looking for decay, or hygiene risks, or immobile ants, etc. Nevertheless, the problem should be empirically tractable using the methodologies previously outlined. We can rule out various proposals for X by noting that the organism must be capable of robustly discriminating Xs from non-Xs, and ideally demonstrate some capacity for learning to improve their discriminatory skills. Experimenting with enough diverse stimuli should (eventually) indicate what environmental properties the organism’s perceptual mechanisms are latching on to (i.e. ‘tracking’).
 Allen & Hauser, p. 232. They go on to apply this template to the specific case of infant distress calls in vervet monkeys, see p.233.
 Cf. Terrace, p.379: “To show that an organism wants to do X, it is necessary to show that there are comparable circumstances in which it elects not to do X.” See also Roitblat, p.375, who distinguishes “positive optionality” – that is, performing an action in the absence of the normal eliciting stimuli – from “negative optionality” – neglecting to perform the action in face of the normal stimuli.
 Terrace, p.379.