Wednesday, October 05, 2005

Theory of Mind in Animals

Supposing that some nonhuman animals can think, the question then arises: can they think about thought itself? Given that they have minds, do they recognize this fact? It is by no means guaranteed, for it is possible to possess a property unknowingly. Perhaps animals are limited to thinking about the physical world, so that, whilst capable of simple reasoning about their own behaviour and others’, they lack any conception or awareness of the mental states that underlie this behaviour. The question thus becomes whether animals possess any mentalistic concepts, such as belief or desire, which they attribute to themselves or others, as part of a general ‘theory of mind’. We must be wary not to take a positive answer for granted. Mentalistic attributions come so naturally to human observers that we may be predisposed to assume that other animals share this ability. As Byrne puts it: “we, as normal humans, find it hard to imagine not being able to understand another’s mental viewpoint.”[1] We are masters of the intentional stance,[2] automatically interpreting behaviour as goal-directed, and using this to make inferences about the agent’s beliefs and desires. But despite how easy this seems to us, it is actually quite a cognitive feat, and we cannot assume that evolution will have seen fit to equip other animals with this same ability. Instead, we must look to the empirical evidence for answers.

According to Gallup’s famous “mark test”, you can assess whether an animal is self-aware by whether it can learn to recognize itself in a mirror. In particular, if you secretly mark the animal’s forehead, and it sees this mark in the mirror and responds by touching its own forehead, then this suggests that the animal recognizes itself in the mirror. But this does nothing to show that the animal has knowledge of its own mind. After all, all creatures have some means of distinguishing their own body from those of other animals, and we would not be surprised if an animal touched a mark directly visible on its paw.[3] A mirror is simply a perceptual tool of a certain kind, allowing creatures to see things that they otherwise might not. In light of this, why should the implications of mirror self-recognition be so different in kind? Granted, it does demonstrate a form of intelligence, or perceptual resourcefulness, for an animal to learn to make use of a mirror for self-exploration. But the mirror test is largely a red herring so far as research into animal theory of mind is concerned. Passing the mirror test is not sufficient for self-awareness, as explained above. Creatures need not have mastery of mentalistic concepts to recognize themselves. They might identify their body without thereby realizing that the body contains a mind. And neither does the test identify a necessary precondition for self-awareness. In testing visual perceptual flexibility, it disadvantages creatures which favour other sensory modalities such as hearing or olfaction. And an animal might, plausibly, be capable of thinking about its own mental states whilst lacking the perceptual skills required to recognize its body via indirect means. What we really need to test is conceptual, not perceptual, skills.

One promising line of research investigates animals’ self-monitoring of their own mental states. For example, dolphins were trained to distinguish high- from low-pitched tones, and were rewarded for selecting the correct paddle. As one would expect, they often made mistakes when the tone was close to their discrimination threshold. Interestingly, when an ‘escape’ paddle was introduced – a safe fallback option with no reward or penalty – the dolphins would often select this option in response to tones near the threshold.[4] A fairly intuitive interpretation would see the dolphin’s “escape” response as signalling its uncertainty as to which of the two other options – “high” or “low” tone – is correct. This interpretation suggests that dolphins can reflect on their own mental states, assessing their subjective confidence in their own judgments, just as humans can. That is, it suggests that dolphins know their own minds.

This conclusion is not assured, however, for deflationary alternative interpretations of the escape response are possible. The metacognitive explanation is second order, i.e. the dolphin’s response expresses a judgment about its own mental states. But it could just as well be a first-order judgment that is solely directed at the external stimulus. Perhaps the dolphin judges that the tone is both high and low, and this ‘cognitive conflict’ – absent any personal recognition of the conflict – causes it to select the escape option.[5] Thus, while these experiments do at least suggest that dolphins have some functionally metacognitive capabilities, being sensitive to the reliability of their own mental states, this does not establish that they possess mentalistic concepts.[6]
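To see how little the escape response settles by itself, consider a minimal simulation sketch (in Python, with made-up parameter values rather than figures from the actual study). The agent below is purely first-order: it never represents its own mental states, yet it reproduces the dolphins’ pattern of escaping most often on tones near the discrimination boundary.

```python
import random

# A toy first-order agent for the tone-discrimination task. All numbers
# here are hypothetical illustrations, not values from Browne (2004).
THRESHOLD = 2100.0     # boundary between 'low' and 'high' tones (Hz)
NOISE = 150.0          # std. dev. of perceptual noise
ESCAPE_MARGIN = 100.0  # percepts this close to the boundary trigger escape

def respond(tone_hz: float) -> str:
    """Map a noisy percept of the stimulus straight to a response.
    Nothing here is 'about' the agent's own uncertainty."""
    percept = random.gauss(tone_hz, NOISE)
    if abs(percept - THRESHOLD) < ESCAPE_MARGIN:
        return "escape"
    return "high" if percept > THRESHOLD else "low"

# Hard (near-boundary) tones elicit escape far more often than easy ones,
# mimicking the dolphins' behaviour without any second-order judgment.
for tone in (1200.0, 2090.0, 3000.0):
    trials = [respond(tone) for _ in range(10_000)]
    print(tone, trials.count("escape") / len(trials))
```

The point is not that dolphins actually work this way, only that behaviour of this shape cannot, on its own, discriminate the metacognitive hypothesis from the first-order one.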

But perhaps we are looking in the wrong place. Perhaps second-order thought is more open to experimental verification when directed at the thoughts of others, rather than oneself. Whilst first-personal metacognitive processes might be helpful for regulative purposes, they are not easily distinguished from sub-personal functional equivalents, and the extra information provided by meta-level knowledge of one’s own fallibility seems only moderately useful from an evolutionary perspective. By contrast, the benefit of attributing mental states to other animals is clearly immense, due to the predictive power it provides. Such capabilities are also more open to empirical investigation, as some third-personal mentalistic attributions are comparatively difficult to ‘deflate’.

Tactical deception is an appealing candidate behaviour for second-order intentional interpretation. It involves cases in which an animal achieves its goal by causing another animal to acquire a false belief. Byrne describes an example wherein a young baboon named Paul found an adult, Mel, who had just dug up a corm, and subsequently fooled his mother into chasing his rival away:

“[Paul] looked around, seeing no other baboon, and screamed loudly. His mother, who was higher ranking than Mel, ran into view grunting aggressively and immediately pursued Mel. When they had both left the immediate area, Paul ate the corm.”[7]

Despite their appeal, such anecdotes provide unreliable evidence for intentional deception. Paul’s behaviour can be explained by his previously learning the correlation between screaming in certain circumstances and thereby obtaining the food reward. Or, more generously, we might even grant that he recognized that screaming would cause his mother to appear and chase off Mel. But neither of these interpretations require that Paul understands his mother has a mind, so we cannot be sure that his intention was to evoke her false belief.

More generally, we are faced with the problem of how to empirically distinguish first-order from second-order intentionality. Given that any case of tactical deception in animals will be instrumental to obtaining some behavioural output from the target of deception, we might plausibly hold that the deceiver merely intends to provoke this behaviour, rather than the false belief that underlies the behaviour.[8] The question then arises of how the animal obtained the means-ends belief that his deceptive act would cause the desired behaviour. There seem to be two plausible candidates: either the animal has learnt from past experience, or else it has some rudimentary understanding of the causal processes underlying behaviour – i.e. a theory of mind – from which it can make successful predictions. Byrne claims that tactical deception has been observed in great ape populations that are under continuous observation, which allows us to rule out previous trial-and-error learning, and thus provides strong evidence for mentalism in these species.[9] Granted, it is possible for goal-achieving novel behaviours to arise through lucky coincidence, but as Dennett points out, once a broad enough range of examples start to pile up, “the claim that it is all just lucky coincidence… becomes the more extravagant hypothesis.”[10]
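Dennett’s remark can be given a rough quantitative gloss. The sketch below uses entirely hypothetical numbers and an idealized independence assumption, but it shows the structure of the argument: if each recorded case of novel, goal-achieving deception has only a small chance of being a lucky coincidence, the joint coincidence hypothesis becomes astronomically improbable as independent cases accumulate.

```python
# Hypothetical illustration of Dennett's point: none of these numbers
# come from the literature.
p_lucky = 0.05  # assumed chance that any single case is mere coincidence

for n_cases in (1, 5, 10, 20):
    # Assuming independence, the chance that ALL cases are coincidence:
    print(n_cases, p_lucky ** n_cases)
# With 20 independent cases the joint probability falls below 1e-26 --
# at that point, 'lucky coincidence' is the extravagant hypothesis.
```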

So far we’ve focused on general evidence, but it is also worth considering the specific abilities that a theory of mind might involve. The most fundamental component is perhaps an understanding of goal-directed behaviour, whereby one recognizes that other creatures are agents whose actions aim at achieving particular goals. Premack’s experiments suggest that chimpanzees are capable of inferring the goal of an actor’s behaviour – correctly selecting the photograph of a ‘solution’ to a videotaped actor’s ‘problems’ – when three-year-old humans cannot; though these results are controversial.[11] A more advanced step is to recognize that others can have different beliefs from oneself. Humans typically develop this ability at around the age of four, but it isn’t clear whether any other animals can do the same. For example, Cheney and Seyfarth have found that monkey alarm calls are insensitive to the knowledge or ignorance of their audience.[12] However, some chimps can learn to discriminate between trainers that were or were not present to perceive a critical event – a capacity related to recognition of others’ ignorance.[13] We may further ask whether animals are capable of tracking what others perceive. Apes can use the target’s bodily orientation to judge whether to send a visual signal, and some show evidence of understanding the specific importance of eyes in visual perception.[14] While some such results might merely show that apes can use direction of gaze as a behavioural cue, language-trained chimpanzees responded correctly to the question “what’s that?” when asked for the first time without pointing, with the trainer instead merely looking at the target. Whiten suggests that this “reinforces the conclusion that chimpanzees do in fact see visual attention as ‘about’ something.”[15] This would seem to constitute a rudimentary form of second-order intentionality.
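The talk of ‘orders’ of intentionality running through these examples can be made precise with a toy formalisation (my own illustrative construction, not drawn from the cited authors): an attitude is first-order if its content is a state of the world, and second-order if its content is itself an attitude.

```python
from dataclasses import dataclass
from typing import Union

@dataclass
class Attitude:
    agent: str    # who holds the attitude
    kind: str     # e.g. 'believes', 'wants', 'sees'
    content: Union[str, "Attitude"]  # a state of affairs, or another attitude

def order(a: Attitude) -> int:
    """Depth of nesting: 1 = first-order, 2 = second-order, etc."""
    return 1 if isinstance(a.content, str) else 1 + order(a.content)

# First-order: Paul wants the corm.
first = Attitude("Paul", "wants", "Paul gets the corm")

# Second-order: Paul wants his mother to BELIEVE that Mel attacked him --
# the reading of the anecdote that would credit him with a theory of mind.
second = Attitude("Paul", "wants",
                  Attitude("mother", "believes", "Mel attacked Paul"))

print(order(first), order(second))  # prints: 1 2
```

On the deflationary readings above, Paul’s attitude bottoms out at order one; the experimental challenge is to find behaviour that only an order-two attribution can explain.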

In sum, it isn’t entirely clear whether any non-human animals have an understanding of minds. There’s certainly scant evidence that they have the rich conceptual understanding that humans do. But some of the more advanced animals might at least exhibit important precursors to a theory of mind. Dolphin behaviour is sensitive to their own uncertainty, even if we cannot be sure that they are aware of this themselves. There is some evidence that apes have some of the core capabilities that amount to a theory of mind, including an understanding of others’ goal-directed behaviour, visual perception, and possibly ignorance. Anecdotes relating tactical deception are especially enticing, though open to deflationary interpretation if not found under controlled conditions. Further research is required to reinforce and build upon these initial successes – or expose their limitations.


Bibliography

Bennett, J. (1991) ‘How to Read Minds in Behaviour’ in A. Whiten (ed.), Natural Theories of Mind. Oxford: B. Blackwell.

Browne, D. (2004) ‘Do Dolphins Know Their Own Minds?’ Biology and Philosophy 19: 633-653.

Byrne, R. (1995) The Thinking Ape. Oxford: Oxford University Press.

Cheney, D. and Seyfarth, R. (1990) How Monkeys See the World. Chicago: University of Chicago Press.

Dennett, D. (1987) The Intentional Stance. Cambridge, Mass.: MIT Press.

Premack, D. (1988) ‘‘Does the chimpanzee have a theory of mind?’ revisited’ in R. Byrne and A. Whiten (eds.) Machiavellian Intelligence. Oxford: Oxford University Press.

Tomasello, M. and Call, J. (1997) Primate Cognition. New York: Oxford University Press.

Whiten, A. (1997) ‘The Machiavellian Mindreader’ in A. Whiten and R. Byrne (eds.) Machiavellian Intelligence II. Cambridge: Cambridge University Press.



[1] Byrne (1995), p.109.

[2] See Dennett (1987).

[3] Derek Browne, lectures; Tomasello & Call (1997), p.337.

[4] Browne (2004), p.641.

[5] Ibid., p.647.

[6] Ibid., p.651.

[7] Byrne (1995), p.124.

[8] Bennett (1991), p.104.

[9] Byrne (1995), p.133.

[10] Dennett (1987), p.251.

[11] Premack (1988), p.176. But cf. Tomasello & Call (1997), pp.319-322.

[12] Cheney and Seyfarth (1990), pp.219-222.

[13] Whiten (1997), p.161, describes experiments by Povinelli to this effect.

[14] Tomasello & Call (1997), p.340.

[15] Whiten (1997), p.165.

18 comments:

  1. > In sum, it isn’t entirely clear whether any non-human animals have an understanding of minds.

    We seem very reluctant to come to the conclusion that they can understand minds. We go over some fairly good evidence that they can (thus rendering any evidence that they don't in certain situations irrelevant) and then conclude that it isn't proven (which is valid from a scientific point of view, since you can never "prove" this, but then you just sound like a creationist).

    One of the problems is that there is always a stimulus-response interpretation of an action, because in reality there are only stimulus-responses. Theory of mind is a special case that is complex enough for us to give it another name (there is nothing "magical" about it).

    This is similar to the debate regarding pain - sure, animals LOOK like they feel pain and act like they feel it, but do they really? (With this sort of scepticism it is a wonder science ever progressed.)

    I guess the first question is what exactly theory of mind is - is it the possession of some mirror neurons? I have some issues with it philosophically too, since I either don't know that we humans have theory of mind OR it seems to be a bit trivial (depending on how I define it). There is a tendency to conceptualize it as "stuff that makes us human" or "those contemplative periods".

    I also wonder how genetically complicated theory of mind IS. If it were "mirror neurons", maybe we could quantify that.

    There is a danger of a poorly defined concept retreating as you make more experiments. It seems to be a bit of a problem to be testing for something when we don't even know exactly what it is.

    I also note that humans have more cues to feelings than many animals do, so they have advantages in recognising/learning what each other want, and advantages in learning what humans want, because they are human and have good eyesight.

    Anyway, for example, I will do a banana experiment on myself. I know my flatmate wants the banana - if they were here they would get it, but I'm hungry. I may eat the banana, or I might not, being concerned about what might happen as a consequence. In my brain:
    banana -> eat [positive] -> (some time later) argument [negative]
    Possibly I eat it (proving nothing).
    Possibly I don't (proving I am either as smart as Pavlov's dog or possibly something more complex).
    Now I say: I didn't eat it because I knew they would be upset (actually that's probably a lie - the real reason I wouldn't eat it is to avoid the argument, or habit, because this situation was similar enough to one where there might be an argument).
    I'm still having trouble making myself smarter than a dog, and I haven't even started trying to be as smart as a chimp or a dolphin.

  2. Hey, what's a mind?
    If we take another tack - are animals conscious? Of course we'd say yes. OK then - are they self-conscious? The evidence seems sometimes to indicate it - but how would we know?
    Probably the best way would be to forget about observation and ask questions. If we had a Babel fish - or software that analysed and translated dolphin into English, say - wouldn't we solve this fairly quickly?
    Would self-consciousness be pretty close to what you call a "theory of mind"? No in some respects; yes in others. It's not - as pointed out above - easy to get an answer to your question unless you're very clear about what the question is.

  3. Following on from Rob's comment, I think in the post you have confused "Having a theory of mind" with "Having the theory of mind which most humans have". It could well be that animals have a theory of mind which is of a qualitatively different nature to the BDI (Beliefs, Desires, Intentions) model of mind which most of us have. Not being an animal myself, it is hard for me to conceive what such a theory could be. But, as has been said on this blog before, a map is not the territory, and you seem to be confusing the two.

    You've also overlooked the fact that the mirror test is biased in favour of a sense, sight, which is strong in humans but relatively weak in many other animals (eg, dogs). What test would a dog conduct on humans, were dogs bothered to find out if we have a theory of mind? "Can humans distinguish their own scent from another human's scent?" Most of us would fail this test.

  4. Kofi, you didn't read my post very carefully -- I actually did criticize the mirror test on the basis you mention, though this becomes relatively unimportant in light of my other, more fundamental, criticisms.

    As for your first point, I've written about whether animals have an understanding of mental states. Mental states are things like beliefs and desires. So the question is just whether animals have an understanding of things like beliefs and desires. If you come up with a model that doesn't involve things like beliefs and desires, then whatever it is, it isn't a theory of mind. Maybe animals have a theory of something other than mind. But that isn't the question I was addressing.

    Rob, I think questions of (self-)consciousness are even less clear than the questions I was addressing here. Your suggestion that we "forget about observation" seems especially questionable. How else are we going to find answers here if not through empirical science? We certainly can't work out a priori what capacities other animals have. It's an empirical question. So if our answer isn't based on evidence, then it has no basis at all -- what you're effectively recommending is that we all just start guessing instead.

    As for the technological quick-fix, there could be no "translation device" until its inventors had already solved these and related problems. It's rather like suggesting to a moral philosopher that his job would be much easier if we had a perfectly ethical robot who could tell us what the right thing to do in all situations is. I mean, yes, it obviously would solve the problems "fairly quickly". But getting this magical machine is an even bigger problem, so it's not a very helpful suggestion.

  5. Apologies, Richard, you did too.

    But you've missed my other point in your reply. A theory of mind could easily have other concepts in it besides beliefs and desires -- same territory, different map. If you insist a theory of mind has to be BDI, then you're taking a very anthropocentric view of mind, and one contrary to current work in AI. The BDI model of mind is also one not found in all human cultures, so are you saying that only modern westerners have a theory of mind?

    BTW -- do you realize your post displays many of your HTML tags instead of executing them, making it hard to read? (At least when reading in IE under Win-XP.)

  6. As to being anthropocentric: there is a possibility that, if you have enough control over the definition of the word, you can come to any conclusion you want the debate to come to (eg newspeak). For example, if you define beliefs and desires to be exactly as held by chimps, you will of course come to the conclusion that humans don't have them and are thus inferior.
    Having said that, are there any experiments confirming or suggesting this ("The BDI model of mind is also one not found in all human cultures")? Because that is quite a significant point for deciding where Richard wants to lead the argument and how he wants to frame his definitions.

    As to a translation device: I think if you created sufficient feedback to the right parts of the brain, you would go a long way towards creating a translation device, and of course towards allowing the animals to learn a language. I suggest the animals would become much more clever at the same time as becoming more able to communicate.

  7. Genius -- People in many non-Western cultures attribute the actions of others to the activities of non-material entities, such as spirits or witches, rather than to the beliefs, desires or intentions of those others. Other cultures (eg, some in the Pacific) account for the actions of others in terms of environmental causes, not internal mental states.

    For an example, see this account of religious beliefs of the maShona people of Zimbabwe:

    Michael F. Bourdillon [1976]: "The Shona Peoples: An Ethnography of the Contemporary Shona, with Special Reference to Their Religion." Gweru, Zimbabwe: Mambo Press.


    For a review of the anthropological evidence, see:

    Angeline Lillard [1998]: "Ethnopsychologies: Cultural variations in theories of mind." Psychological Bulletin, 123: 3-32.

  8. If they believe that other people's actions are caused by spooks or the environment, rather than the agent's own mental states, then yes, it does seem quite clear that they lack any understanding of the mind. On the other hand, I'm extremely sceptical of any claims that only westerners have the capacity to attribute mental states to each other.

    Now, it's not that I'm "missing" your point -- rather, I'm refusing to grant it. I use the word "mind" to refer to the seat of mentality. If you're not talking about mental states, then you're not talking about minds. You're mapping out some different territory altogether.

    (I haven't noticed those technical problems myself. Could you email me with the details? Cheers.)

  9. Richard -- I'm not claiming "that only westerners have the capacity to attribute mental states to each other". I'm claiming that the BDI model of a theory of mind is just one of many possible models of a theory of mind, ie, one map for this territory among many possible maps. You may refuse to grant this, if you wish. But your ideas would not last 5 minutes if presented to an anthropology department.

  10. Just as well I'm doing philosophy, not anthropology, then ;)

    Seriously though, my essay was about the attribution of mental states. You haven't explained in what respect I have failed to tackle the subject successfully. All you've done is speculate about the vague possibility of some other models of mind, and provided examples of non-mentalistic explanations of behaviour (which quite obviously fail to constitute a theory of mind, as opposed to a "theory of spooky spirits" or whatever). Oh, yes, and you've made repeated appeals to your map/territory analogy without actually establishing that there's anything in the present context which warrants it. It's not as if I've conflated a representation with the thing represented, which your analogy suggests.

    Basically, if you want to continue the discussion, I'd like to hear something a bit more specific. Maybe you could pick a particular example from my essay, and show how it rests on illicit assumptions. Because at the moment, your criticisms just seem entirely baseless.

  11. Richard -- I'm not attacking you, just your argument! I think your argument is flawed, since it assumes there is only one formulation of a theory of mind (one based on a BDI model), and this is simply not so. This statement of mine *is* a specific criticism of your argument, and is not baseless. For justification of my criticism, including descriptions of other formulations of theories of mind, I suggest you look at the Lillard paper I cited.

  12. Don't worry, I didn't take it personally. It's just a bit frustrating when you don't offer a clear argument. (Merely referencing an anthropology journal which I don't have time to read certainly doesn't count.) For example, you backed up your claim that there are other types of theory of mind by offering examples which were obviously not mentalistic explanations at all! I just really don't know what you're thinking here.

    The question I tackled was whether animals are capable of understanding mentality, i.e. having thoughts about thoughts. That's the question, and I addressed it. I discussed issues such as (1) evidence for metacognitive recognition of uncertainty; (2) tactical deception; (3) identifying goal-directed behaviour; etc.

    Nothing you've said seems remotely relevant to showing my approach to these issues was somehow inadequate. You simply haven't shown any such thing, all you've done is assert it, without providing any decent arguments to back your claims up. In particular, you haven't spelled out what alternative evidence for a theory of mind there is that I have overlooked, or just how my discussion rests upon false assumptions. (It might help if you could quote a passage from my discussion that you think is false or misleading, and explain why.) You need to do something like this if your criticisms are to be taken seriously. I hope my tone doesn't offend, but I simply find it extremely frustrating when critics refuse to engage with the specifics of what I've written. If you could show that I've said something false, that'd be great. It's only seemingly ungrounded criticisms that annoy me ;)

  13. > The question I tackled was whether animals are capable of understanding mentality, i.e. having thoughts about thoughts.

    Is it possible that being able to think about your own thoughts might be one of the most basic things - possibly more basic than being able to think about anything else?

    For example, a very simple thing for an animal is to perceive a pain and then take action on that. If that pain is some sort of "mental anguish", let's say "boredom", then an animal might take action to prevent it (in fact I would suggest boredom is potentially even more fundamental than pain - but I can explain that later).

    Then you ask the question: did the animal think about itself being bored (and know it is bored, and have a concept for boredom)? Well, probably just as much as it thinks about anything else (obviously if you have a smaller/less complex brain you probably think less!) - after all, the section of the brain is just processing some sort of input, and if anything it is a more direct route when it is a mental state (compared to the less direct perception of external events).

    Then we could debate the way "concepts" (signals) move through a brain, but it seems that "not thinking about mental states" is the more advanced and restricted form.

    We can then exclude animals from our "theory of mind set" by combining concepts with processing power and other such things, of course, but then our line is quite arbitrary.

  14. Hi Richard-
    What I was getting at, in a rather flippant way (lost my first, more detailed response!), was the notion that without a common or translatable language, "thoughts about thoughts" can be inferred from behaviour, but "deflationary alternative explanations" are always possible. You describe some very subtle experiments, but they are nonetheless all open to such scepticism.
    Language gives us a look inside the black box. In fact I'd hazard that it's our communication with other people, rather than our empirical observations of them, that leads us to believe in other minds - and, rather more radically, that it may only be the acquisition of language that allows us to reflect on our own thoughts. Some of the experiments with teaching other animals (eg Lucy, a gorilla(?)) some language would seem to indicate they can reflect upon their mental state. (But it raises another question - has the language given them this ability, rather than the ability to express this ability?)

  15. Rob, thanks for clarifying, I see what you mean now. I'm not sure the language/behaviour distinction is as clear-cut as you make out, however, since speech is itself a form of behaviour, and it's up to the audience to observe and interpret it appropriately. (I don't know much about this stuff though -- better to ask a psycholinguist, I guess.) For example, there is some evidence that vervet monkey alarm calls do actually have semantic content. (A couple of calls refer to eagles, for example, while other calls refer to leopards, and other vervets can learn to distrust both eagle calls when a particular vervet repeatedly "cries wolf" with just one of them.)

    Anyway, the point is, we can only learn about what their calls mean by observing how they use and respond to them. So before we can find out whether they have any mentalistic terms in their vocabulary, we first have to determine what sort of behaviour would count as evidence for this.

  16. Oh, and while it's true that in principle deflationary interpretations are always possible, some will be more plausible than others (the Dennett quote is relevant here). I think it would be enough to establish that some animals "most likely" have a theory of mind; we can't expect to prove it with absolute certainty.

  17. I was thinking of this *notorious* incident (via the Wikipedia article on "lying") -
    "The capacity of hominids to lie is noted early and nearly universally in human development and language studies with Great Apes. One famous lie by the latter was when Koko the Gorilla, confronted by her handlers after a tantrum in which she had torn a steel sink out of its moorings, signed in American Sign Language, "cat did it," pointing at her tiny kitten. It is unclear if this was a joke or a genuine attempt at blaming her tiny pet."
    An intention to deceive seems to indicate at least some belief in the mental states of others (would an attempt at humour? It's a more attractive option, but she might just have been amusing herself). A sceptic can still point to other explanations: variations of "it's simply a learnt behaviour"; or it's a thought that's only now possible for Koko, since she's learnt language. If we could question Koko further, we might solve that...
    (FWIW I'm not such a sceptic. But I do think language is the main thing that defines us - and that's shaped our evolution - as a species of ape. Homo linguis is both more apt and less vainglorious than Homo sapiens. And I tend to a pretty radical reductionist position vis-a-vis mental states/brain states: quite a few philosophical problems seem to stem from extrapolations of some variety of mind/body dualism.)

  18. Ah yes, I did discuss deception in the main essay too. I'm not convinced that Koko's case is so different in principle, just because it involved symbolic hand signals rather than a more direct form of 'communication' like screaming. In particular, we are in no better position to infer what Koko's intentions were in making the false statement. Was she aiming to give the handlers a false belief? Or did she merely intend to shape their behaviour? (Perhaps Koko believes: "If I sign 'cat did it', then I won't get in trouble", without realising why this is, i.e. that the handlers' behaviour is caused by their beliefs, etc.)

    It's certainly suggestive, I'll grant you that. But the same is true of Paul's false scream to get his mother to chase Mel away, or the vervet who gives a false leopard call to get his enemies to stop chasing him (as they immediately flee into the trees instead). The important phenomenon here is deceptive behaviour generally; it's by no means restricted to linguistic animals.

