Sunday, May 31, 2009

Belief as a Hybrid Notion

I'm always puzzled when I read philosophers who treat belief and knowledge as philosophically important states. I find it much more natural to think of [rational] credences (degrees of belief) as fundamental, and to define 'belief' derivatively as, say, "sufficiently high credence for the purposes at hand" (and, similarly, 'Knowledge' as Sufficiently Safe Belief). So understood, belief and knowledge are philosophical outputs, not inputs. Indeed, considerations of stakes-sensitivity suggest that they aren't even purely epistemic notions, but rather a hybrid of the epistemic (rational credence) and the practical (how much credence is required to justify certain actions).

To bring this out, consider the following kind of case. Sally from sales knocks on the door of an abandoned-looking house. Nobody answers, so she concludes that nobody is home and goes on her way. A few minutes later, Dan the demolition worker does likewise, and concludes likewise, only he returns to his wrecking rig to commence demolition. Fill in the background details so that it seems that Dan (unlike Sally) isn't justified in acting on the belief that the house is empty.

Now, if you think that all-out belief is philosophically significant, it seems you're faced with two possible interpretations of how stakes-sensitivity is affecting justified action in this case:
(1) It could be that Dan is, like Sally, perfectly justified in believing that the house is empty, but that the higher stakes of the situation render his justified belief 'unactionable' until supported on firmer grounds; OR
(2) It could be that the higher stakes render Dan's belief unjustified.

But surely it's clear on reflection that this is a distinction without a difference. The only real question here is what degree of rational credence is required to justify action in the face of this or that risk. We capture everything of philosophical significance by noting that Dan and Sally both have rational credence of around (say) 80% that the house is empty, and that this degree of belief is sufficient for purposes of taking your sales pitch to the next door, but not sufficient for demolishing the house and killing anyone who may still be inside. Moreover, it is in virtue of these practical normative facts that we may attribute justified all-out belief / knowledge (that the house is empty) to Sally but not to Dan.
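
To make the arithmetic concrete, here is a minimal sketch of the expected-utility comparison in Python. The payoff and cost figures are illustrative assumptions only; nothing in the case fixes them precisely.

    # A toy expected-utility comparison for the Dan/Sally case.
    # All payoff numbers are stipulated for illustration.

    def expected_utility(credence_empty, u_if_empty, u_if_occupied):
        """EU of acting on the proposition 'the house is empty'."""
        return credence_empty * u_if_empty + (1 - credence_empty) * u_if_occupied

    CREDENCE = 0.8      # both agents' rational credence that the house is empty
    CHECK_COST = -0.5   # assumed cost of gathering further evidence instead

    # Sally acts on the belief by taking her pitch to the next door.
    sally = expected_utility(CREDENCE, u_if_empty=0.0, u_if_occupied=-1.0)

    # Dan acts on the belief by demolishing the house.
    dan = expected_utility(CREDENCE, u_if_empty=10.0, u_if_occupied=-10_000.0)

    print(sally > CHECK_COST)   # True:  -0.2 > -0.5, so 80% credence suffices
    print(dan > CHECK_COST)     # False: -1992.0 < -0.5, so 80% does not

One and the same credence licenses Sally's action and not Dan's; no all-out notion of belief enters into the calculation.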

21 comments:

  1. As you've laid it out here, it looks like degree of credence is actually doing no serious work: what's doing the work is really a sort of means-end reasoning, where one determines that one has done the type of thing required for one's end. One can make the same sort of move in the all-out belief case; in fact, doing so, and regarding it as distinct from justification, is what makes the distinction between (1) and (2) possible: justification and actionability are given distinct accounts, and so (1) and (2) are simply different matters.

    I take it, however, that on your view justification is derivative: it's a form of actionability. But how does one link actionability with credences? One can have a view of justification where justification just is having such-and-such credence; but if we make justification a matter of actionability, actionability can't have such an account, so there would need to be some substantive way of determining that in these circumstances such-and-such credence allows for such-and-such action, &c. What do you have in mind for sorting this sort of thing out?

  2. Hi Brandon, I'm not sure I follow you. The kind of "means-end reasoning" I have in mind is something like an expected utility calculation, which gives subjective probabilities (credences) a very central role indeed. The reason why Dan shouldn't demolish the house is precisely that he doesn't have [justification for] a sufficiently high degree of belief that the house is empty. 80% isn't sure enough when lives are on the line.

    It's far less clear what the all-out believer can say here. The problem, as I've identified it, is that Dan's epistemic state is lacking (relative to the purposes at hand). He's justified in being 80% sure of the relevant proposition (that the house is empty), but that's not enough to take the risk -- you need to be 99.99% sure. How do you get this result without appeal to such graded notions? If the relevant doxastic attitude is all-out belief, and Dan is justified in having this attitude towards the relevant proposition, then what more could you ask for? Once you have a justified belief, what need could you have for further evidence? This only makes sense if belief is a graded notion, so that further evidence for a proposition would actually license a greater degree of belief in it. Otherwise, it would seem to make no difference. With or without the further evidence, our doxastic state would be unchanged: all-out belief, all the time.

    In response to your last paragraph: no, I think that [degree of] justification is primitive (i.e. having evidence that supports a certain credence or subjective probability - 80% or whatever - in a proposition). I don't think there's any independent question of 'actionability'. There's just the question of what degree of justified credence suffices to justify action (as determined by expected utility considerations). There's no possibility of having the required doxastic attitude [e.g. 99.99% credence] but this somehow failing to be 'actionable' despite being justified.

  3. OK, that's helpful, particularly about the way justification relates to actionability, although I'm still not sure I've got it all.

    Proportion or adequacy of means to end is itself a graded notion; so if someone thinks that actionability requires it, they already have a graded notion, whether they take belief to be that graded notion or not. For instance, they could hold that Dan's belief is only one thing needed to reach the right level of adequacy to the end, safely demolishing the house, whereas Sally needs nothing more to reach the right level of adequacy to her end, which is little more than seeing if she can get a potential customer to answer the door. What is more, even you seem to have to do something like this anyway, since the expected utility varies from case to case. And given that, one can ask whether degree of rational credence is actually doing any work that degree of adequacy of means can't do even in the context of all-out belief.

  4. This is Keith DeRose's 'contextualism', right? You're just picking it apart to see what makes it tick?

  5. A subject-sensitive invariantist could tell a story without appeal to degrees of belief. If justification were a graded notion, then one could simply say that, due to the shift in stakes between Dan's and Sally's practical interests, the same degree of justification is sufficient for knowledge in Sally's case but not in Dan's.

    Additionally, safety was originally developed by Ernie Sosa, not DeRose. DeRose has attacked this position in several places, and while it has some intuitive appeal, it has several severe problems, not the least of which is that a counterfactual conditional with a true antecedent is trivially true in Stalnaker-Lewis semantics (and most other standard frameworks).

    While DeRose does forward a substantive epistemic theory which at times mixes safety- and sensitivity-basing, he calls this his 'toy theory', as his contextualism is a theory about the use of the verb 'knows' rather than an analysis of the necessary and sufficient conditions on knowledge.

  6. This seems very similar to a Bayesian conception of degrees of belief.

  7. You say "the only real question here is what degree of rational credence is required to justify action in the face of this or that risk." I say there are two interesting questions here: one epistemological, the other practical. You have focused upon the practical question; fine. But that is not an ARGUMENT against either all-out belief or all-out-belief justification.

    The notion of epistemic justification earns its keep elsewhere, like in explaining why it is not appropriate for BonJour's Norman the reliable clairvoyant to hold on to the beliefs he forms by means of his clairvoyance.

    So what if epistemic justification has little to tell us about when a belief is actionable?

  8. Jack - right, it's also a real question what degree of credence is rationally justified by our evidence.

    What isn't a real question (I meant to suggest) is whether (1) Dan's all-out belief is justified but 'unactionable' in the sense of not being eligible to serve as a premise in his practical reasoning ("the house is empty, therefore I can demolish it"), or whether instead (2) Dan's belief that the house is empty somehow became all-out unjustified due to the higher stakes. There is no meaningful difference here, as becomes clear when we describe the case using graded rather than all-out belief.

    Fundamentally, we can say what degree of belief is epistemically justified (presumably the same for Dan and Sally), and what degree of belief is needed to justify various actions, and that's it. There's no further question of whether someone might have the relevant doxastic state, have it justified, but somehow not have it 'actionable' (or eligible to serve in practical reasoning).

    So I'm not arguing against epistemic justification. I'm arguing against taking it to apply in the first instance to all-out beliefs, rather than to degrees of belief.

    P.S. Note that it would be bizarre if epistemic justification weren't closely linked to actionability. The clairvoyant shouldn't demolish a building they merely intuit to be empty, even if their intuitions turn out to be incredibly reliable. If you aren't justified in your credences, then you aren't justified in acting on them.

  9. Thom and Ezra - I'm sympathetic to something like DeRose's "toy theory" (my linked post on knowledge draws heavily on his ideas). But I should clarify that my purpose in this post was not to put forward any particular analysis of what knowledge is, but rather to suggest a deflationary attitude towards the whole enterprise. Knowledge doesn't fundamentally matter; some more basic epistemic property does (maybe safety, maybe some kind of internalist justification; maybe both, for different purposes).

  10. Brandon - it occurs to me that 'actionable' may have been an unfortunate choice of words. You seem to be thinking of it in the everyday sense of whether the state in question provides adequate grounds for action. I should have clarified that I am using it as a technical term, to denote whether the state in question is eligible to serve in your practical reasoning.

    If a belief is 'actionable' (in my sense), then you can take the content of that belief (e.g. "the house is empty") as a premise in your practical reasoning.

    If our credences are 'actionable', then we can take the corresponding probabilities (e.g. "the house is 80% likely to be empty") as premises in our practical reasoning.

    So Graded-Dan's credal state is perfectly actionable, in the sense that he can reason from the premise "the house is 80% likely to be empty". It's just that he can't derive the required conclusion ("I may demolish the house") from this weak premise.

    AllOut-Dan, by contrast, does not seem to have an actionable all-out belief, since he is not allowed to reason from the premise that the house is empty. But it's weird -- theoretically messy -- to separate actionability (in this technical sense) from justification. If his belief is really justified, why can't he take it to be true [i.e. believe it] in his practical reasoning?

    The story is much neater for graded belief. We can say that whenever someone has a justified credence of x in P, they may take P to have probability x in their practical reasoning.

    This strikes me as a powerful reason to prefer the graded conception of belief.

  11. Thanks -- that's very helpful.

  12. Richard,

    While I do not disagree with a conception of graded belief, I think that there is a place for both all-out beliefs and rational credences in one's epistemology. This is where I was going with my comment regarding subject-sensitive invariantism, which I think gives a perfectly acceptable alternate explanation here. To be specific, an SSI theorist would require one to know some proposition in order to allow its use as a premise in one's practical reasoning. If we take one's rational credence toward a proposition as one's level of subjective justification toward one's all-out belief that p, then we have an adequate explanation for AllOut-Dan.

    For example, in Dan's demolition case, he cannot take the content of his belief that 'the house is empty' as a premise in his practical reasoning, because it does not meet the level of justification necessary for knowledge. He could, however, take his belief that his credence toward the proposition 'the house is empty' is 80% as a premise, as that is a second-order belief, and he is in a much better position to evaluate his second-order beliefs (by reflection).

    I like this explanation better than the full-out deflationary attitude due to its flexibility when dealing with non-ideally-rational agents. Most people could not articulate specific credence levels toward all the propositions they believe. It is true that in certain situations we do make explicit probabilistic calculations, but I tend to think these are more often heuristic than not.

    Secondly, I made a slight mistake when speaking of safety, which I would like to clarify. Safety is the contrapositive of sensitivity. Sensitivity is the counterfactual conditional ~p []-> ~Bp (if p were not the case, one would not believe p), while safety is Bp []-> p (if one were to believe p, p would be the case). Sensitivity was forwarded by Nozick and Dretske, but it has fallen out of fashion. Safety is more popular now, but not in the naive form given here, because a counterfactual conditional such as this is trivially true if its antecedent and consequent are true. Take an example: suppose, unbeknownst to me, I am in fake barn country. I point to a barn, say 'that is a barn', believe it to be a barn, and it just so happens that it is the only real barn in a 100-mile radius. In this case I (1) believe that there is a barn in front of me, and (2) there is a barn in front of me. Here the safety counterfactual is trivially satisfied, regardless of whether the belief in question is 'safe' in the intuitive sense. (This leads people to push the safety requirement back to safety-basing: the process on which your belief is based must be safe, rather than the belief itself.)

  13. Hi Ezra, my account doesn't (that I can tell) require agents to "articulate specific" credences. Non-ideal agents can reason from imprecise credences ("p is extremely likely") at least as easily as they can reason from knowledge.

    My post suggested that the [stake-sensitive] justificatory component of knowledge is completely derivative of the graded story I'm telling. (That is: you have the requisite justification iff you're epistemically justified in having sufficiently high credence to practically justify the relevant actions.) Knowledge then adds an external truth condition on top of this. Since it requires everything I do and more, I don't see how an agent could ever be in a better position to reason from knowledge than to simply reason from the justified high-credence component.

  14. It would seem to me that one is in a better position if one's beliefs are true. To be specific, if 'highly likely' beliefs are actionable, then it would seem that if you were holding a ticket in a fair 1,000,000-ticket lottery, where you had a 99.9999% chance that the ticket was a losing ticket, you should sell the ticket for a very marginal sum prior to the drawing (it is going to lose, after all), even if the payout is astronomical. I don't see many people selling their lottery tickets.

  15. Ezra, I never said that any old 'highly likely' beliefs are actionable, so I don't see the relevance of your previous comment.

    If the payout is so "astronomical" as to make the expected value of the ticket greater than "very marginal", even given 99.9999% credence that it will lose, then that isn't "sufficiently high credence to practically justify the relevant actions".

  16. (I also don't see what the slim-but-worth-it chance of astronomical winnings has to do with true beliefs putting you in a [rationally?] "better position". People can reason reasonably from justified false beliefs -- or even Gettiered JTBs!)

  17. I am starting to become confused about what you mean by credences and graded beliefs. I was assuming you meant something along the lines of a Bayesian probability calculus, where an agent is rational when their set of beliefs satisfies the Kolmogorov axioms (i.e. no Dutch book can be made against them). Non-ideal reasoners acting on unspecific levels of credence would not, in this technical sense, be acting rationally.

    People can 'reason reasonably' from whatever premises to whatever conclusion they want, as long as their argument is valid. What a truth condition secures is that their argument is sound.

    The point of the lottery example is that there are some situations where having an incredibly high degree of credence--but not knowledge--is not sufficient to justify one's actions. The point here is that if one knew that one was going to lose, one would be justified in selling the ticket. The gap between a credence of 1 and a credence of 0.999999 is what makes the difference. If you accept that the degree of credence here is not sufficient to justify action, then it would seem that you have accepted that in certain situations you do in fact need the external truth condition to justify action.

  18. "If you accept that the degree of credence here is not sufficient to justify action, then it would seem that you have accepted that in certain situations you do in fact need the external truth condition to justify action."

    Not at all. The problem, to repeat for the third time, is that the credence isn't sufficiently high given the stakes -- as a cursory glance at the expected utility formula should make clear.

    (To spell it out: if the astronomical payoff is x, then your credences will justify selling the ticket for 1 just in case you think the chance of winning is less than 1/x. That is, your credence must be greater than (x-1)/x that your ticket will lose. Only then is the expected value of the ticket less than the selling price of 1. Knowledge has nothing to do with it.)
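
    A quick numerical check of that threshold, with the payoff stipulated at a million (just a sketch; the numbers are mine, not anything in the example):

        x = 1_000_000.0     # the astronomical payoff
        sell_price = 1.0

        def ev_hold(p_win):
            """Expected value of keeping the ticket."""
            return x * p_win

        # Selling for 1 beats holding just in case P(win) < 1/x,
        # i.e. just in case P(lose) > (x - 1)/x.
        print(ev_hold(0.5 / x) < sell_price)   # True:  selling is the better deal
        print(ev_hold(2.0 / x) < sell_price)   # False: holding is worth more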

    P.S. I don't want to get distracted by all the confusions implicit in the first half of your comment. But let me quickly say: (1) It should be clear enough what credences are, even if it's disputed what makes them rational. (2) It's just plain false that "people can 'reason reasonably' from whatever premises to whatever conclusion they want, as long as their argument is valid". Many valid arguments are unreasonable. (3) More generally, it just seems fundamentally misguided to assess practical reasoning in terms of the deductive qualities of soundness and validity.

  19. In most state lotteries the expected value of a ticket is less than its price. The nice thing about counterexamples is that you can stipulate these matters. Even if the credence is sufficiently high given the stakes--that is, even if the expected value of holding the ticket is less than the amount offered by the ticket buyout--intuitively, most people will not sell their lottery ticket before the drawing. If, however, the subject knows that their ticket will lose, nothing would stop them from selling the ticket.

    To look at a concrete case: The odds of winning the Texas lottery are 1:25,827,165. The estimated payout is currently $12,000,000. The ticket price is $5. So the expected utility of buying a ticket is E(Buying) = (-$5 x 25,827,164/25,827,165) + (($12,000,000 - $5) x 1/25,827,165) ≈ -$5.00 + $0.46 = -$4.54

    So the expected value of holding a $5 ticket is about $0.46, and the chance of winning is 3.87x10^-8. I seriously doubt that if I stood by a gas station and offered to buy lottery tickets from people for $2 I would have any success, even though my offer would exceed the ticket's expected value ($2/$12,000,000 ≈ 1.67x10^-7 > 3.87x10^-8). The reason people even play with these kinds of odds is precisely that they do not know that they are not going to win.
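
    For anyone who wants to redo the arithmetic, here it is as a quick Python check (same figures as quoted above):

        p_win = 1 / 25_827_165              # odds of winning
        payout = 12_000_000                 # estimated jackpot
        price = 5                           # ticket price

        ev_ticket = p_win * payout          # expected value of holding a ticket
        print(round(ev_ticket, 2))          # 0.46
        print(round(ev_ticket - price, 2))  # -4.54: expected utility of buying

        # A $2 buyout offer exceeds the ~$0.46 expected value, yet nobody sells:
        print(2 / payout > p_win)           # True (1.67e-7 > 3.87e-8)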

    P.S. When you have an argument against operationalizing 'reasonability' in the way I do, or perhaps a competing interpretation, or even some examples of unreasonable valid arguments, let me know.

  20. I'm talking about what makes for good practical reasoning. You appear to be talking about something else.

  21. (Since this discussion doesn't seem particularly fruitful, I'd rather not continue it further here. But feel free to post the link if you decide to write up a further response on your own blog.)

