Saturday, October 04, 2008

Skepticism and Wacky Priors

I find it helpful to think about the challenge of skepticism from the perspective of Bayesian confirmation. We begin with (i) a space of 'possible worlds' or hypothetical scenarios representing all the different ways the world could be, and (ii) your 'priors', or initial distribution of credence over these scenarios -- i.e. which ones you believe to be more or less likely. Then, when you acquire new evidence E, you update your beliefs by ruling out those scenarios that are inconsistent with E and redistributing their credence over the remaining options, so that your new degree of belief in each remaining hypothesis X matches your old conditional probability P(X | E).
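As a rough illustration of that updating rule, here is a minimal sketch in Python; the scenario names and prior numbers are purely illustrative, not anything tied to the argument below:

```python
# Minimal sketch of Bayesian conditionalization over a finite set of scenarios.
# The scenario names and prior values below are made up for illustration.

def conditionalize(prior, is_consistent_with_evidence):
    """Rule out scenarios inconsistent with the evidence and renormalize,
    so each surviving hypothesis X ends up with its old P(X | E)."""
    surviving = {w: p for w, p in prior.items() if is_consistent_with_evidence(w)}
    total = sum(surviving.values())
    if total == 0:
        raise ValueError("Evidence had prior probability zero; update undefined.")
    return {w: p / total for w, p in surviving.items()}

# Toy prior over three rival scenarios.
prior = {"scenario_A": 0.25, "scenario_B": 0.25, "scenario_C": 0.50}

# Suppose the new evidence E is inconsistent with scenario_A.
posterior = conditionalize(prior, lambda w: w != "scenario_A")
print(posterior)  # B and C get roughly 1/3 and 2/3: their old ratio, renormalized
```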

Now, the essence of the skeptic's challenge is that various 'skeptical scenarios' -- that I'm a brain in a vat (BIV), or that the sun will not rise tomorrow -- would seem as well confirmed by our empirical evidence as common-sense hypotheses are. Compare three rival hypotheses:

(H1) The Sun exploded in 1999
(H2) Actual history + Sun doesn't rise tomorrow
(H3) Actual history + Sun rises tomorrow

Actual history lets us rule out H1, but as a matter of form it is neutral between H2 and H3. So if we think H3 is more likely, this must be reflected in our prior probabilities. We must hold, a priori, that some scenarios are objectively more probable or likely to eventuate than others -- e.g. that scenarios which start with our actual history and are followed by another sunrise are a priori more probable (collectively) than the other scenarios which start with our actual history but are followed by something different.
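To put the same point in Bayesian notation (a quick worked step, assuming for illustration that E, our actual history to date, is entailed by both H2 and H3 and ruled out by H1):

$$ \frac{P(H3 \mid E)}{P(H2 \mid E)} \;=\; \frac{P(E \mid H3)\,P(H3)}{P(E \mid H2)\,P(H2)} \;=\; \frac{P(H3)}{P(H2)}, \qquad \text{since } P(E \mid H2) = P(E \mid H3) = 1. $$

Updating on E leaves the H3 : H2 ratio exactly where the prior put it; evidence entailed by both hypotheses cannot shift it.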

This is what the anti-skeptic is committed to. There's not necessarily anything wrong with affirming such claims, but it would be nice to have an explanation why some prior probability rather than another is the rational one to have. (Compare the "counter-inductivist" who assigns the opposite prior probabilities, and thus takes Actual History to count as strong evidence that the Sun won't rise tomorrow! Or the solipsist who thinks the BIV scenario is more likely than the external-world scenario we believe in. We think they're being unreasonable, but on what grounds?)

My hope is that general rational principles -- of coherence, systematic unity, simplicity, and the like -- can provide the basis we need for privileging some priors over others. Otherwise, we may be committed to thinking that it's just a brute, inexplicable fact which priors a rational agent ought to have, which seems a pretty wild claim -- and an awfully fortunate coincidence if the One True Prior happens to be ours! Still, if it comes to that, we may think there are worse things than dogmatism.

11 comments:

  1. This comment has been removed by the author.

  2. "Actual history lets us rule out H1,"
    You can't assign 'actual history' a probability of 1 based on your senses and memory.

  3. This comment has been removed by the author.

  4. "I think this is a mistatement."

    Hmm, I thought that was how it was introduced in our decision theory class last year, but perhaps I misremember. Can you suggest a better way to introduce Bayesian updating in a couple of sentences?

    "You can't assign 'actual history' a probability of 1 based on your senses and memory."

That's true, but it doesn't really matter for the point I was making there. To pursue this tangent for a moment, though, it does raise the interesting question of how an externalist conception of evidence would affect the Bayesian picture. If someone held that our evidence consists in worldly facts (cf. the objects of our perceptions) rather than mere subjective mental states, they could maintain that we do have knowledge of the external world on empirical grounds after all. Even if my prior were indifferent between realism and anti-realism, they might insist that I (but not my BIV counterpart) ought to update on the fact that I have hands (and not merely that I seem to). This curious strategy would seem a non-starter when it comes to inductive skepticism, however, as the example of updating on the external fact of 'actual history' shows. Unless they just need to appeal to a different kind of external content -- our knowledge of the causal bases of things, perhaps?

    "Do you think someone who accepts the Simulation Hypothesis is being unreasonable?"

In the strict sense of 'less than perfectly rational', I suppose so, though I don't have very strong opinions about it. In the more colloquial sense: I certainly don't think it's an egregiously unreasonable view. But again, it's not really the focus of this post. Someone who believes the Simulation Hypothesis (or counter-induction, for that matter) can reach the same general conclusions, with the appropriate substitutions in the details.

    "A prior that allows you to assert that your prior is coincidentally right is actually rather difficult to find [pdf]"

    Now that's interesting. Hanson actually presented this to our class last semester, but I'm not sure he uses 'priors' in the same idealized way a philosopher would (I vaguely recall someone suggesting that his 'pre-priors' are more like what we had in mind), so it's hard to know how to translate much of his paper.

    The key claim seems to be: "Without some basis for believing that the process that produced your prior was substantially better at tracking truth than the process that produced other peoples’ priors, you appear to have no basis for believing that beliefs based on your prior, are more accurate than beliefs based on other peoples’ priors."

But that seems false -- a result of the kind of epistemic subjectivism we've discussed in previous threads. Suppose someone flips a coin, and if it lands heads they'll tweak your brain so that you become completely incoherent without realizing it. Secretly, it lands tails. You wake up the next day believing that 2+2=4, as usual. Are you unreasonable to believe this -- should you grant it only a 50% chance, due to the coinflip? No, that'd be daft: you have sufficient reason to believe that 2+2=4, and this is not defeated by the mere fact that you narrowly avoided being in a situation where you'd lose all grip on reason. Your grip on reason is just fine.

    Presumably we can say the same thing about the coincidentally perfectly rational agent. Their perfect prior will be self-endorsing, and not undermined by the mere fact that the agent was lucky to end up so rational. See also my post on 'Meta-Coherence vs. Humble Convictions'.

  5. This comment has been removed by the author.

Hypotheses are sets of possible worlds (or 'scenarios'). When you encounter E, you rule out all the not-E worlds -- which presumably overlap significantly with X2 worlds (why else would the conditional probability P(X2 | E) be so much lower?). Possible worlds are cheap, and are 'ruled out' all the time.

  7. This comment has been removed by the author.

  8. Of course, centered possible worlds are even cheaper, and my account extends in the obvious way to cover them. (An observation O1 is evidence for a universe U1 where O1s are relatively common, over U2 where they are rare, precisely because there are proportionally more non-O1 centers in U2 which our observation decisively rules out.)
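To give a toy version of that last claim (with entirely made-up numbers: suppose U1 has 9 of 10 centers making observation O1 while U2 has only 1 of 10, and credence is split evenly between the universes and uniformly over the centers within each):

```python
# Hypothetical numbers for illustration: U1 has 9 of 10 centers observing O1,
# U2 has only 1 of 10. Prior credence: 0.5 per universe, uniform over centers.
centers = (
    [("U1", True)] * 9 + [("U1", False)] * 1 +
    [("U2", True)] * 1 + [("U2", False)] * 9
)
prior = {i: 0.5 / 10 for i in range(len(centers))}  # 0.05 per centered world

# Conditionalize on making O1: rule out every non-O1 center, then renormalize.
surviving = {i: p for i, p in prior.items() if centers[i][1]}
total = sum(surviving.values())
posterior_U1 = sum(p / total for i, p in surviving.items() if centers[i][0] == "U1")
print(posterior_U1)  # ~0.9: O1 favors the universe where O1s are common
```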

I have to say, Carl: that Bostrom article you just posted is philosophy at its absolute worst, and is the reason why philosophers have a bad name with scientists. Cherry-picking and over-interpreting controversial physics hypotheses is not a respectable way to go about doing philosophy. I do suppose the article provides a good illustration of Bayesian reasoning though; I'll give it that much.

  10. "centered possible worlds are even cheaper, and my account extends in the obvious way to cover them."
    If we're talking about centered worlds, then we can take account of the fact that different hypotheses predict the same observations (E) being made a different number of times, but this is critical.

  11. Richard,

    I believe your hope is vindicated by a generalized principle of induction.

Most people restrict induction to physics, but I don't understand this restriction. For example, how do I know that adding the same two numbers will always yield the same sum? Maybe I added incorrectly, or maybe the sum (and other logical truths) changes from time to time. Though we often speak of logical necessities, our belief in such things is founded on fallible experiences.

The reason that logical truths are more certain is that logical/mathematical experiments are perfectly controlled. We know all the axioms. We can always make the uncertainty in our theorems arbitrarily small just by repeating the "mathematical experiment". This is unlike physics, where we're generally uncertain whether our experiment is properly controlled or has some systematic error.

    In my opinion, mental induction (the assumption that past mental experiences are a guide to future ones) is a necessary assumption for rationality itself. Once we make the assumption of mental induction for rationality's sake, the assumption of induction in physics is free because physics is a subset of experience in general.

    If the skeptic is skeptical about induction, he ought to be skeptical about reasoning in general.

