Wednesday, April 16, 2008

Problems for Decision Theories

Andy Egan has a great paper, 'Some Counterexamples to Causal Decision Theory', which effectively makes the case that we (currently) have no adequate formalization of the intuitive principle: 'do what's most likely to bring about the best results'.

We begin with Evidential Decision Theory and the injunction to maximize expected value, but it turns out this is really a formalization of the subtly different principle: 'do what will give you the best evidence that the best things are happening'. Suppose you believe that watching TV is very strongly correlated with idiocy (but does not cause it). You want to watch TV, but you really don't want to be an idiot. We can set up the numbers so that the expected value of watching TV is lower, because then it's most likely you're (already) an idiot. So EDT says it's "irrational" for you to decide to watch TV. But that's ridiculous -- whether you decide to watch TV or not won't affect your intelligence (ex hypothesi). That's already fixed, for better or worse, so all you can change now is whether you get the pleasure (such as it is) of watching TV. Clearly the thing to do is to go ahead and watch.
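
To fix ideas, here's a toy version of the calculation EDT is running. The numbers, and the little Python sketch itself, are my own inventions purely for illustration: idiocy is worth -100, the pleasure of TV +1, and watching is strong evidence of pre-existing idiocy.

    # Toy EDT calculation for the TV/idiocy case (illustrative numbers only).
    P_idiot_given = {"watch": 0.9, "refrain": 0.1}    # evidential probabilities
    U = {("watch", "idiot"): -99, ("watch", "smart"): 1,
         ("refrain", "idiot"): -100, ("refrain", "smart"): 0}

    def edt_value(act):
        p = P_idiot_given[act]
        return p * U[(act, "idiot")] + (1 - p) * U[(act, "smart")]

    for act in ("watch", "refrain"):
        print(act, edt_value(act))
    # watch:   0.9 * -99  + 0.1 * 1 = -89.0
    # refrain: 0.1 * -100 + 0.9 * 0 = -10.0
    # Watching beats refraining in every state of the world, yet EDT ranks refraining
    # higher, because deciding to watch is (mere) evidence that you're already an idiot.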

Causal Decision Theory (a la David Lewis) tries to get around this by holding fixed your current views about the causal structure of the world (i.e. ignoring the fact that choosing to watch TV would be evidence that you instantiate the common cause of idiocy and TV-watching). This solves the previous problem, but introduces new ones. Suppose that instead of correlating with idiocy, TV-watching correlates with a condition X that makes one vulnerable to having TV turn one's brain to mush. If I don't watch TV, I'm probably fine, and so could watch TV without harm. If I initially think it unlikely that I have condition X, then - holding that assessment fixed - CDT advises me to watch TV. But that's nuts. Most people who end up deciding to watch TV have condition X. So if I decide to watch TV, that's new evidence that I'm susceptible to having my brain turned to mush. That is, if I make that decision, I'm probably seriously harming myself by doing so.
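
Here's the contrast in miniature, again with made-up numbers of my own: the CDT figure holds my prior credence in condition X fixed, while the EDT figure conditions that credence on the decision itself.

    # Toy contrast for the 'condition X' case (illustrative numbers only).
    prior_X = 0.05                                      # before deciding, I probably lack X
    P_X_given_choice = {"watch": 0.9, "refrain": 0.05}  # but most who choose to watch have X

    def utility(act, has_X):
        # Watching is worth +1; if I have X, watching also turns my brain to mush (-10).
        if act == "watch":
            return (1 - 10) if has_X else 1
        return 0

    def cdt_value(act):   # hold the prior credence in X fixed
        return prior_X * utility(act, True) + (1 - prior_X) * utility(act, False)

    def edt_value(act):   # condition the credence in X on the choice itself
        p = P_X_given_choice[act]
        return p * utility(act, True) + (1 - p) * utility(act, False)

    for act in ("watch", "refrain"):
        print(f"{act}: CDT = {cdt_value(act):+.2f}, EDT = {edt_value(act):+.2f}")
    # watch:   CDT = +0.50, EDT = -8.00
    # refrain: CDT = +0.00, EDT = +0.00
    # Holding the prior fixed, CDT endorses watching; updating on the decision reverses that.

(This two-state setup is only a rough stand-in for Lewis's dependency-hypothesis machinery, but it captures the feature that matters here: the decision itself never moves the probabilities that CDT uses.)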

So, neither evidential nor causal decision theory is adequate. Though I should note a proviso: all these objections assume that the agent has imperfect introspective access to his own mental states. Otherwise, he could discern whatever states (e.g. beliefs and desires) will cause him to reach a certain decision, and those mental states will provide all the relevant evidence (as to whether he is an idiot, or has condition X, or whatever). The decision itself will provide no further evidence, so these problems will not arise. (Once you have the evidence as to whether or not you're an idiot, you can go ahead and watch TV. In the second case, whether you should watch TV will be settled by the evidence as to whether you have condition X.) But a fully general decision theory should apply even to agents with introspective blocks.

Andy proposes and then rejects a view he calls lexical ratificationism. The idea is that an option is ratifiable when, conditional on your choosing it, it has the highest expected value. You should never choose an unratifiable option (e.g. refraining from watching TV in the first 'idiocy' case) if some ratifiable alternative is available. But sometimes there are no self-ratifying options (as in the 'condition X' case), in which case you should simply follow EDT.
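
Roughly, in notation of my own: write EV[a][b] for the expected value of doing b, calculated with your credences conditioned on the news that you choose a. Then an option is ratifiable just in case it comes out on top by its own lights:

    # Sketch of the ratifiability test (notation and numbers are mine).
    # EV[a][b] = expected value of doing b, given the news that you choose a.
    def ratifiable(option, EV):
        return all(EV[option][option] >= EV[option][alt] for alt in EV[option])

    # Toy numbers for the 'idiocy' case, continuing the earlier sketch:
    EV = {
        "watch":   {"watch": -89, "refrain": -90},   # given that I choose to watch
        "refrain": {"watch":  -9, "refrain": -10},   # given that I choose to refrain
    }
    print({a: ratifiable(a, EV) for a in EV})
    # {'watch': True, 'refrain': False}
    # Only watching is ratifiable, so lexical ratificationism says to watch --
    # the intuitively correct verdict that EDT missed.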

The objection to this view comes from Anil Gupta's 3-option cases. Suppose that most people who smoke cigars have some background medical condition such that they would benefit from smoking cigarettes instead, but would suffer great harm if they chose not to smoke at all. Similarly for cigarette smokers -- they would likely benefit from switching to cigars, but would suffer harm if they smoked nothing at all. So neither smoking option is ratifiable (each recommends the other instead). Non-smokers, on the other hand, do best to refrain from smoking, so this option is ratifiable. Still, the thought goes, if you're initially leaning towards cigar smoking, you may have some reason to switch to cigarettes instead, but the one thing you can be sure of is that you shouldn't be a non-smoker. So ratificationism, too, yields the wrong results.
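
Running the same test on a toy rendering of Gupta's case (the numbers are again mine) makes the structure vivid -- each smoking option recommends the other, and only abstaining endorses itself:

    # Toy rendering of the 3-option case (illustrative numbers only).
    # EV[a][b] = expected value of doing b, given the news that you choose a.
    EV = {
        "cigars":     {"cigars": 5,  "cigarettes": 10, "abstain": -20},
        "cigarettes": {"cigars": 10, "cigarettes": 5,  "abstain": -20},
        "abstain":    {"cigars": -5, "cigarettes": -5, "abstain": 0},
    }

    def ratifiable(option):   # same test as in the previous sketch
        return all(EV[option][option] >= EV[option][alt] for alt in EV[option])

    print({a: ratifiable(a) for a in EV})
    # {'cigars': False, 'cigarettes': False, 'abstain': True}
    # Only abstaining is ratifiable, so lexical ratificationism says not to smoke --
    # precisely the verdict the objection claims is wrong for someone drawn to cigars.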

I'm not sure about this objection, for reasons Helen brought to my attention. Note that it can't just be the initial inclination towards one option that is the evidence here -- otherwise you could note your inclination for cigars and decide to smoke cigarettes, no problem. Instead, it must only be your ultimate decision (post-deliberation) that's evidence of the relevant medical condition (never mind how radically implausible this is). But then there's nothing wrong with the ratificationist answer after all. If you're susceptible to being persuaded by ratificationism not to smoke, then (ex hypothesi) that's very strong evidence that you don't have the other medical conditions, and so not-smoking really is most likely to be best for you. A mere initial inclination towards cigars is no evidence to the contrary.

An interesting point Andy made in response is that this might work for first-personal guidance, but we also want a decision theory to apply third-personally, i.e. to tell us when others' decisions are rationally criticizable. And it would be bizarre, in this case, to tell a cigar smoker that they should have chosen to be a non-smoker instead, when (given that they ultimately chose to smoke cigars) they probably have a medical condition that would make non-smoking harmful to them.

I think the upshot of all this is that we can't give any third-personal advice in these problem cases until we see what decision the person themselves made. Until then, the only normative guidance on offer is first-personal, and lexical ratificationism gets that exactly right.

What do you think?

5 comments:

  1. If you initially don't watch TV, and then take this fact as evidence that you don't have condition X and consequently start watching TV, I don't think that counts as evidence for condition X.

  2. It mangles the case if you're allowed to revisit your decision later. So suppose you can't: it's a one-off decision, to which you're bound for life.

    (Note also that we're stipulating that it's your ultimate decision, not your initial inclination, which is considered evidence here.)

  3. If knowing your mental states solves everything, then why not figure out the best decision for each state, and then average over the states using your beliefs about them, but NOT updating those beliefs on the fact of your decision?

  4. Er, I meant figure out the expected utility for each mental state.

  5. Isn't that basically what Lewisian causal decision theory involves? The problem is that in cases like 'Condition X', it's irrational not to update your belief about your mental states (+ background causal structures) based on your ultimate decision. You end up making a decision that will likely turn your brain to mush.

    Compare a simpler example from Andy's paper: The Psychopath Button. You can press a button to kill all psychopaths (yay!), except that you're extremely confident that only a psychopath would actually press such a button (doh!). Your decision procedure, like CDT, recommends pressing the button. After making your decision, and belatedly updating your beliefs, you will thus expect to die.

