Monday, January 23, 2006

A Paradox for Subjective Rationality

I've written before about why rationality can't be too subjective. But here's another reason: it leads to contradiction. Consider Kolodny's two "core requirements" of rationality:
C+: If one believes that one has conclusive reason to have A, then one is rationally required to have A; and
C-: If one believes that one lacks sufficient reason to have A, then one is rationally required not to have A.

Now consider someone who believes that they have conclusive reason to do what they believe they lack sufficient reason to do. (Granted, this is a very odd belief to have. But I think it is possible. Perhaps they've been told that their beliefs have been manipulated by an evil demon into being unreliable [update: this is explained further in my comments below]. Or perhaps they're just incredibly irrational. Whatever.)

It would then follow from C+ that they are rationally required to do what they believe they lack sufficient reason to do. Call this action 'X'. That is, we have so far established that they are rationally required to X. But recall that our agent believes that they lack sufficient reason to X. It thus follows from C- that they are rationally required not to X. Putting these two results together, we find that our poor confused agent is both rationally required to X and rationally required not to X!

This violates what I will call the "consistency of rational requirements" principle:
(CRR) It is not possible for one to be both rationally required to A, and rationally required not to A.

In other words, rationality cannot make contradictory demands of us. It cannot demand both that we do something, and that we don't do it. That's just not a fair ask.

If (CRR) is true, as I think it is, then the case I provide above shows that Kolodny's "core requirements" cannot be true.
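For concreteness, here is the derivation laid out schematically. The shorthand is mine, not Kolodny's: B(p) abbreviates "one believes that p", CR(A) "one has conclusive reason to A", SR(A) "one has sufficient reason to A", and R(A) "one is rationally required to A". A rough LaTeX sketch:

\begin{align*}
&\text{(C+)} && B(CR(A)) \rightarrow R(A)\\
&\text{(C-)} && B(\neg SR(A)) \rightarrow R(\neg A)\\
&\text{(CRR)} && \neg\bigl(R(A) \wedge R(\neg A)\bigr)\\
&\text{The agent:} && B(CR(X)) \text{ and } B(\neg SR(X))\\
&\text{By (C+):} && R(X)\\
&\text{By (C-):} && R(\neg X)\\
&\text{So:} && R(X) \wedge R(\neg X)\text{, contradicting (CRR)}
\end{align*}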

7 comments:

  1. I like this argument, and I think CRR makes a plausible meta-requirement.

    I can't remember if Kolodny considers cases like this, but there's perhaps a fairly straightforward counterexample to C-, which at least suggests we should be cautious about it. Suppose someone is a skeptic, of a broadly Humean sort, who finds convincing various arguments both that A may well be wrong and that there is no reason for believing A that withstands critical examination; but who believes on good reasons, and perhaps rightly, that as a matter of our construction we can't avoid believing A. It isn't clear that the skeptic's believing A while not believing he has sufficient reason to believe it is irrational, although obviously it's not an optimal state of rationality. There is no contradiction in the claim that given the way we are set up, it is futile for us to try not to believe A, even though we have what we believe to be conclusive reasons for believing that A is wrong; despite the fact that this will always put us in a state of contradiction (or, as Hume suggests, oscillation between the two - and I think, when considering rational requirements, we need at least to consider whether there is any set of conditions under which the most rational thing to do would be to oscillate between two contradictory beliefs, depending on the context).

    Such a state is consistent with CRR because the skeptic doesn't have to hold that we are rationally required to believe A, but only that we can't avoid doing so. It does conflict with Derek's RR2, but I'm not convinced that RR2 is plausible. Suppose I believe B and C, and B implies A but C implies ~A; on accepting these inferences and then, in housekeeping, applying RR2, I would be forced to conclude that I am rationally required not to believe A (because I believe ~A) and not to believe ~A (because I believe A). This doesn't seem reasonable to me, since it must also be rational to reject only one of the two; it would be more plausible to say that (RR3) we are rationally required not to believe one member of a contradictory pair ('at least one member' if we reject excluded middle). (RR1 doesn't seem to me to be plausible, either, by the way; it seems to me that what is rationally required given a conjunctive belief depends on the rational status of the conjuncts in the belief, i.e., something like RR1 may be true, but only with qualifications.)

  2. I think that your initial premise is simply contradictory.

    I would rephrase it so as not to use the word 'conclusive', which you are perhaps using to escape the contradiction.

    "Consider someone who believes that they have sufficient reason to do what the believe they lack sufficient reason to do".

    That is clearly absurd -- one cannot consistently believe something and its negation at the same time.

    Perhaps you could explain what constitutes a "conclusive reason" which is different from sufficient reason.

    Cheers,
    -MP

  3. Niko - thanks for your reply, and for pointing out that crucial footnote I missed!

    "Whatever she does next, she will be in some way irrational. This seems to me, however, the right thing to say about such a case. She has backed herself into a corner."

    I agree with that, but I don't think it necessarily supports your position. To say "you will be irrational if you do not X" is not the same thing as saying "you are rationally required to X". For instance, perhaps they are simply irrational to begin with, and their X-ing has nothing to do with it. (This seems plausible in the cases we are discussing, where the confused agent begins with irrational or even contradictory beliefs.) So I'm not convinced that we can move from your evaluation quoted above to the claim that rationality positively demands that our agent perform a contradiction. Perhaps the following isn't much of an argument, but it just strikes me as a bizarre thing for rationality to ask of us! CRR captures the intuitive idea that rationality offers sensible guidance. "Do both X and not-X" is not sensible guidance!

    On the other hand, I have to agree that your Transparency account makes it clear why CRR is (purportedly) false. If rationality is merely a matter of doing what it seems to you that you ought, then having contradictory beliefs about the latter will straightforwardly lead to contradictory rational requirements. But if we find CRR independently plausible (as I think I do), then that might lead us to reject your Transparency account.

    Derek - as stated, your proposed requirements were narrow-scope ones, and false. The true versions are wide-scope: as you say, they can be satisfied by rejecting the antecedent rather than complying with the consequent. That is, the true version of RR2 would say:

    (RR2-wide) Rationality requires that: if one believes that A, then one does not believe that ~A.
    Equivalently: Rationality requires that: one either does not believe that A, or does not believe that ~A.

    This won't lead to any violation of CRR, because you cannot conclude from my believing that A that rationality requires me not to believe that ~A. Perhaps it rather requires me to reject my prior A-belief.
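    To make the scope contrast fully explicit (again in my own rough notation, with the narrow-scope reading reconstructed from the wide-scope version quoted above): writing B(p) for "one believes that p" and R(...) for "rationality requires that ...",

    \begin{align*}
    &\text{(RR2-narrow)} && B(A) \rightarrow R(\neg B(\neg A))\\
    &\text{(RR2-wide)} && R\bigl(B(A) \rightarrow \neg B(\neg A)\bigr)
    \end{align*}

    The narrow version detaches a requirement on a specific attitude from the mere fact that one believes A, which is how the pair of contradictory requirements gets generated; the wide version only requires that one not combine the two beliefs, leaving open which of them to give up.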

    (If this doesn't entirely make sense, don't worry, I'll probably write another post tomorrow on "wide-scope" stuff.)

    MP - it isn't contradictory for a person to have contradictory beliefs. "Sam believes that P and not-P" is perfectly possible. What isn't possible is for Sam to be correct in this. Too bad for Sam. But our theories should be able to cope with his irrationality nonetheless :-)

    Incidentally, I was hoping my scenario might even be able to avoid ascribing outright lunacy to our agent. Suppose (in that distant possible world where something like Christianity is true) that an angel tells Sally that she is about to be possessed by the devil. The angel warns Sally that this will make her faculties very unreliable, and she will become disposed to initially conclude the very opposite of what she ought. Heeding this advice, Sally decides to outwit the devil by, in each new case, doing the opposite of whatever she initially concludes she ought.

    Sally thus believes that she ought to do whatever she initially believes she ought not to do, and vice versa. This is complicated, but, I think, coherent. She might reason about a problem and initially conclude "I ought to do X". And then she might further reason "Ah, but the devil fools me! I ought instead to not-X." In the middle of her reasoning, we can run through the argument in my main post to show that Kolodny's requirements make contradictory demands of her. But I don't think rationality is making contradictory demands of her in this case. (I do not think that she is rationally required to X at all.)

    I hope filling out the scenario like this makes it a little clearer what I'm trying to get at here.

  4. I don't have much more to contribute to this thread, but I'm enjoying it considerably.

    Derek,

    RR2 implies RR3, I think, but I don't think the reverse is true. Given an initial contradictory conjunction, adding RR2 implies that I ought not to believe either conjunct, whereas adding RR3 implies only that I ought not to believe one of them. The same cases will violate either, of course; but, unless I misunderstood you, they are not the same requirement.

    But you're right that I seem to be thinking of rational requirement in a different way, and I'll have to think further about it. Part of my reason for thinking about it this way is that I'm inclined to think that any account of rational requirements has to take into account paraconsistent initial conditions, for much the same reason that AI theorists are interested in paraconsistent logics: rational people will, in fact, very often be in a paraconsistent state, due to changing beliefs, etc. An account of rational requirements has to account for rationality under these circumstances -- otherwise, we're really just talking about logical implication in an unnecessarily round-about way. And talking about logical implication doesn't on its own tell us anything about what we ought to do, rationally speaking, which is what I was taking rational requirement really to be about.

    Niko,

    I don't think it's so much a matter of 'is irrational in X-ing' implying 'could have done otherwise' as of the sort of inability to do otherwise that's on the table. It's analogous to Berkeley's argument against skepticism about the external world: the skeptic about the external world uses a prejudgment about the way things are to split the world into a merely apparent external world and a real external world, and then identifies the former with what we call the external world and the latter with something we can't know. Berkeley's point is that this isn't a serious attempt to give an account of the external world, but instead an attempt to fit the external world to arbitrary assumptions. Likewise, the skeptic in the rationality case could (as Hume does in Treatise 1.4.1) regard his scenario as a general scenario to which everyone is subject, and then conclude that we can't seriously do anything with an account of rational requirement which understands it in such a way that, whether in general or on a given point, we are always and necessarily in violation of our rational requirements. If we had a good argument for taking rational requirements to be necessities recognized as such a priori, we could get around this; if they are not necessary, or not recognizable a priori, I think the skeptic has a bit more wiggle-room for arguing that he's being rational.

  5. It doesn't make things much clearer to me. You can prove anything from a contradiction; this is well known. Including your theories. (As an aside, is it possible to prove a contradiction from a contradiction?)

    Regardless, I fail to see what is interesting about pointing out that someone with inconsistent beliefs is going to be caught up in a conflict of intentions.

    What do you mean by rationality, other than logic? Rationality and logic can certainly make contradictory demands of us -- they do so all the time. There is a phrase in computer science -- "Garbage In, Garbage Out".

    The way I see it, conflict plays a central role in forcing us to re-evaluate our beliefs. Clearly we don't operate on boolean logic, but numerical probabilities and strengths. We frequently have conflicting goals and desires, and we don't vanish into a puddle of logic.

    I haven't read Kolodny, but I think it's only fair and reasonable to accept that Kolodny's arguments presuppose an agent with noncontradictory beliefs. Maybe I'll get round to it sometime when I'm not at work :)

    Cheers,
    -MP

  6. I have no reason to leave a comment.

  7. MP, as explained in my previous comment, none of my premises were themselves contradictory. (To say "X is a contradiction", or "S believes a contradiction", or to otherwise talk about contradictions, is not itself a contradiction.)

    Update to my response to Derek: the promised post is here.

