Wednesday, July 23, 2008

Evidence, Reasons and Normative Doubts

Follow-up to Introducing 'Merely Normative' Risk.

Andrew Sepielli (in correspondence) points out that empirical and normative hypotheses alike can fit into the following schema:
1. If hypothesis H obtains, then I have objective reason not to φ.
2. There is some non-zero subjective probability that H obtains.
3. Therefore, I have belief-relative reason not to φ.

One immediate problem is that subjective or 'belief-relative' reasons (the sorts of things that follow directly from one's arbitrary beliefs) have no normative significance. Practical rationality is instead a matter of evidence-relative reasons. So let's amend the schema slightly:
1. If hypothesis H obtains, then I have objective reason not to φ.
2. Given my evidence, there is some non-zero probability that H obtains.
3. Therefore, I have evidence-relative reason not to φ.

Premise (2) arguably only makes sense for contingent, empirical hypotheses. Since purely normative claims are non-contingent, there are not multiple possibilities compatible with our evidence. The necessary truths of a priori philosophy are entailed by anything whatsoever; a fortiori, they are entailed by our evidence. If type-F killing is morally permissible (in circumstances C), then it is necessarily so; there is no possibility that it be otherwise. I have sufficient evidence to appreciate this, since no evidence is required at all -- just the faculty of reason.
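In probabilistic terms, the point is a standard feature of classical probability, on which propositions are sets of possible worlds: if H is necessary, then (E and not-H) describes no possibility, so Pr(E & H) = Pr(E), and hence for any evidence E with Pr(E) > 0,

$$\Pr(H \mid E) = \frac{\Pr(E \wedge H)}{\Pr(E)} = \frac{\Pr(E)}{\Pr(E)} = 1.$$

Premise (2) is thus satisfied only trivially: relative to any evidence, a true purely normative hypothesis gets probability 1, and a false one gets probability 0.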

Admittedly, we're not perfectly rational, so one could hardly expect us to get all this right. But that is just to say that our (even practical) reasoning will often go astray. It is not to say that our failures of rationality are perfectly rational after all. Whether we should, rationally, oppose (say) the killing of fetuses with type-F mental faculties depends entirely on which view is more reasonable, and not on any facts merely about our personal psychologies or subjective credences.

Or do you think we need to utilize an even more subjective conception of rationality? I have some sympathy for the idea that we need a notion of reasonableness or non-ideal rationality that we can more reliably follow. But how would this go, exactly?

27 comments:

  1. Interesting... are purely normative claims really non-contingent though? (Is there a possible world where murder, say, is not wrong?) I suppose this depends on the sort of metaethical questions that I like to steer clear of, but it doesn't seem like normative truths are *obviously* necessary.

    (Which is why I prefer my version of this same objection, which doesn't require a position on whether normative truths are necessary or contingent, just on the notion that we don't have access to the kind of information needed to make well-grounded probability statements about them.)

  2. "are purely normative claims really non-contingent though?"

    Yes. This becomes clearer if I state it slightly differently: the normative supervenes on the non-normative. There's no possible world exactly like ours in all natural respects, but where Hitler was right to carry out the Holocaust. The wrongness of it is thus non-contingent.

    Note, though, that we must fully specify the precise act, including its circumstances (as per my parenthetical condition), to get a 'pure' normative claim. General claims like "murder is wrong" are underspecified: which murder, exactly, did you have in mind? There may be disputed empirical elements, e.g. whether the killing was physically necessary to save one's own life, or whether a rule against this type of act would tend to promote certain consequences. But those variations do not threaten the core of non-contingency that I have in mind. [See 'Context and Relativism' for further background.]

  3. "we don't have access to the kind of information needed to make well-grounded probability statements about them"

    That's interesting. What sort of information do you think we would need? (Is there some possible information that would solve this problem, to your mind?) Can we at least make well-grounded judgments as to which normative claim is most likely true? (Otherwise we're in trouble!) But if so, what's stopping us - on your view - from also making well-grounded judgments about which normative claims are, say, less likely but still reasonably probable, and which are all but impossible?

  4. "Whether we should, rationally, oppose (say) the killing of fetuses with type-F mental faculties depends entirely on which view is more reasonable, and not any facts merely about our personal psychologies or subjective credences."
    Regardless, an agent who fails to behave as though credences about necessary truths matter will on average wind up regretting more of her actions and wish that she had so behaved.

    Suppose that someone constructs a device that includes a Doomsday Device, a big red button, and a supercomputer capable of calculating pi to an extraordinary number of places. When someone presses the red button, the supercomputer will compute the nth and (n+1)th digits of pi (in base 10), where n is some cosmically large number, and if both digits turn out to be 2s, the Doomsday Device will be activated. The designers of the machine selected n randomly.

    Further suppose that I have sufficient empirical evidence to assign overwhelming probability to the proposition that the device is as described above, but lack the computational resources to determine the values of the nth and (n+1)th digits of pi. If, in a series of situations such as this (with different values of n), I fail to treat pushing the red button as a 1% chance of disaster, I will wind up regretting my alternative decision procedure.
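    (To spell out the arithmetic behind the 1% figure, on the empirical conjecture that far-out digits of pi behave like independent uniform draws from {0, 1, ..., 9}:

    $$\Pr(d_n = 2 \wedge d_{n+1} = 2) = \tfrac{1}{10} \times \tfrac{1}{10} = \tfrac{1}{100}.$$

    Each particular pair of digits is, of course, necessarily whatever it is; the 1% is evidence-relative through and through.)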

  5. Hmm... wrt the normative supervening on the non-normative, I'm reminded of nothing more than the dualism argument that went around here and other blogs a few months ago. Why can't we borrow your strategy from there, that is, posit empirical-normative bridging laws, and say that those laws can change?

    Perhaps this might be clearer with a less vivid example than the holocaust -- suppose, for example, that we think that utilitarianism is true and that the right way to aggregate the good is by summing. In such a world, per the repugnant conclusion, it would be good to add a bunch of barely-tolerable lives. But we can easily imagine a physically identical world where the right way to aggregate the good is by averaging, and the repugnant conclusion doesn't hold. In such a world, the bridging laws would simply have changed, changing our evaluation of identical physical facts.

    wrt the kind of information needed to make well-grounded probability statements, I don't think there is any sort of information that could make it possible. I'd say we can have ordinal, but not cardinal, rankings of the likelihood of normative truths. (I'm not sure I want to use the word "likelihood" -- that comes loaded with too much probabilistic baggage -- say "plausibility.")

    So, I happen to think both utilitarianism and divine command morality are false, but I think utilitarianism is more plausible than divine command. Nonetheless, I would decline to go so far as to say "much more plausible."

    I'm wracking my brain trying to come up with an account of this gap, in answer to your last question. I'm not sure I have a good one. Perhaps the best I can offer is to turn the focus back to probabilities (as opposed to looser rankings like "reasonably plausible" versus "all but implausible"). If normative claims can be assigned probabilities, those probabilities must obey Bayes' rule. Imagine a normative proposition A with prior probability P and posterior probability P*, and evidence for that proposition B. To get P*, we'd need to know the probability of B given A. But, because we're dealing with abstract normative propositions of the form "killing is wrong," B can only be an argument, not some kind of observation. And how do we assign conditional probabilities to the existence of arguments? Arguments aren't facts at all -- not even moral facts.
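    (Explicitly, with the symbols above, that update is just Bayes' rule:

    $$P^* = \Pr(A \mid B) = \frac{\Pr(B \mid A)\,\Pr(A)}{\Pr(B)},$$

    and the problematic term is the likelihood Pr(B | A) -- the probability of the argument given the normative proposition.)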

    That doesn't completely answer your question about the looser rankings, except indirectly -- if the ability to form the looser rankings entails the ability to form probabilities, then, since we can't form the probabilities, by modus tollens we can't form the looser rankings...

  6. Great comments!

    Paul - I think false theories of value can be ruled out a priori, so we can't coherently "imagine" any such world. (There's much more to be said here though. Maybe I should post on 'moral zombies' sometime.)

    I also want to write on Bayesianism and apriority sometime. (Robin Hanson insists they can be mixed. I'm with you in being skeptical.)

    Carl - neat example! One immediate (if unhelpful) thought is that you wouldn't actually regret it if you instead reason as though the death is certain in the case that it is, and impossible in the other cases. But that's not the sort of advice one can follow, admittedly, so it's a nice case for bringing out the intuition that we need at least a slightly looser understanding of what's reasonable when labouring under such cognitive limitations. (Though note that it's still evidence-based rather than strictly belief-based. Having an unreasonably low credence wouldn't excuse reckless button-pressing, for example, given that you should - in some sense - give it 1% chance of disaster.) Other than that, I'll have to sleep on it...

  7. Richard,

    I'm lost by your comments here on evidence and entailment. Surely you don't have evidence to believe all necessary truths if you believe anything at all? That just seems to assume that evidence is a matter of material entailment, and that just seems false. For example, the evidence for the truth of Pythagoras' theorem is far more specific than that. More broadly, logic is not epistemology!

    I also think the following is relevant, but I'm not sure where. Feel free to move it to the other post if it fits better there.

    The miners are trapped down one of two mineshafts (call them H and I), both of which are about to be flooded. If you close gate A, you will kill the miners if they are in H, and save them all if they are in I. If you close gate B, you will save the miners if they are in H, and kill them all if they are in I. If you close gate C, you will save 90% of the miners no matter where they are.

    Assume you know all of this to be the case, and yet don't know if the miners are in H or I. Then it follows that you ought to close gate C, even though you know that this is not the action that you ought to take if you had all of the evidence. The implication seems to be that we sometimes have reason to act in ways which there is no objective reason to act. (This example has bugged me for years!)
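    (For concreteness: with N miners and an assumed 50/50 chance of their being in either shaft, the expected numbers saved are

    $$E[A] = \tfrac{1}{2}N + \tfrac{1}{2}\cdot 0 = 0.5N, \qquad E[B] = 0.5N, \qquad E[C] = 0.9N,$$

    so closing gate C maximizes expected lives saved, even though you know it is not the objectively best act.)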

    Alex

  8. Alex,

    Of course you have some objective reason to close gate C! Saving 90% of the miners is a good thing. What you know is that this reason isn't decisive relative to all of the reasons, since you know that there is a weightier reason to close either gate A or gate B. Richard's view seems to handle this case just fine. The only reason one is justified in believing exists is the reason to close gate C. One is not in a position to be justified in believing the reason to close gate A, nor is one in a position to be justified in believing the reason to close gate B (well, there will only be an objective reason to close one of them, since the miners will either be in H or I).

    Making standard assumptions about what reasons there are, I don't see how Richard's view can explain a slight variation on the case. To make it easier, I'm going to use a case with an analogous structure. Imagine that you are offered three envelopes. You are told that there is $15,000,000 in either envelope one or envelope two and that there is $1000 in envelope three. It would be somewhat crazy for you to not choose either one or two. In fact, you are obligated to choose either one or two (that's my intuition, at least). But I don't see how Richard's view can get this result given standard assumptions about reasons. This is because one isn't in a position to be justified in believing the reason to pick one (if there is one) nor is one justified in believing the reason to pick two (if there is one). She is only justified in believing the reason to pick three. It seems as if she rationally ought to pick three, then. But that seems wrong.
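    (The expected payoffs spell out the intuition, assuming, as symmetry suggests, a 50% chance for each of envelopes one and two:

    $$E[\text{one}] = E[\text{two}] = 0.5 \times \$15{,}000{,}000 = \$7{,}500{,}000, \qquad E[\text{three}] = \$1000,$$

    so picking envelope one or two comes out far ahead in expectation.)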

  9. Alex - I do think each necessary a priori truth is such that we (epistemically, rationally) should believe it. I'm not sure how to best make sense of non-empirical "evidence" for a priori claims. In the post I suggest none is needed at all, but I might grant your point that there is "more specific" evidence for particular claims, so long as we further add that this evidence is guaranteed to be accessible to us. I should think about this more, though. If we could come up with a good account of a priori evidence that takes into account our cognitive limitations, that might be just what we need to ground the laxer notion of 'reasonableness'.

    Errol - I don't see the problem. It's a fact that (according to your best evidence) you have a 50% chance of winning big bucks if you choose [either envelope 1 or 2]. That fact constitutes an evidence-relative reason to make that choice.

  10. We could think about what sort of human brain a mind with infinite computational resources would construct to perform optimally on the problems (where the values of n for each machine would be determined by radioactive decays unpredictable in advance, so that the answers could not simply be pre-stored in the human brain).

    This would be a sort of rule-rationality, as opposed to act-rationality, and would support the use of subjective probabilities.

  11. Richard,

    I see how you get that result from (1)-(3), but as far as I can tell, (2) does not flow naturally from the earlier post you link to (How Objective is Rationality). In that post, your view seems to be that A has an apparent reason R iff A is justified in believing R and R is the kind of thing that can be an objective normative reason. A is rational iff A does what she has sufficient apparent reason to do.

    If you are making the following standard assumption about objective normative reasons, then I don't see how the fact that there is a .5 probability that the big bucks are in envelope one can be an apparent reason:

    (1 or none): Necessarily, if R is an objective reason for A to x, then R would be in the pro column of the list of things counting in favor of doing x from an all-knowing information-state.

    The proposition that there is a .5 probability that $15,000,000 is in envelope one will never be in the pro column from the all-knowing perspective. That's because one will know which envelope the money is in from the all-knowing perspective. I thought, on your view, apparent reasons are the kinds of things that can be objective reasons. If that's true, and you assume (1 or none), then the proposition about conditional probability cannot be an apparent reason. Thus, in the three-envelope case, the only apparent reason is the reason to choose envelope three. The problem is that it follows from (what I took to be) your view and (1 or none) that one rationally ought to choose envelope three. But that is implausible.

    I guess this has ballooned into a question of exactly what you take apparent reasons to be. Because I don't think the view you express in (2) is consonant with the view you expressed in your earlier post.

    N.B. I don't accept (1 or none), but it is an implication of most theories of objective reasons.

    "Premise (2) arguably only makes sense for contingent, empirical hypotheses."

    May I make you argue? There seem to be cases where I can have contingent evidence for or against necessary propositions. I can get evidence that such-and-such is a mathematical consequence of my probabilistic model by running a computer simulation.
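    Here's a minimal sketch of the sort of thing I mean, in Python (the two-dice model and the 1/6 figure are just a stand-in example of my own):

        import random

        # A mathematical consequence of the model "two fair dice":
        # Pr(sum = 7) = 6/36 = 1/6 exactly. The simulation below gives
        # contingent, empirical evidence for that necessary fact.
        trials = 1_000_000
        hits = sum(1 for _ in range(trials)
                   if random.randint(1, 6) + random.randint(1, 6) == 7)
        print(hits / trials)  # roughly 0.1667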

    Suppose you read "evidence" in a subjectivist way, so that P is evidence for Q from your perspective just in case (where Cr is your credence function) Cr(Q|P) > Cr(Q). Then we can build a perfectly good model that represents empirical evidence about normative claims. We first give you a bunch of quasi-realist belief worlds, or world-norm pairs if you prefer. Then we give you quasi-credences to distribute over those worlds. And then your quasi-credences fix what's evidence for what.
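    A toy implementation of that model (every particular world, norm, and number below is mine, purely for illustration):

        # Doxastic "worlds" are (empirical state, norm) pairs; quasi-credences
        # are non-negative weights summing to 1 over those pairs.
        worlds = {
            ("high_pleasure", "utilitarian"):   0.3,
            ("high_pleasure", "deontological"): 0.2,
            ("low_pleasure",  "utilitarian"):   0.2,
            ("low_pleasure",  "deontological"): 0.3,
        }

        def cr(prop):
            """Quasi-credence of a proposition (a predicate on worlds)."""
            return sum(wt for w, wt in worlds.items() if prop(w))

        def is_evidence_for(p, q):
            """P is evidence for Q iff Cr(Q | P) > Cr(Q)."""
            return cr(lambda w: p(w) and q(w)) / cr(p) > cr(q)

        # An empirical proposition bearing evidentially on a normative one:
        print(is_evidence_for(lambda w: w[0] == "high_pleasure",
                              lambda w: w[1] == "utilitarian"))  # True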

    I'm guessing that you will find this all too subjectivist to be satisfactory, but I'd like to hear the full story.

    Also looking forward to the Bayesianism and apriority post!

  13. Errol - "I thought, on your view, apparent reasons are the kinds of things that can be objective reasons."

    Oh, right, I guess I don't think that's necessary after all. I'll need to think more about it though -- thanks for bringing the problem to my attention.

    Wizard - yeah, I don't think evidence is purely subjective like that. Also: if Q is some true purely normative claim, then - being necessary a priori - there are not any possible ~Q worlds to distribute credence over. So I guess you mean to introduce impossible belief worlds here? Seems suspect. (And I gather you're also rejecting the principle that one ought to believe what's entailed by one's evidence.)

    On the other hand, I agree with you that computer simulations can give us a posteriori evidence for a priori claims. I can't immediately think how to account for this, so that's another puzzle I need to mull over.

  14. I don't see why impossible belief worlds are suspect. People can and do believe (also assert, wish, wonder about, and debate) impossible claims. There's no technical puzzle here: you can use one sort of object to be metaphysically or logically possible worlds, and a second type of object to be doxastically possible worlds. Is there a good reason to use only one type of object, other than the weak reason that it would be convenient?

    I'm not sure I can grant that moral claims are a priori as well as necessary. Is this unfortunate for me? I don't have a very good grasp of the a priori / a posteriori distinction. Are a priori claims ones that we can never get empirical evidence for or against? Ones that should get credence 1 no matter what? Ones that we're entitled to believe in the absence of certain types of empirical evidence that doesn't entail them? On the first two readings, I can't accept that moral claims are a priori; on the third reading, I can.

    (And I gather you're also rejecting the principle that one ought to believe what's entailed by one's evidence.)

    I'm going to be totally wishy-washy here: it depends on what you mean by 'ought' and what you mean by 'entailed'. There's a perfectly good sense in which you ought to believe all and only truths, but that principle isn't always helpful or action guiding. In that sense you ought to believe everything that's entailed by your evidence, provided your evidence is true. In a more subjectivist sense, I think it's false that you ought to believe whatever is entailed by your evidence, if entailment means either classical consequence (what if you're an intuitionist?) or truth in every metaphysically possible world where your evidence is true (what if you have odd opinions about metaphysics?).

  15. Richard,

    Very stimulating post, but I dissent from both of its main claims.

    First, you say that one's belief-relative reasons have no normative significance. I'm not exactly sure what "normative significance" is supposed to mean. Yes, one's belief-relative reasons aren't one's objective reasons, nor are they one's evidence-relative reasons (on all but the most internalist conceptions of evidence), but that doesn't mean that they have no theoretical role to play.

    As you say in another post, we aim at acting in accordance with our objective reasons. Maybe that's right. But in so aiming, we are GUIDED not by those reasons, not by our extra-mental evidence, but by our beliefs. What do I mean by this "guided" talk? I mean that the premises of our practical inferences are the contents of our beliefs. So it's helpful, from the perspective of an agent deciding what to do, to have an account of what to do given our beliefs. Without an account, or at least a set of heuristics that help us to approximate one, an agent won't be able to act in accordance with her objective reasons except through (what from her perspective will be) blind luck.

    But if you think I'm wildly off the mark here, let me know. I have to confess, I'm always baffled by serious debates about questions like "Does the man who mistakes petrol for water really have a reason to drink it or not?" It seems like the obviously right answer is "In one sense yes, in one sense no; both senses have their uses, so let a thousand flowers bloom!"

    Your other main claim is that it doesn't make sense to talk about the probabilities (other than zero and 1) of normative propositions, given one's evidence. There are two arguments on the table in support of this claim -- yours and Paul Gowder's.

    You say that normative truths are necessary, which means they're entailed by every fact, which means they're entailed by my evidence, which means that the probability of all normative truths given my evidence is 1. As several commenters have suggested so far, this seems a little strange. There's a clear sense in which our evidence "leaves it open" whether certain necessary claims are true. I think there are ways to state the relationship between evidence and the proposition it supports that do justice to this sense.

    First, we could say that the probability of P given E is not determined by what E entails, but by what deductively follows from E, or what inductively follows from E, or what abductively follows from E, or something else -- I'm just throwing out possibilities. I think any of these better capture the idea that if E leaves it open whether P, then the probability of P given E isn't 1.

    Second, we could bring in metaphysically impossible worlds. Suppose that utilitarianism is wrong. Then maybe there isn't a metaphysically possible world in which the non-normative facts are as they are here, but where utilitarianism is right. And perhaps you're right that there isn't even a fully conceivable world like that, although I'm less convinced of this latter claim. But there is, as Carnap would have put it, a "state description" according to which this is true. It's even a consistent state description. It simply describes a world that's metaphysically impossible. But since worlds like this one may be consistent with our evidence, there's a clear sense in which they're not ruled out by our evidence, so there's also a sense in which our evidence doesn't rule out utilitarianism.

    Paul Gowder says that "we can never have the kind of evidential basis that we'd use to give specific probability estimates to empirical propositions". Answering this challenge in a satisfactory way would require a lot of hard work, I think, but let me just throw some rough thoughts out there: If there are normative facts, then those normative facts are evidence for normative propositions (just as physical facts are evidence for hypotheses in physics). So, for example, if it's a fact that it's wrong to kill one person to save five (ceteris paribus), then that might be evidence against consequentialism as a normative theory. So what about, say, normative intuitions? Well, to put it informally, evidence of evidence is itself evidence. That someone has the intuition that it's wrong to kill one to save five is evidence that it's wrong to kill one to save five -- the more reliable the intuition, the stronger the evidence. And as we noted above, the fact that it's wrong to kill one to save five may be evidence against consequentialism. Normative intuitions play the same secondary role that observations of physical facts play in physics. If physical fact F is evidence for physical hypothesis H, then the observation that is evidence of F is itself evidence for H. Of course, you might think that normative intuitions are unreliable. But that just means they're akin to unreliable observations (or perhaps to unreliable methods of detection like guesses or tarot card readings, etc.).
    And what about normative arguments? Paul says that arguments aren't facts, so they can't be evidence. But the fact that there is an argument for some conclusion is certainly a fact, so I don't think we should rule out the evidential import of arguments on those grounds. All the same, I think there's another reason not to think of arguments as evidence. Suppose there is a valid argument from premises P, Q, and R to conclusion S. I'd want to say that P, Q, and R, together, constitute evidence for S. But the fact that there's an argument is not further evidence. For there to be such an argument is just for P, Q, and R to be a certain kind of evidence for S. So counting the existence of the argument as evidence is double-counting. So are arguments related to evidence at all, if they don't constitute further evidence, over and above their premises? I think so. When I come to know an argument, I'm made aware of the evidential relationship between the premises and the conclusion. So even if the evidence was already "out there", in some sense, it's only part of MY evidence after I've learned the argument.

    I also think there may be other facts that affect the assignments of probabilities to normative propositions that aren't best thought of as evidence. For example, utilitarianism is simpler than W.D. Ross's theory of prima facie duties. Maybe we don't want to say that this is evidence, exactly; instead, we might want to cite it as a reason to assign greater pre-evidential probability to utilitarianism. I know I'm going out on a limb that's been hacked to death by Goodman and others, but it still might be worth exploring. Okay, I'm going to watch The Wire now...

    Wizard - I mean 'rationally ought', i.e. what an ideally rational agent would believe given the evidence available to you. But note that evidence can be misleading. It must be accessible to the agent, unlike truth, but neither is it merely subjective. So, something in between the two extremes you discuss. (As noted in the main post, I don't think pure subjectivism is particularly "helpful" either.)

    'A priori' claims can be justified on non-empirical grounds (without appeal to experience). So something like your third option, if I read it right.

    I think the standard objection to 'impossible worlds' (see, e.g., Stalnaker) is more metaphysical than technical: there are no such objects. People may have inconsistent beliefs, but there is no coherent scenario or world that corresponds to those beliefs.

    Of course, you could always use, e.g., a set of inconsistent propositions for modeling purposes, but it's misleading to call such a jumble of nonsense any kind of 'world'. More importantly, for present purposes, it would seem out of place to think that such jumbles of nonsense belong in our epistemology, e.g. in governing what possibilities our evidence supports. I guess you dispute this last claim, though. I'm not sure what more can be said in support of it.

    I see that The Wizard of Oz made some of these same points (more succinctly, too ;-)) while I was writing my post. It's a fast-moving world, this blogosphere!

  18. Hi Andrew, thanks for your comment.

    On the different kinds of reasons, I agree that we merely aim for objective reasons, whereas we must be 'guided' by reasons that are actually accessible to us. But I think it is evidence-relative reasons that play this role (i.e. the belief-relative reasons we would have if we were believing as we should). There's something to be said for doing your best according to your available evidence. There's absolutely nothing to be said for doing whatever you (however arbitrarily or unreasonably) believe to be best.

    On the probabilities issue, I think your first response is best. In the second, you claim that necessarily false moral theories are nonetheless "consistent". I disagree. If they're false at all, it must be because of some deep incoherence. The true moral theory is true a priori, and so its verdicts will follow from any 'state description' that includes a full specification of the non-moral facts. If the state description further includes the contrary verdicts of a false moral theory, then we have a contradiction -- not consistent with anything, let alone our evidence.

  19. I'm finding myself unmoved by the standard objection to the impossible belief worlds. I think there are such things (if there weren't how could I use them to represent people's beliefs?), and I don't think it's very nice to call them `jumbles of incoherent nonsense'. Being metaphysically or logically impossible doesn't make a scenario incoherent, any more than being nomologically or practically impossible makes it incoherent.

    The `impossible worlds' rhetoric is flashy and aesthetically nice, but it's actually a little misleading. You could phrase the idea this way instead: Doxastic possibility is the very broadest kind of possibility. There are logically, metaphysically, and morally impossible worlds that are still possible in the broadest sense. Philosophers tend to miss or dismiss a lot of possibilities because they're so busy table-pounding.

    Just to increase the wishy-washiness quotient, I think I'm also worried about whether there's a single standard for `rational'. Here's a very weak bit of a posteriori evidence that there's not a single standard: I wrote a paper that made a lot more sense once I systematically removed instances of the word `rational' and just explained which norms I was gesturing at. `Rational' gets used for a lot of things: valid deductive reasoning, inductive reasoning that's liable to lead to true beliefs, pragmatic reasoning that's liable to get your desires satisfied, provided your beliefs are true, reasoning that helps you avoid Moore's paradox.

  20. Richard,

    This discussion has impressed upon me the need to think more about what it is for reasons to be action-guiding, because we're obviously thinking about the notion in two different ways. Here's my pitch: The premises in any practical inference that you actually make, right now, are the contents of your actual beliefs, right now. They're not extra-mental evidence. They're not what you would have believed if only you'd been smarter, or read more, or been less biased. So there is a need, from the agential perspective, to think about what to do, given your beliefs. Of course you might want to gather more evidence, or think about things more. But the decision to gather evidence or think about things more will also be made from this same agential perspective, where all you have to work with are the contents of your actual beliefs. (And it may even be rational, given your current beliefs, for you to take actions that would yield revisions of those beliefs. See Good's proof of the Principle of Total Evidence, and Skyrms's extremely helpful elaboration thereof.) You claim: "There's absolutely nothing to be said for doing whatever you (however arbitrarily or unreasonably) believe to be best." That's fine for you to say to me, or for me to say to you, but for me to say it to myself is to concede self-defeat.

    Another way of putting the same idea: We bear a certain relationship to at least some of our beliefs. Maybe it's what Block calls "access consciousness". Maybe it's best just to call it "usability". We don't bear this same relation to extra-mental evidence, nor to beliefs that we would have, but don't actually have. This other stuff just isn't in our minds, and so can't be employed in quite the same way in deciding what to do.

    As for impossible worlds -- I see what you're saying. I was thinking that a normative theory could be false, even necessarily false, without being inconsistent. You seem to think that a normative theory that's false has to be "deeply incoherent" (which I take to mean "(logically) inconsistent", otherwise there's no problem with a consistent state description of an impossible world in which this false theory is true). Anyhow, on the bigger issue, I guess I'm with Wizard -- I don't really see the problem with impossible worlds. You say in a previous comment: "...there are no such objects. People may have inconsistent beliefs, but there is no coherent scenario or world that corresponds to those beliefs." But an inconsistent story is still a story -- it's just a story that isn't possible -- so why can't an inconsistent scenario still be a scenario?

    Hi Andrew, I was meaning to emphasize that evidence is by its nature 'accessible' to the agent, and so it is there for us to "work with" or "use" in our practical reasoning. Beliefs are (good or bad) conclusions we reach on the basis of our evidence. There's no theoretical need to take the beliefs themselves as given, as the starting point of inquiry. Rather, we are to start from our evidence, and draw the appropriate conclusions from there. If we are believing reasonably, then this will come to the same thing. The only difference is that unreasonable beliefs are not licensed as legitimate premises for practical reasoning.

    This does mean that a person will not always appreciate when they are being unreasonable. But we already knew that: even on your view, someone might reject basic laws of logic, or deny that he has belief-relative reason to X even though he has some credence in a hypothesis H which (he knows) implies that he has objective reason to X.

    Nobody is wholly subjectivist about action-guidance. I'm sure that you would reject, for example, the following pitch:

    "The logical form of any practical inference that you actually make, right now, is a rule that you are currently disposed to follow. They're not extra-mental norms. They're not what rules of inference you would follow if only you were thinking a bit more clearly, or wrote up a truth table, or could distinguish affirming the consequent from modus tollens. So there is a need, from the agential perspective, to think about what inferences to make, given your dispositions."

    It's not 'guidance' to merely tell an agent what they are already doing (or inclined to do).

    General thought about the disagreement between Richard and Andrew: what we want is rational standards that are not too easy to meet, but not too hard to meet either. Richard is taking "not too easy" really seriously: why bother with a norm of rationality so weak it permits errors about a priori subject matter like logic and ethics (setting aside my personal worries about the a priori for the moment)? Andrew is taking "not too hard" really seriously: how can anyone be expected to satisfy a norm they don't know how to satisfy?

    Having said this, I don't have much to say about how to resolve the conflict; in fact, with my rational pluralism, I'm committed to the claim that there's no unique way to resolve it. (Jeez, this is getting too wishy-washy even for me. Also, jeez, I should be revising a paper.)

  23. OK, one more comment, then I'm done.

    Wizard -- You've got my position right in your first paragraph. But as I suggested in previous comments, I agree with you that there are all sorts of different senses of 'rational', 'should', 'a reason', and so forth, many of which are useful; my only claim is that this belief-relative sense is *one of them*, and that it's useful in the way that I've said. Basically, I want to let a thousand flowers bloom, and Richard wants to exterminate my flower like a vile weed. ;-)

    Richard -- The stance you adopt in this debate leads me to the following conclusion: You believe in the possibility of zombies because you ARE a zombie. Kidding, kidding... Anyhow, I think it would be better to argue about this over coffee sometime. I'll send you an e-mail.

    -Andrew

  24. Richard,

    You say that premise (2) "arguably only makes sense for contingent, empirical hypotheses."

    Let's say Alice is a math professor, and in a class, Bob (one of the students) asks whether the Continuum Hypothesis follows from the ZFC axioms.
    Alice replies that it does not, and neither does its negation, but that's beyond the scope of their course. She still recommends some books to Bob where he can find a proof.
    It seems clear to me that given Bob's evidence (namely, Alice's testimony and his knowledge of Alice's expertise), there is some non-zero probability that neither the CH nor its negation follows from the ZFC axioms.
    Generally, expert testimony provides evidence of mathematical truths.
    On the other hand, it would not be proper to assign probability 1 to every claim made by a math professor, since math professors sometimes make mistakes too.
    So, it seems to me that usually there is some non-zero but less than 1 probability.
    In fact, such testimony can provide such evidence even if the person asking the question does not have the knowledge and/or intelligence required to understand a proof.
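    (For instance, with an illustrative reliability figure of my own: if Bob's evidence is that professors like Alice are right about such assertions 99% of the time, a simple calibration gives Pr(CH is independent of ZFC | Alice's testimony) of roughly 0.99 -- non-zero, but short of 1.)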

    Replies
    1. Yeah, as a matter of non-ideal rationality that seems right. On the other hand, there's an obvious sense in which it's ideally rational to believe whatever is actually the mathematical truth of the matter. This becomes clearer if we model rational credences as probability distributions over possible worlds (and in no possible world is there a mathematical falsehood).
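      (In that model, where Cr(w) is one's credence in world w, a proposition Q gets credence

      $$\mathrm{Cr}(Q) = \sum_{w \,:\, w \models Q} \mathrm{Cr}(w),$$

      so anything true at every possible world -- mathematical truths included -- automatically gets credence 1.)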

    2. I see some potential difficulties with that approach (even granting for the sake of the argument a number of things), such as:

      1. As far as I can tell, the approach of modeling rational credences as probability distributions over possible worlds seems to be problematic even when modeling first-order languages (e.g., http://arxiv.org/abs/1304.2341 ) in which the number of truths is countable, and there is a set of possible worlds. But I'm not sure how you'd go about such modeling when there are more truths than any cardinality, or where possible worlds might not form a set for all we know, etc. Or do you think there is a set of possible worlds, and that the number of truths has a cardinality?

      2. Assuming there is a way around 1., one can reason as follows: just as there is no possible world in which there is a mathematical falsehood, there is also no possible world in which a necessary truth that is not knowable a priori is false. But then, going by the same argument, it would seem to be rationally ideal to believe the necessary truth of the matter, regardless of whether it is a priori or a posteriori. How do you exclude a posteriori necessities?

      3. It seems to me that what is a priori and what is not may well depend on the agent. For example, humans have computational limitations that exclude some mathematical truths from a priori knowledge. But it seems to me the issue might go beyond computational limitations. For example, there are arithmetic truths that do not follow from any axioms we are at this point familiar with, even if those truths are not beyond our computational capabilities. It's not clear to me that those truths can be known to a human being.
      Granted, you might try an agent that is ideal with respect to knowledge, but an agent like that might be an omniscient agent (if omniscience is possible, which might or might not be the case for all I know), and an agent like that may well not need (for all we know) any empirical data. In other words, it might be that all truths are a priori to her, and so she ought to assign probability 1 to every truth (if she ever assigns probabilities). Or it might be that at least all necessary truths are a priori to her, regardless of whether they are a priori to us, so she should assign probability 1 to all necessary truths.

      That said, you might be able to get around 3. by showing that omniscience is impossible. But that one is difficult, and I don't see a way around 1. or 2.

  25. Or maybe you're also excluding a posteriori necessities from (2)?
    I don't find the OP clear on this, because you say "contingent, empirical", but your explanation focuses on the a priori/a posteriori distinction.

