Sunday, November 01, 2015

Self-Undermining Skepticisms

Radical skepticism has a curious tendency to undermine itself:  If you can know (or justifiably believe) nothing at all, then you cannot know (or justifiably believe) even that.  So it seems that one cannot coherently take oneself to be incapable of forming justified beliefs.

More limited forms of skepticism might hope to avoid this fate. But it can prove difficult to halt the slide once you've started down that route.  Consider Sharon Street's epistemic argument against moral realism, which we might reconstruct as follows:


(1) Coherent moral diversity: there is more than one possible internally-coherent normative stance that could survive (procedurally) ideal reflection.

(2) The normative facts are causally inefficacious, and exert no influence on our normative beliefs that might make them tend towards accurately representing the truth of the matter.

(3) Genetic Debunking: If (one learns that) a belief is caused (and maintained) by factors that are insensitive to the truth of the matter, and there are no independent grounds for expecting one's belief to reliably track the truth, then one should give up the belief.

(4) If the normative facts hold independently of our subjective viewpoints, then there are no independent grounds for expecting our normative beliefs to reliably track the truth.

So, (5) Either we lack normative knowledge, or realism is false: the normative facts do not hold independently of our subjective viewpoints.

At first glance, it seems a fairly compelling argument. The problem arises when you notice that the argument could be run again taking the belief in genetic debunking, rather than normative beliefs, as its target. However substantively plausible you may find it, such a sophisticated epistemic principle can certainly be coherently denied. So the topic admits of coherent diversity in opinion.  Like pretty much all abstract philosophical claims, the fact of the matter makes no causal difference to the world.  So, one's belief in genetic debunking is insensitive to the truth of the matter. There don't seem to be any independent grounds for expecting one's belief about debunking to track the truth.  So, by genetic debunking's own lights, one should not believe the principle.

Street's own response is to try to go constructivist "all the way down", holding that the genetic debunking principle (and constructivism itself, for that matter) is not a mind-independent truth, but one that is true for an agent just insofar as every coherent outgrowth of their current belief set ultimately commits them to it. So she must deny that coherent diversity of opinion on the matter is possible (otherwise these principles need not hold true for the targets of her arguments). But this is not a feasible option. As I show in 'Knowing What Matters', there is a route for the robust realist which is clearly coherent; the only question is how substantively plausible it is, but that question doesn't matter for the purposes of establishing that Street's principles are self-defeating.

The upshot seems to be that, so long as moral realism is coherent at all (as it clearly is), then it is defensible against most epistemic objections.  For whatever skeptical epistemic principle you try to invoke against moral realism will (as a similarly abstract, philosophical claim) presumably also apply against itself.  (Can you think of a more targeted epistemic objection that isn't self-undermining in this way?)

If we combine this with the observation that metaphysical parsimony arguments don't work against moral realism, the view ends up looking a whole lot more defensible than one might initially have expected...

94 comments:

  1. I don't see a good reason to admit that the normative facts are causally inefficacious. If the argument is that any apparent causal efficacy can be explained by other non-normative facts, then I would have to say that the fact that something is red does not cause my belief that it is red, because my belief can be explained by factors that do not refer to anything's being red.

    But despite that, the fact that something is red still manages to cause my belief that it is red. So why shouldn't the badness of murder cause me to believe that murder is bad, despite there being a theoretical causal path that doesn't refer to murder's being bad?

    1. Yeah, you might be sympathetic to my old post on epiphenomenal explanations. But for current purposes, it seems sufficient to note that there is (presumably) no general causal pressure towards believing the moral truth, even if some of the causes of our beliefs could be re-described as moral causes. The badness of murder could just as well cause Caligula to believe that murder is good, after all. There's no inherent tendency for agents to get things right, on moral (or other philosophical) matters, I take it.

      Colour is different precisely because it doesn't claim to be robustly mind-independent. E.g., on a response-dependent account, the accessible fact that you're inclined to judge that X is red is some evidence that X is such as to incline normal observers in normal conditions to judge it red, etc. (assuming you've no special reason to believe yourself or your viewing conditions to be abnormal).

      A response-dependent account of morality could make the same move, but that's to give up on the sort of robust realism about normativity that I'm interested in defending.

  2. Hi, Richard,

    I think that if the target is Street's target (i.e., what she calls "uncompromising normative realism", or UNR), a better objection can be found in some of the examples in Street's own papers.
    For instance, she mentions an advanced social insect, which probably would have values radically different from ours.
    Now, for a number of reasons I think it's improbable that advanced aliens would evolve from something like social insects (though it would happen in a sufficiently big universe, like an infinite one). However, there are plenty of other alternatives. If aliens evolved from things like orcas, elephants, or octopuses, they would likely have very different evaluative attitudes (from one another and from us).
    Yes, they will all likely say "pain is bad" (though, whose pain? That will likely differ too), but when assessing situations in which, say, orcas die of starvation vs. seals get brutally torn apart by orcas (or elephants vs. lions, etc.), their judgments are likely to be very different, not to mention the judgments of obligation, etc.

    I would say this is not likely to go away if they advance even further and become star-faring civilizations: even if they adapt to their environment, that environment includes their social environment, which is very different to begin with and would likely continue to be so as they advance, given that they will probably value keeping many of their psychological traits.

    So, the question would be: what are the odds that humans ended up with something like the right evaluative attitudes?

    For example, if one of those orca-aliens, a human, and one of the elephant-aliens get into a moral debate, it seems their diverging moral senses will probably never allow them to reach agreement, since the individual of each species would be making assessments based on their own faculties (i.e., the moral sense, or whatever one calls it), and those lead to different results.

    From a different perspective, an argument goes like this: the UNRist (who is not a theist or something like that) is (in practice) committed to the claim that evolution is extremely likely to produce a reliable moral faculty that tracks the mind-independent moral truth in any species that makes normative assessments. Given that, they seem committed to the claim that most aliens that make such assessments (if there are, were or will be any) have approximately the same values (in the relevant sense; they might value meals that taste very differently, but that's another issue) and make the same normative judgments as we do; moreover, in cases of disagreement, the matter could nearly always be resolved through reason (at least in principle), with aliens from different species eventually reaching the same conclusions.

    But those exobiology claims are clearly not warranted.
    Of course, the UNRist may insist that those claims are warranted; I would disagree, but personally, I think that's a much more interesting sort of discussion - one that does not revolve around arguments that may well be vulnerable to self-defeating objections, but takes the debate in a direction that is much less frequently addressed and (in my assessment) represents a much bigger challenge.

    (theists would likely have those exobiology commitments as well, but I'm leaving that aside because I don't think it's so relevant in the context of these discussions).

    1. Hi Angra, you might be interested in my previous post on Street's "normative lottery" argument. I deny that moral realists are "committed to the claim that evolution is extremely likely to produce a reliable moral faculty that tracks the mind-independent moral truth in any species that makes normative assessments." We may instead believe human-like beings to be, in a sense, "lucky", or amongst the moral "elect". To rule this out, you must appeal to a contentious (and coherently disputable) epistemic principle, which again risks being self-undermining.

    2. Thanks for the link. Very interesting arguments. I'll try to come up with a more elaborate answer later, but for now, some preliminary thoughts are:

      1. It seems to me the UNRist is at least committed to the view that in case there are (or were, or will be) aliens that make normative judgments, either most of them (with some clarification for the infinite case) make nearly the same judgments we make (after reflection), or else at least half of them got an unreliable sense. Is that assessment correct?
      If so, one can then reckon that that claim itself seems very improbable.
      2. Even if the aliens "seem generally sympathetic and altruistic, concerned to promote the wellbeing and non-harmful life goals of other sentient beings", the realist who says they're on the right track seems to me committed to the idea that given reflection, they will reach the same place we would (also, given reflection), rather than (for example) end up with considerably different orders of the goods and bads (e.g., they think A is good and B is good and so do we, but they rank A as better than B, whereas we do the opposite).
      3. I don't think most aliens are likely to diverge from us as much as in the color case (social aliens had to resolve problems more or less similar to ours, which gives some further similarities), but enough to reach very different judgments after reflection in many cases. So, they may well not end up enjoying torture for fun (though they might enjoy hunting us for fun and not deem it wrong after reflection; but let's say not), and may well say "pain is bad", but even a, say, 65% overlap after reflection would be far too low - i.e., one of those species would not have a generally reliable moral sense, and the matter could not possibly be corrected after reflection, given that each individual reflects from her own faculties.
      In short, even granting "pain is bad" is much more probable than "pain is good" (and that's part of the overlap we should expect, except maybe for the "whose pain?" issue), there are zillions of other judgments, like: "is it better if the lions eat and the elephant suffers as she's torn apart? Or is it better if the lions starve to death?", and so on.
      4. One can raise an objection to your argument by means of a color parallel - i.e., what about "uncompromising color realism"? (UCR)
      The UCRist might say that humans are the lucky ones with the right color vision, and nearly all other animals on Earth got it wrong. But that surely is wrong-headed. What do you think the relevant difference is?
      Going by your reflective equilibrium proposal, I suppose that you might reply that the most plausible conclusion in that case is that we get generally true color beliefs, but the other animals don't get it wrong, either - they get something other than color, even if it's similar to it.
      But then again, by the same procedure, the non-UNRist may reach the same conclusion in the case of morality. I know I do; the idea that those aliens would get it all wrong and we got lucky seems extremely improbable; it's far more probable that they have something akin to morality, but not quite the same. What do you think the relevant difference is?

    3. 1. Yep, that's the view. (Anyone remotely sympathetic to moral realism will not find it "implausible" that if beings disagree, at least one of them is wrong. So I just don't share your intuitions here at all.)

      2 & 3. Fair point, that's a reason to mitigate our confidence that social beings will generally get the details right -- more detail about their make-up would be needed for that. But one can repeat the general argument at a finer grain of detail as needed. My claim is not that many possible beings would get it right. Rather, it's that we can, in principle, identify some set of naturalistic features that (i) can be expected to cause roughly the right (by our/my lights) moral judgments and (ii) apply to humans (or, more specifically, to me). So we can give a (question-begging, not independent) explanation of why we came to have true moral beliefs, which I claim is all that can be reasonably demanded.

      4. UCR seems unmotivated. Colour seems like a paradigmatically subjective phenomenon. Moral realism, by contrast, is motivated by the thought that, e.g., you really shouldn't engage in gratuitous torture no matter how good an idea it subjectively seems to you.

      Just imagine asking folk the following two questions:
      (i) Could torturing babies be wrong even if it seemed right to everyone (even upon careful reflection)?
      (ii) Could an object be red even though it seemed green to everyone (even in ideal viewing conditions)?

      I think (and I don't believe I'm idiosyncratic in this respect) that the answers are (i) obviously yes, and (ii) obviously not.

    4. 1. I don't think they disagree. They would have a dispute perhaps (as one can have with a lion), but that's not a disagreement in the relevant sense (except each of them may believe the other is mistaken if they are UNRists, in which case they do disagree about whether the other agent is making false statements). I guess we may have to agree to disagree on that.

      4. I think a naive pretheoretical view is something like UCR.
      That aside, I also agree that one shouldn't engage in torture for fun (if that's what you mean by "gratuitous"), regardless of how it looks to you. In other words, it's immoral to do that. Well, it's immoral for humans, that is. Not for lions. And I'm not sure about orca-aliens (are they even moral beings?). But for that matter, I would say that stop traffic lights are in fact red, regardless of how they look to you, or to the orca-aliens. They may well not be orca-alien red, but that's not the same as red.

      Regarding your questions, there are two difficulties:

      a. In (ii) you're using the term "ideal", which may suggest a proper assessment, whereas in (i), you're talking about "careful reflection", which does not have the same implication. On the other hand, if "ideal" has no such built-in assumption but is just about ideal viewing conditions, then maybe all people except color-blind people are killed by aliens, and so red objects will no longer look red to everyone even under ideal viewing conditions. You may want to find a way around this by talking about properly functioning systems or something like that, but in that case, I don't know that you'll get the answers that support UMR but not UCR.

      b. In the moral case, you're already presenting the specific example, and it's an example in which we already deem the behavior morally wrong. On the other hand, in the color case, you're talking about a hypothetical object, rather than pointing at an actual, specific object that we already judge to be red. A closer example (but still with the difficulty I mention in a.) would be:

      (ii)' Could this apple be red, even if it looked green to everyone, even under ideal conditions?

      But even that would not overcome the problem, because then people would be inclined to think that someone has altered the color of the apple in the picture, if it looks green to everyone "under ideal conditions".
      On the other hand, people already believe that we have properly assessed that torturing babies for fun is immoral.

      Perhaps, you could try asking:

      (ii)'' Could an object that (under ideal conditions) reflects light of 700 nm be red, even if it looked green to everyone under ideal conditions?

      But in this case, the answer might be "yes" - or maybe "no", but it seems to depend on whether people tend to think science got it wrong, or that the "ideal conditions" do not require proper functioning of the visual systems in question (because otherwise, we already know that a properly functioning visual system under ideal conditions will see the object as red).


      Still, let's say that UMR is much more probable than UCR (in terms of priors). Even then, and after further consideration, I think Street's lottery objection (though modified to some extent) succeeds; I will give an argument in another post.

  3. Well, this had better not be an argument against *moral realism*, since (2) precludes the possibility of naturalistic moral realism, right? I don't just say this to be nitpicky: it might just be that the genetic-debunking principle provides the realist with powerful reasons to go naturalist. Moreover, if Street's argument is to work against moral realism, it had better not include a premise that just eliminates one prominent version of the view!

    1. The argument needs to be complicated slightly to deal with naturalistic moral realism. But the basic route to extending it appropriately is, I think, tolerably clear: the kind of "causal efficacy" you get by identifying normative properties with natural ones is not of a kind that exerts any general causal pressure towards having true normative beliefs. The naturalist and the non-naturalist will presumably agree about what natural causes are behind our normative beliefs (which may include evolutionary forces, enculturation, etc.). If there's no independent reason to expect that these natural causes will lead us to accurately identify the normative valence of natural properties (say, if non-naturalism were true), this remains equally so even if some natural properties are also normative properties.

  4. If you choose this strategy to defend moral realism against Street's argument, notice that you're required to be *actually skeptical* of genetic debunking. It's not enough to brush off the anti-realist by observing that genetic debunking is vulnerable to a formally similar objection, so go home and stop bothering me. As long as you believe that genetic debunking is reasonable - as in, you think it's a fair charge to make against views you don't like - you will have trouble defending moral realism. You seem to be saying: All you need to do to defend moral realism is to debunk genetic debunking - as if we'd be prepared to live with the consequences of going without it. I am not.

    1. Yes, I'm actually skeptical of genetic debunking. What do you see as the untoward consequences? For the sorts of things I want to be skeptical about (supernatural beings, etc.), I'd rather justify my skepticism on substantive than on genetic grounds.

    2. After bars closed one night, I was waiting in line for a slice of pizza. A wobbly guy ahead of me told me about how the world is about to end, and how he saw it all in a vision (which featured angels, etc.). I asked him why he thought that what he saw in that vision was really true. This kind of surprised him, but he insisted that "these kinds of things are not lies!" You suggest he instead should have said: "Hey, you're trying a genetic debunking of my thesis and I won't have it. So what if all my evidence could be explained by all the meth I was on at the time? That has no bearing on the justification of my belief. If you want to contest my vision, you have to refute my beliefs based only on their content, not on the mechanism which produced them." Am I getting this right?

    3. Ha, thanks -- yes, it does sound a bit awkward when you put it like that! :-)

      Having said that, it may not be as bad as it seems. We should certainly ask people why they believe the things they do. But the relevant sense of 'why' is to ask for epistemic reasons, not causal explanations. We should give very low prior credence to the world's ending tomorrow, and unless the wobbly guy can offer some real evidence to cause us to update our beliefs, we can reasonably judge him to be crazy. Someone's having a vision of that sort is not good evidence that the world will end tomorrow. (The fact that he was on meth at the time is really neither here nor there.)

    4. I wonder how you would characterize our "epistemic reasons" for thinking the world will not end tomorrow. Since I believe the principal principle, and I don't know how to give a non-circular justification for the conclusion that 'Probably, tomorrow will resemble the past', I have a hard time coming up with a properly epistemic justification for my credences regarding tomorrow.

      My larger point was that an effective fight against the skeptic requires charting a middle course. If you give the skeptic too many debunking tools, they will succeed at undercutting even genuine justification. But if you nuke too many debunking tools, then the skeptic has a new winning move: cataloguing for you the great diversity of stupid views that are just as undebunked as your own, and reminding you that your principles require you to suspend judgment among equally justified possibilities. A debunking of genetic debunking sets us up to lose to the latter skeptical strategy.

    5. Yeah, I don't think the justification has to be non-circular (you have to hit bedrock sometime). It may just be a brute epistemic fact that some priors are objectively more rational than others.

      Epistemic objectivism seems pretty immune to either skeptical strategy. Again, I think stupid views are better rejected on substantive than on genetic grounds.

    6. This comment has been removed by the author.

    7. Aren't you worried that if you allow circular justifications, many completely insane views are going to turn out to be justified on substantive grounds? I went back and followed your link, and I endorse all the wishes you express there, but I'm concerned that your concessions here make the rejection of any priors even harder.

  5. Does showing that genetic debunking is self-undermining show that we are justified in disbelieving it? I get the feeling that this might be a case where we are not justified in disbelieving it (because our other norms tell us to accept it) and not justified in believing it either (because the principle tells us to reject itself). I think it is hard to deny that this is possible (consider our situation with respect to Boltzmann brains), so it seems plausible to me that that is what's going on here.

    1. An interesting possibility! Seems an awkward position to be in, though, so if we can get by without it (or any other norms that would entail it), that would seem preferable...

  6. As I see it, a modified lottery argument succeeds. It goes as follows:

    In the lottery case, we have:

    L1. For any given number m (between certain fixed numbers), the probability that the lottery mechanism picks m on a single try is almost zero.
    L2. The probability that there is at most 1 winning number is almost 1 (by "winner" I exclude lower prizes, shared prizes, etc.)

    Given that, the probability of having the winning number (absent other pieces of relevant evidence, as in the example) is almost zero.
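    The probabilistic shape of the lottery case can be put in a few lines (the million-number figure is my own illustrative assumption; any large range gives the same result):

```python
from fractions import Fraction

# Illustrative figures (my own assumptions, not fixed by the example):
# a million possible numbers, and at most one of them wins (L1 and L2).
n_numbers = 1_000_000
n_winners = 1

# Absent any other relevant evidence, every number is on a par, so the
# probability that the number you hold is the winner is:
p_win = Fraction(n_winners, n_numbers)
print(p_win)  # 1/1000000 -- "almost zero" for any large range
```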

    Let's consider now the color case.

    C1. For any color sense CS, the probability that evolution produces CS on a single run is almost zero.

    That is actually analogous to the lottery case.
    If we assume:

    C2: The probability that there is at most 1 winning color sense (i.e., the correct color vision) is almost 1.

    Then, the chances of having the winner are almost zero, absent other relevant evidence. Moreover, the prior credence we properly give to our own color beliefs will not overcome this - the problem remains, in my assessment.
    Even the probability of getting the right color sense or one that, say, overlaps by 90% or more with it is very low.
    However, that doesn't lead to color skepticism, because C2 is false (i.e., it's an improper probabilistic assessment). Here, C2 has to go - that's where the analogy between the lottery and color ends.

    The color case and the moral case seem relevantly analogous here.

    We have:

    M1: For any moral sense MS (I'm talking about the deliverances after epistemically rational reflection), the probability that evolution produces MS on a single run in which a moral system is produced, is almost zero.

    Now, what about "pain is bad"?
    I would ask "What about the pain of agents who deserve pain?", but leaving that aside, let's say that the probability of a moral system that will yield "pain is bad" is close to 1. Even then, the probability of a specific MS (with all of its results) is almost zero.
    But what about the probability, given a fixed system MS, of a system MS' with a 90%+ overlap in the cases that the evolved entities will entertain?
    It's at least low. So, we have:

    M1': For any moral sense MS, the probability that evolution produces, on a single run in which a moral system is produced, a system MS' that overlaps at least by 90% with MS in the cases the evolved entities actually entertain, is considerably lower than 0.5. [but just "lower than 0.5" will do]

    If one assumes also:

    M2: The probability that there is at most 1 winning moral sense is almost 1.

    Then, the chances of having it are low. Even if our first-order moral beliefs are prima facie justified, in the face of that challenge, it seems to me that skepticism would be epistemically required - again, assuming M2.
    Maybe the probability of getting at least 90% close to the right system is not as low as in the color case, but still low. But the problem here is M2 - even if UMR has a higher prior than UCR, it's still not high enough.

    So, as I see it, the (modified) challenge succeeds, without having to resort to any questionable epistemic principles.
    At this point, the UMRist might want to question M1 and/or M1'. I don't think that a challenge like that succeeds, but it would take the debate into the territory of what one should expect from evolution, which is in my view a better place.

    1. I don't know what epistemic principle you're implicitly relying on to get from M1 (which I'm happy to grant for sake of argument) and M2 to the conclusion that our moral beliefs are unjustified, but I suspect it'll look a lot like Genetic Debunking, and I expect I'll reject it for the same reasons.

      Most views are wrong. My view is a view. It doesn't follow that my view is likely to be wrong. It depends on the substantive content of what my view is!

    2. I'm a little puzzled about how this challenge is supposed to succeed, as well. What the challenge has to be directed at is the claim that the human moral sense is the lucky ticket, so to speak; but the 'challenge' seems to amount to no more than saying that the lucky ticket in this case would have to be a lucky ticket -- which doesn't actually pose a challenge to the original claim unless we are adding some further assumption.

  7. Richard,

    I don't think I'm implicitly relying on any general "genetic debunking" principle in the case of M1 (or C1, or the lottery). I'm making an intuitive probabilistic assessment in that specific case, given the information available to me: our moral beliefs would be unjustified (though they aren't) if we assumed M2 (i.e., assigned probability 1 or almost 1 to M2); so that's not the right assignment.
    If I get your reply right, you're saying that the content of our beliefs - the first-order moral ones - and/or the high prior that we give to them would justify the conclusion (given M1 and M2) that we almost certainly got the correct one. I have to say I find that puzzling - i.e., intuitively, that's clearly not right, in my view.

    That said, I admit that given that you don't find the argument persuasive, at this point I don't have any way of improving it. I thought I could point to the UCR analogy, but you didn't agree at first, and I don't know what you think about my objections to your objection to the parallel, so I guess we'll just have to agree to disagree on this.

    Just a couple of brief questions: do you think the alien orcas (for example) should (epistemic "should") reach the conclusion that they got it right? Or do you think there are some other features that might make the debunking argument successful in their case?

    1. Richard,

      I've been thinking about how to further explain the argument, and I think that perhaps, the following will help:

      From M1 and M2, it follows that the probability that evolution will produce the right moral sense on a single run is extremely low (because that's one specific moral sense), so given some species S that evolved to make moral judgments, it's almost certain - before we consider further pieces of evidence - that S got a wrong moral sense.
      So, in order to avoid that conclusion in the case S=humans, our first-order moral views (or perhaps, some other pieces of evidence) would have to be taken as evidence that would overcome that assignment to the point of increasing the probability that we got the right moral sense from close to zero to close to one.
      But it's hard to see how our first-order moral views can do that.
      Apart from the fact that judgments like "pain is bad" are probably among those shared by nearly all species that got the wrong moral sense (given that that judgment is very likely to be produced, assuming the objection "the pain of agents who deserve pain is not bad", or similar ones, fails), it's difficult to see how you would introduce our first-order moral judgments - or other pieces of evidence - as evidence against that.

      I guess someone might start with the view that we got the right view, plus M2, in order to insist - even after factoring in M1 - that we got the right view? Or do you somehow consider them all at once?
      I don't see how that might work, though.

    2. I think that credences based solely on genetic considerations are very fragile, and easy to outweigh by bringing in substantive considerations. Suppose a demon rolls a million-sided die and will instill in you the false belief that grass is purple if the die lands on any number but one. I should initially have very high credence that you have a false belief about the colour of grass. But I then learn that you believe that grass is green. I now have a very high credence that you have a true belief about the colour of grass. The genetic info is basically worthless once I learn what your beliefs actually are -- I can then simply assess them, rather than relying on the much coarser-grained info about where they came from.
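      The update I have in mind can be sketched with Bayes' rule (a toy rendering; the exact likelihoods are my simplifying assumptions):

```python
from fractions import Fraction

# Genetic prior: the belief is true only if the million-sided die landed on 1.
p_true = Fraction(1, 1_000_000)
p_false = 1 - p_true

# Substantive evidence: the observed belief is "grass is green".
# Assumed likelihoods: a true belief about grass's colour is "green" for
# certain, while the demon's instilled false belief is never "green".
p_green_given_true = Fraction(1)
p_green_given_false = Fraction(0)

# Bayes' rule: posterior probability that the belief is true,
# given its content.
posterior = (p_green_given_true * p_true) / (
    p_green_given_true * p_true + p_green_given_false * p_false
)
print(posterior)  # 1 -- the belief's content screens off the genetic prior
```

      Once the likelihoods depend this sharply on the belief's content, the one-in-a-million genetic prior contributes nothing to the posterior.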

      In the same way, even if I initially expect that a being will have false normative beliefs (suppose I believe the vast majority of beings will evolve to believe that pain is good), if I learn that a being believes that pain is bad, I will now conclude that their belief about the valence of pain is correct, now that I know what their belief actually is. (And I take it to be a priori that pain is bad, even if many agents would evolve to be substantively irrational and so fail to realize this.)

      I think the alien orcas (or whatever) are substantively wrong (and irrational). They can rightly dismiss the debunking argument -- there is no such formal/structural problem with their beliefs. They just got it wrong on the substance, and so ended up believing things that they shouldn't. But this isn't the place to go into the details of my positive view. Check out my linked (in the main post) paper on 'Knowing What Matters' if you're interested.

    3. As in the case of your color/torturing babies questions, I don't think the demon case as you present it is relevantly analogous to the debunking argument.
      A more analogous case would be as follows:

      Let's say that I learn (i.e., I properly assign probability almost 1 to the hypothesis that) a demon rolls a million-sided fair die and will instill in me a false belief about the color of the grass if the die lands on any number but one, or lands on an edge. Moreover, in that case (i.e., any case except 1), the demon will also give me a color perception system that will make the false belief look true if I look at the grass, even under normal sunlight, artificial light, etc. In fact, the demon will also alter my memories as required, so that I will have no memories conflicting with the false belief about the color of the grass.

      Upon learning that, I believe the grass is green, and it looks green to me. Should I believe that the grass is green?
      Surely not. In fact, I should believe it's almost certainly not green. That's an epistemic "should", just to be clear. It would be epistemically irrational on my part to believe that the grass is green, or to fail to believe (after considering the matter) that grass is not green.
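      The contrast with the previous demon case can be put in the same Bayesian terms (a sketch under the scenario's stated assumptions: from the first-person view the demon engineers perfect coherence, so the evidence is equally likely either way):

```python
from fractions import Fraction

N = 10**6
p_no_tamper = Fraction(1, N)   # die lands 1: faculties untouched, belief true
p_tamper = 1 - p_no_tamper     # false belief + matching perception & memories

# From the FIRST-PERSON view, the evidence is the same either way:
# the belief seems true, perception agrees, memories agree.
p_evidence_given_no_tamper = Fraction(1)
p_evidence_given_tamper = Fraction(1)  # the demon engineers perfect coherence

p_evidence = (p_no_tamper * p_evidence_given_no_tamper
              + p_tamper * p_evidence_given_tamper)
p_belief_true = p_no_tamper * p_evidence_given_no_tamper / p_evidence
print(float(p_belief_true))  # 1e-06
```

Because the coherent appearance carries no evidential weight here, the posterior simply equals the genetic prior of one in a million - matching the verdict that one should believe the grass is almost certainly not green.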

      Okay, I picked a "false" belief just to match your purple example; in the evolutionary case, the issue is what percentage of our beliefs (after reflection, etc.) we could expect to be false, so the analogy breaks down at that point (i.e., we shouldn't just conclude that what we now take to be wrong is actually obligatory).
      But still, my demon scenario is a lot more similar to the debunking argument we're discussing than yours is: in your demon scenario, you still have no reason to doubt your own color perception and beliefs, because the demon isn't messing with your faculties, only with mine, and you are the one assessing whether my belief about the color of grass is true. So, of course, the origin of my belief is of little consequence to your assessment of the matter.

      On the other hand, in the evolutionary debunking argument - just like in my demon scenario - the demon (or its analogue) is messing with the moral sense/color sense of all the agents making the moral/color assessments (or allegedly making them; I think the alien orcas would make alien-orca-moral rather than moral assessments, but I'm assuming otherwise here).

      Thanks for the link; I don't have time at the moment, but I will read it and consider your arguments as soon as I can.

      Side note: briefly, and just to clarify a point about my scenarios and my view, my alien orcas do not believe that pain is good. Rather, they have a system of norms and evaluation that (even after reflection, with no epistemic irrationality when they start with their priors) leads to judgments very different from ours. Now, I think their system is moral-like, but not moral, and they're not wrong.

    4. Minor correction/clarification point: the question is: should I continue to believe that the grass is green? And the answer is "Surely not. I should come to believe that the grass is almost certainly not green", etc.

    5. I agree with your verdict in the grass case. The moral case is disanalogous. In perception, our colour sense is the evidential basis for our colour beliefs, so an unreliable faculty undercuts all the evidence we have. (I take it we have no a priori reason to expect grass to be green, and we're bracketing indirect knowledge sources such as testimony, etc.) Ethics, by contrast, is a priori. The evidential basis (such as it is) for believing that pain is bad (or whatever) is not my psychological sense that it is so, but rather, the self-evident proposition itself.

      A math case would be better. Suppose you know an evil demon will most likely make you have crazy irrational mathematical beliefs and associated poor reasoning. As it happens, you're lucky, and he doesn't. You subsequently prove the Pythagorean Theorem. Are you justified in believing the conclusion? You presumably think not. But I think so long as you really are mathematically competent, and successfully prove the theorem, then you're rational in believing it. I think you can even reasonably infer from the cogency of your mathematical reasoning to the conclusion that you were lucky and the demon didn't mess you up after all. And you can do this even though you know that your million deluded counterparts would all similarly (but, in their case, invalidly) infer from their unwittingly deluded mathematical scribbles that they were the lucky ones who were actually reasoning well. It matters, on my view, whether you really are reasoning well.
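      The externalist inference at the end can be given the same Bayesian shape (a sketch with hypothetical numbers; the key assumption, which the deluded counterparts cannot satisfy, is that a genuinely valid proof has zero likelihood under tampering):

```python
from fractions import Fraction

# Hypothetical odds: the demon tampers with your mathematical faculties
# unless a 1-in-a-million chance comes up.
p_lucky = Fraction(1, 10**6)
p_messed = 1 - p_lucky

# On this view, a genuinely valid proof is only available to the
# untampered reasoner; the deluded counterparts merely *seem* to prove.
p_valid_proof_given_lucky = Fraction(1)
p_valid_proof_given_messed = Fraction(0)

p_valid = (p_lucky * p_valid_proof_given_lucky
           + p_messed * p_valid_proof_given_messed)
p_lucky_given_valid = p_lucky * p_valid_proof_given_lucky / p_valid
print(p_lucky_given_valid)  # 1
```

Conditionalizing on actual validity (rather than merely apparent validity, which would be equally likely either way) is what licenses the inference that the demon didn't mess you up after all.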

    6. Okay, so it seems we agree some genetic debunking arguments work, but we disagree sometimes about when they do.

      Your theory (if I get it right; please let me know if I don't) is that genetic debunking arguments fail (perhaps, among other cases) when we're talking about a priori beliefs.

      My preliminary theory is that some genetic debunking arguments (but I'm not entirely sure; I'm still considering the matter) might risk being self-defeating by means of (at least implicitly) risking skepticism about/defeat of epistemic rationality. Again, I'm not at all sure on the matter, so I'll have to give it further thought.

      Now, some (not all) genetic debunking arguments involving math (or more precisely, logic) are among the few that seem to carry that epistemic risk. In particular, I'm not sure about your math demon. He might, though, be able to undermine the belief by undermining my short-term memory (including the memory that I proved the theorem). But then again, it may well be that undermining my short-term memory has the problem of undermining epistemic rationality, at least if the memory change is vast (in my color demon case, it seems to me the potential memory alterations weren't big enough to be a problem, while the modified evolutionary argument I offered doesn't involve any undermining of our memories and/or ability to do logic, Bayesian updates, etc., so it doesn't have that difficulty).

      I don't know if there is another category of debunking arguments (i.e., other than those that risk skepticism about/defeat of epistemic rationality) that would be suspect, or even defeated.

      In any case, I don't think genetic debunking arguments targeting moral realism (I strongly disagree with the terminology :-)), or generally those targeting a priori beliefs, have any such problems.

      I have to go now, but later I'll try to post one or more examples involving a priori beliefs.

      Side note: Briefly, and with regard to the assessment that pain is bad, my take on the matter after further thought is that pain is usually bad, but sometimes it's not, or at least, possibly some pain is not bad. It depends on whose pain it is. I know my position is unorthodox, but I would question the idea that "pain is bad" is self-evident. I'll defend my position on pain later if you'd like me to, though I'm not sure how pertinent it would be to the matter at hand.

    7. After further consideration, it turns out the strongest examples seem to involve morality, so you're likely to reject the examples I might provide. I'll keep trying to find examples that might be more appealing to you, but for now, I would like to ask why you think that genetic debunking arguments fail when they target a priori beliefs.

      As for my suspicion about genetic debunking arguments that risk (or more precisely, seem to lead to) skepticism about/defeat of epistemic rationality, that suspicion is just the application to genetic debunking arguments of a more general suspicion that any arguments (whether genetic debunking ones or not) that lead to skepticism about/defeat of epistemic rationality might be self-undermining. The reason is that such arguments appear to raise trouble for the epistemic justification of the belief that the arguments succeed in the first place.
      But that concern does not single out genetic debunking arguments, and it seems at least prima facie plausible.

      On the other hand, I haven't been able to find any good reason to suspect that genetic debunking arguments would fail in the case of a priori beliefs. Why would the fact that the belief is a priori play a role?
      After all, the following is true:

      1. Probabilistic assessments apply to a priori beliefs just as they apply to a posteriori beliefs.
      2. Genetic debunking arguments could sometimes succeed (case in point: the color example), so it's not the case that they have some fatal flaw inherent in their being genetic debunking arguments.
      3. Genetic debunking arguments against your version of moral realism aren't all self-defeating. For example, the modified argument I gave does not make any claims that would allow the debunking to be applied to the argument itself. In fact, the structure seems relevantly similar to the argument in the color case (it's not analogous with regard to the degree of skepticism or the sort of belief it would warrant; e.g., we shouldn't conclude that torturing babies for fun is morally good, according to the argument; but that's neither here nor there).

      So, why would a priori beliefs be invulnerable to that sort of genetic debunking argument? (By the way, false sets of moral beliefs can be coherent if we assume no analytic reductivism (and you reject such reductivism, if I'm not mistaken), unlike bad mathematical reasoning.)

    8. My view is that genetic debunking per se doesn't work. But in the colour case, genetic debunking happens to coincide with something else -- undercutting one's evidential basis -- which does work. This doesn't extend to a priori beliefs, because the evidential basis for a priori beliefs is nothing to do with us (in contrast to, say, sensory perception), so evidence about us and our faculties is irrelevant to which a priori beliefs are objectively justified.

    9. By the way, it's an interesting suggestion that one can avoid self-undermining by instead adopting a restricted version of Genetic Debunking which builds in that it does not apply to logical or epistemic beliefs (say). But such a restriction seems awfully ad hoc. If we agree that Genetic Debunking is a false principle when it comes to epistemic normative beliefs (such as your beliefs about when debunking is legitimate), it seems most coherent to regard it as a generally false principle, and give a different explanation (as I do) of what's going wrong in cases where our perceptual faculties or the like are undermined.

    10. There might be a misunderstanding here. I see I've not been clear. By (non-capitalized) "genetic debunking", I don't mean what you call "Genetic Debunking". Rather, I'm talking about the general procedure of debunking beliefs on the basis of their origin. As the color case exemplifies, that procedure works, if the premises are probable enough. I can give many more examples (see below). What seems ad hoc to me is to exclude a priori beliefs from genetic debunking.

      Also, I'm not suggesting making an ad hoc exclusion from genetic debunking arguments to remove cases in which epistemic rationality is called into question. Rather, my suspicion is that any arguments - whether based on genetic debunking or not - that call into question epistemic rationality might be in trouble (i.e., they might fail for that reason). So, I'm suggesting an application to genetic debunking of a general idea, not an ad hoc exception. Still, as I said, I'm not convinced that arguments that question epistemic rationality are generally in trouble. I suspect they are, but it's a tentative position. If it turns out such arguments aren't generally in trouble, then I wouldn't make an exception for genetic debunking ones and say that those are in trouble, either.

      Let's say that I learn (i.e., I properly assign probability almost 1 to the hypothesis that) a demon rolls a trillion-sided fair die and will instill in me a false belief about whether there is life on Enceladus if the die lands on any number but one, or lands on an edge. Moreover, in that case (i.e., any case except 1), the demon might or might not choose to also give me false memories in support of that belief, and remove conflicting ones, except that the demon will not alter any memories involving his interaction with me. But the demon will not affect at all my ability to do logic or Bayesian updates, or any other memories or faculties.

      It turns out that a second after learning that, and before considering the matter again, I happen to have the belief that there is life on Enceladus. In fact, when I think about the matter (a few seconds after the encounter with the demon), I distinctly remember that a probe was sent to Enceladus, took samples from the jets coming from the surface, and discovered microscopic life; I also remember distinctly believing that there is indeed life on Enceladus, on that basis.
      Before I go online to check the matter, should I believe, or continue to believe, that there is life on Enceladus?
      The answer is "clearly not".
      This sort of genetic debunking seems to work generally - it's not an ad hoc exception, but the usual case; my concern about cases in which epistemic rationality is compromised is, as I mentioned, the application of a more general concern.

  8. I've been trying to find good examples of hypothetical genetic debunking of a priori beliefs not involving moral beliefs. Here's a first example:

    E1. Alice is excellent at logic and Bayesian updating. She knows that.
    Also, Alice properly gives probability as close to 1 as you want to the hypothesis that a demon (say, Azazel) just rolled a (((10^1000000000000000000000000000000000!!!!!!!!!!!!!!!!!!!!!!!!!!!!)^1000000000000000000000000000!!!!!!!!!!!!!!!!!!!!!!!!)^100000000000000000000000!!!!!!!!!!!!!!!!!!!)-sided fair die, and:

    i. Azazel did nothing if the die landed 1; otherwise:

    ii. The following happened:

    A. Azazel modified the [semantic, metaphysical, or whatever those intuitions happen to be] intuitions of Alice as needed, so that when contemplating - under the assumption that water is H2O on Earth - the case of XYZ in the Twin Earth scenario, Alice's intuitions will yield the wrong verdict. Additionally, Azazel gave Alice a false belief about whether XYZ on Twin Earth is water.
    B. Azazel removed memories or previous beliefs if needed, in order to modify any previous assessments that conflicted with Alice's new intuition and belief, with the exception that Azazel did not remove or alter any memories of Alice's interactions with Azazel and her probabilistic assessment.
    C. Azazel never added any fake memories. He only - if needed - removed memories.
    D. Azazel never added any beliefs other than the one about water/XYZ, etc., or those related to it. In short, no things like ghosts or God or teapots, or any false moral beliefs, etc.
    E. Azazel did not affect Alice's abilities to do logic and Bayesian updating.

    Alice contemplates the Twin Earth scenario, and her intuition - and belief - is that XYZ is water. Before she talks to anyone else, is she epistemically rational in keeping her belief that XYZ is water, even after thinking about Azazel's potential involvement? Or [epistemically] should she change her belief, on account of the potential event that Azazel's die did not land 1?

    I claim that Alice [epistemically] should no longer believe that XYZ on Twin Earth is water. Alice would be epistemically irrational if she were to continue to believe that XYZ on Twin Earth is water.
    Now, assessments as to whether XYZ on Twin Earth is water under the assumption that water is H2O on Earth are a priori.

    So, at least if the genetic assumptions hold (in this case, her assignment of a probability as close to 1 as you want to the hypothesis in question), genetic debunking arguments can be successful against some beliefs held on a priori considerations.

    Granted, you might question M1, or M1', etc., or argue that even if probable, they're not probable enough (surely, they're not as probable as the proposition about Azazel in the scenario), and that would lead to another kind of discussion. But the genetic debunking argument would no longer be rejected because it's a genetic debunking argument.

    Alternatively, you might reject my claim that Alice [epistemically] should no longer believe that XYZ on Twin Earth is water, or instead come up with a different objection.

    In any case, I look forward to your assessment about E1.

    P.S.: I just read your paper. Very interesting, as usual, and I agree with some of your points, but - as you surely reckoned - we don't agree on all of them. :-)

    1. I'm inclined towards a kind of conventionalism about semantic facts, so I don't think it's an objective a priori truth that XYZ is not water. If Alice is inclined to apply 'water' to XYZ (while knowing that H2O is the watery stuff in our world) then she just has a subtly different concept from those of us who treat it as a rigid designator. So I'm not convinced it really makes sense to imagine an evil demon giving her false semantic beliefs. (Unless we're talking about community meaning, which is obviously not a priori.)

      But more generally, I think there are interesting questions about "non-ideal rationality" when we imagine (e.g. math) cases where there is a sort of higher-order evidence that we're overwhelmingly likely to be wrong, in conflict with the first order evidence of the self-evident (but perhaps highly non-obvious) a priori proposition itself. There does seem a sense in which we want to say it's reasonable for the person to doubt themselves, even if the ideally rational answer is in fact the one that they've reached. So there's more to be said here, for sure. (You might be interested in previous discussions surrounding normative risk -- e.g. Carl's Doomsday Device -- which raise related issues.)

    2. Thanks for the links. I can't read them now, but I'll try to read them later.

      That aside, the distinction I was addressing (which I took to be the one you were making, based on my reading of your previous post) was between a priori and non a priori.
      The introduction of the "objective" qualifier is problematic for me, since I don't agree with the orthodox claim that that means "mind-independent", and I don't find the mind-dependent/mind-independent distinction (at least, as usually construed) to be important in general. But I know that that's unorthodox and would take too long to argue in a thread, so instead, I will try to address what I tentatively think you would consider an objective a priori belief.

      So, the belief is about UCR (side note: I don't agree with the name "uncompromising", since it suggests my color realism is "compromising"; the same goes for UNR, etc.; I'm using that terminology just for the sake of the argument, because it matches/mirrors Street's).

      For the reasons I mentioned earlier (among others), I don't think the questions (i) and (ii) you asked earlier allow us to distinguish between UCR and UNR in terms of justification, or to assess whether UCR is justified. However, I will assume for the sake of the argument that (ii) works as you intend it to (at least, if I understand your intention correctly), and on that basis give an example of genetic debunking.

      So, here's the example:

      E2. Alice is excellent at logic and Bayesian updating. She knows that.
      Also, Alice properly gives probability as close to 1 as you want to the hypothesis that the demon Azazel just rolled a (((10^1000000000000000000000000000000000!!!!!!!!!!!!!!!!!!!!!!!!!!!!)^1000000000000000000000000000!!!!!!!!!!!!!!!!!!!!!!!!)^100000000000000000000000!!!!!!!!!!!!!!!!!!!)-sided fair die, and:

      i. Azazel did nothing if the die landed 1; otherwise:

      ii. The following happened:

      A. Azazel modified the [semantic, metaphysical, or whatever those intuitions happen to be] intuitions of Alice as needed, so that when contemplating the question "(ii) Could an object be red even though it seemed green to everyone (even in ideal viewing conditions)?", she will get the wrong answer. Additionally, Azazel gave Alice a false belief about whether UCR is true.
      B. Azazel removed memories or previous beliefs if needed, in order to modify any previous assessments that conflicted with Alice's new intuition and belief, with the exception that Azazel did not remove or alter any memories of Alice's interactions with Azazel and her probabilistic assessment.
      C. Azazel never added any fake memories. He only - if needed - removed memories.
      D. Azazel never added any beliefs other than the one about UCR, etc., or those related to it. In short, no things like ghosts or God or teapots, or any false moral beliefs, etc.
      E. Azazel did not affect Alice's abilities to do logic and Bayesian updating.

      Alice contemplates the scenario in which an object seems green to everyone under the conditions of question (ii), and her intuition - and belief - is that the answer to (ii) is "obviously yes". She also believes that UCR is true. Before she talks to anyone else (or checks on line, etc.), is she epistemically rational in keeping her belief that UCR is true, even after thinking about Azazel's potential involvement? Or [epistemically] should she abandon her belief, on account of the potential event that Azazel's die did not land 1?

      I claim that Alice would be epistemically irrational if she were to continue to believe that UCR is true, but please let me know if you don't think whether UCR is true is an objective a priori belief, or if you disagree with my assessment about Alice's epistemic rationality.

    3. I think Alice's belief is unjustified, but for reasons independent of Azazel's potential involvement, as I take her view here to be a priori false. On the other hand, if we imagine Beth, who has the opposite (and, I take it, correct) intuitions about the case, then I think she'd be rational in holding to her belief that UCR is false, again regardless of Azazel's potential involvement. Indeed, she could reason from the soundness of her intuitions to the conclusion that Azazel's die -- however miraculously -- landed 1.

    4. (But I agree this is a pretty wild-seeming commitment I've taken on, and that your example here does advance an intuitively strong case for the possibility of genetic debunking of a priori beliefs.)

    5. I claim that Alice would be epistemically irrational if she were to continue to believe that UCR is true

      I've followed with interest, but I'm still puzzled. Epistemic irrationality suggests a strong modal operator (an epistemic ought-not); the scenario as depicted, however, has nothing in its description that would yield a strong modality in the conclusion, so it seems there still has to be some implicit epistemic assumption doing all of the work. And surely what's necessary with respect to determining whether one can get around the self-defeat of genetic debunking is to know exactly what that epistemic assumption is, rather than hiding it in intuitions.

    6. The objective of my scenarios is to trigger clear intuitive epistemic assessments. I don't claim to know what principles underlie them.
      In fact, in this particular scenario, Richard also agrees that Alice's belief is unjustified (i.e., she's being epistemically irrational), even if he disagrees with me on other grounds.

      Do you believe that her belief (in E2) is justified?
      If so, I guess my scenario failed to get the same intuitive assessment from you. Maybe the math example I'm about to post will.

    7. (1) As we've already seen quite clearly even in this thread, even having the same verdict with regard to a scenario is very different from having the same intuitive assessment with regard to it. But even if the latter occurs, without knowing the relevant principle, it's unclear how to determine whether the examples in question actually do what they are supposed to do, namely, establish that something like Street's lottery objection, suitably modified, can still succeed and avoids the problem of self-defeat.

      The more specific issue related to this is that I still don't see how you are getting beyond just saying (if we go back to the lucky ticket metaphor) 'The lucky ticket would definitely have to be lucky'. We can attenuate the probabilities and complicate cognitive processes all we please, but in order to get a claim that we should change our beliefs, or about epistemic rationality or irrationality, it seems we have to have some plausible translation principle to get us from a mere description of probabilities (which is already granted for the sake of argument here) to a kind of modality (which is needed to turn the description into an actual challenge) that is so strong that it is analogous to impossibility. That just seems to me to be a very big step to concede without knowing how it is being made.

      (2) I am not the right person to go to for intuitive assessments in these cases; I am skeptical of intuitive assessment in any case like that of Alice -- I don't think anybody actually has reliable intuitive assessments of such extreme kinds of scenarios. If I did just go with my own first reaction to the scenario, it seems to me that a case of such extensive mental tampering is one in which (epistemic) rationality and irrationality simply cease to be well defined.

      (3) But in fairness I should also note that I have a tendency to think of epistemic rationality and irrationality as being in the first place something attaching to inquiry, and only derivatively attaching to beliefs in the context of inquiry, so my intuitions are laboring under an additional disadvantage in trying to see your point that most other people wouldn't have: talking about what Alice should do about her beliefs independently of considerations of prior and further inquiry is not something I would normally consider legitimate, so in trying to think through the case, I have to grant for the sake of argument suppositions I would not normally grant.

    8. (1) I'm not sure what you mean by "As we've already seen quite clearly even in this thread, even having the same verdict with regard to a scenario is very different from having the same intuitive assessment with regard to it". Do you have an example, in this thread, of having the same intuitive assessment but not the same verdict?
      In any case, usually, one does not know general principles on which one bases probabilistic assessments and updates.
      As I see it, the argument I gave is good enough. Richard disagrees, and then we had a discussion on whether debunking arguments ever work, ever work against a priori beliefs, a priori true beliefs, etc.

      As for self-defeat, I haven't seen any arguments indicating that in the case of my argument against some forms of realism, genetic debunking is self-defeating (the argument is about some forms of moral realism, not about normative realism in general, in a broad sense of "normative"). Also, genetic debunking arguments aren't as a rule self-defeating, so without any particular reason to think this one might be, and given that it's intuitively clear to me that it works, I don't see the force of the objection.

      If you think there is a way of applying that argument to itself, I'd like to ask what it is.

      I don't understand your claim that there is a big step from probability to modality. In this context, an argument about the probability of X is an argument about what probability one should assign to X (or Alice should assign, or someone should assign). Assigning a wrong probability is an instance of epistemic irrationality. I see no jump.

      (2) I readily grant that avoiding epistemic skepticism in scenarios involving tampering with the brain is a difficulty. That was not a feature of the original debunking arguments. Richard introduced an example that contained that feature (the one about the greenness of grass), and the discussion continued with similar scenarios, in which I was trying to convince Richard, first, that at least one genetic debunking argument worked, and then that at least one genetic debunking argument against a priori beliefs worked, etc. As I mentioned in my replies to Richard, I've been trying to limit the risk of general epistemic skepticism; I'm not sure how successful I've been at that, but the problem here is not the genetic debunking, but features like tampering with memories and already-formed beliefs.

      (3) I'm not entirely sure I follow what the difficulty is in your assessment. Perhaps, if you give me some more info about what you think of the different scenarios and types of arguments, I can understand your objection better.

      For example:

      a. Do you think that genetic debunking arguments always fail?
      b. Do you think all probabilistic arguments need to state some general principle? If not, which ones?
      c. Do you think the alien orcas, alien elephants, etc., would all have the wrong moral system (as opposed to moral-like systems, with no error)?
      d. Do you think they could [epistemically] rationally reject the genetic debunking argument too?

    9. Do you have an example of having the same intuitive assessment but not verdict, in this thread?

      The reverse: same verdict, different intuitive assessments, in the greenness-of-grass scenario: while both you and Richard were coming to the same conclusion about it, it took several comments to make clear that the reasons for coming to the same conclusion were not, in fact, the same intuitive assessment but an accidental overlap in verdict arising from different assessments.

      In any case, usually, one does not know general principles on which one bases probabilistic assessments and updates.

      I don't see how this is relevant; we're not talking about everyday probability assessment, but specifically about the cogency of arguments concerned with the rationality and irrationality of beliefs.

      Assigning a wrong probability is an instance of epistemic irrationality.

      Well, this would certainly be a translation principle for getting from probabilities to the relevant modalities, but I'm not sure I understand it. As far as I can see, this is obviously false even in very ordinary cases. If the assigned probability of a die landing on 1 is exactly 1/6, but as it happens the die is slightly imperfect and the right probability is 0.173, there's no way to tell from the probabilities themselves whether that is irrational; the probabilities on their own do not even tell us what grain of precision is required, much less allow us to distinguish (as would be necessary) the irrational from the rational-but-slightly-less-so. If we require flawless precision on the real number line to count things as irrational, the distinction between rational and irrational becomes completely irrelevant to anything, since it is practically impossible for anyone to be this precisely accurate, and so you might as well just say we are always irrational about everything and call it a day; if we are talking about intervals, something must set the size of the intervals. It also seems false to suggest that epistemic rationality and irrationality don't concern how difficult the probability assignment is (what does one have to do to get a correct assignment at all). Etc., etc. So there's a lot in this translation from probabilities to modalities that I am finding unclear.

      On your questions:

      (a) One cannot know whether genetic debunking arguments always fail without knowing what principles they use. But surely one of the things being discussed here is whether all such principles exhibit self-defeat (or perhaps, that they all are either implausible or exhibit self-defeat), or whether some escape it. I don't see how one can do this without looking at the principles themselves.

      (b) I'm not sure what you mean here. The issue with modality is a logical one: you keep making strong modal claims. Strong modal claims can't pop out of nowhere. There must be implicit assumptions. The genetic debunking principle in Street's argument is itself a translation principle to get from causes and expectations to a strong modality, so whether a given argument can reasonably be considered to avoid the problem requires (e.g.) that it be able to get strong modalities in a non-self-defeating way.

      (c) I think this is an erroneous way of assessing moral systems. What about a moral system that is right but only under certain conditions, or one that is right but only to a certain degree of approximation, or one that is right in every moral domain except a small one? Why is error-free rather than mostly error-free the standard? But this seems analogous to the wrong-probabilities/irrationality issue: if we're being this finicky, it's like saying that all physics is irrational because all measurements, and thus all physical conclusions, have a margin of error.

      (d) As I noted before, I don't think it is possible to assess epistemic rationality of belief except in light of inquiry -- what kinds of inquiry it presupposes, what kinds of inquiry it makes available, and so forth.

    10. The reverse: same verdict, different intuitive assessments, in the greenness of grass scenario
      Fair enough, but then I don't see the problem, given that I aim at triggering the same intuition. A difficulty would be same intuitions, different verdict. It might still happen.

      I don't see how this is relevant; we're not talking about everyday probability assessment, but specifically about the cogency of arguments concerned with the rationality and irrationality of beliefs.
      I'm not talking about everyday probability arguments only, but philosophical arguments as well. I'm not sure why genetic debunking arguments of some specific beliefs would be an exception.

      Well, this would certainly be a translation principle for getting from probabilities to the relevant modalities, but I'm not sure I understand it. As far as I can see, this is obviously false even in very ordinary cases. If the assigned probability of a die landing on 1 is exactly 1/6, but as it happens the die is slightly imperfect and the right probability is 0.173, there's no way to tell from the probabilities themselves whether that is irrational; the probabilities on their own do not even tell us what grain of precision is required, much less allow us to distinguish (as would be necessary) the irrational from the rational-but-slightly-less-so.
      I don't understand your objection. That assigning the wrong probability is an instance of epistemic irrationality (maybe the only kind, but I want to leave the door open for other options) seems to be a conceptual truth. Whether assigning 1/6 is correct, or 0.173, depends on a person's information about the die. You seem to misunderstand my probabilistic arguments (and Street's) because you're apparently using "probable" in some non-epistemic sense. The arguments are about epistemic probability.

      (a) One cannot know whether genetic debunking arguments always fail without knowing what principles they use. But surely one of the things being discussed here is whether all such principles exhibit self-defeat (or perhaps, that they all are either implausible or exhibit self-defeat), or whether some escape it. I don't see how one can do this without looking at the principles themselves.
      That seems to place an undue burden on the debunker. In all debunking arguments (genetic or not), someone might say "but maybe they're self-defeating, so you need to show the principles to see that they're not", and that would take out, it seems, most philosophical debunking arguments. The origin seems not to be relevant.

      (b) I'm not sure what you mean here. The issue with modality is a logical one: you keep making strong modal claims.
      So you say. I'm not sure where you get that from, but I suspect it might be because you've misunderstood what is meant by "probable" in this context (see above).

      Why is error-free rather than mostly error-free the standard?
      Because we were apparently talking about non-fixable errors earlier. But fair enough, let me ask you then: do you think the alien orcas have a system with non-fixable errors whenever after reflection they get a result other than ours?

      (d) As I noted before, I don't think it is possible to assess epistemic rationality of belief except in light of inquiry -- what kinds of inquiry it presupposes, what kinds of inquiry it makes available, and so forth.
      So, you don't think it's possible to assess whether the alien orcas would be epistemically rational?

    11. I don't see the problem, given that I aim at triggering the same intuition.

      But how do you know that it's the same? The verdict doesn't tell you. That requires looking at principles.

      That assigning the wrong probability is an instance of epistemic irrationality...seems to be a conceptual truth.

      Suppose 'epistemic irrationality' does just mean 'assigning the wrong probability'. What kind of wrongness? Does it have to be perfect real-number precision? Then it tells us nothing important. Is it to a looser level of precision? Then something else sets the level.

      Epistemic probabilities alone don't yield strong epistemic modalities; this arises from the fact that they aren't such modalities themselves. Trying to get from one to the other without an adequate translation principle is exactly like saying (when dealing with ontological probabilities) that something doesn't exist just because it's unlikely. In addition: (1) Epistemic probability is a sliding scale, whereas 'epistemically irrational' is a nominal measure, classifying things into groups. You cannot get directly from a sliding scale measure to a nominal measure without some threshold on the sliding scale for the cut off between the groups, which cannot come from the sliding scale alone. It would be like saying that you can look at a thermometer and, without any inference or additional knowledge, see that a material is overheating. (2) Epistemic probabilities only imply weak modalities, not strong ones. So how do you use divergences of probabilities to get a strong modality? Usually-better-nots, maybe; but how do you get an ought-not? (3) That probabilities diverge from modalities is a standard for other cases; e.g., the probability of picking any real number at random is zero, but each one is still a possibility. Why would this not be true here?

      In all debunking arguments (genetic or not), someone might say "but maybe they're self-defeating, so you need to show the principles to see that they're not", and that would take out, it seems, most philosophical debunking arguments.

      But it's the lowest bar for any kind of debunking: make sure your debunking doesn't debunk itself. If they can't meet it, they need to be swept away.

      On alien orcas, I'm not an expert on them, so I wouldn't know. I don't have what is required to tell if my intuitive assessments are likely to be reliable.

      But I don't think you need to accommodate my not-very-common views on the inquiry-relativity of belief assessment. I only raised it to note that in trying to think through these 'intuitive' cases I am laboring under special difficulties that most people don't, so can easily miss something.

    12. Brandon,


      But how do you know that it's the same? The verdict doesn't tell you. That requires looking at principles.

      I aim at triggering the intuitive assessment that the genetic debunking argument in the hypothetical scenario that I present is successful. If I trigger that assessment, the example achieved its goal. On the other hand, if the reply is that the person is being epistemically irrational or whatever but the genetic debunking argument has not succeeded, the example did not achieve its goal.

      So, the intuition that I intend to trigger (which would be the same intuition) is as follows: in the hypothetical scenario in question, the genetic debunking argument (rather than some other stuff) succeeds.

      On that note, here's an example:

      How about the following scenario? (I do not believe it would obtain, by the way, but it's a hypothetical scenario to test a sort of genetic debunking.)

      In the future, we have an AI that has so far been very helpful (as intended), has made huge progress in medicine, math, physics, and pretty much everything. It's also made huge progress in understanding the psychology of humans and many non-human animals, successfully predicting behavior (of humans and non-humans) far more frequently than it was ever possible before, and moreover, assigning probabilities that seem to match those frequencies. Similarly, it has made a lot of progress in economics (much better models), and also philosophy, finding hidden logical errors in several philosophical views, etc.

      Now, after studying working human brains, doing experiments with many volunteers, etc., the AI says that there is no single or majority human moral system, and that even under ideal reflection, there will be fundamental moral disagreements all over the place. In other words, the AI says that the disagreements will not be rare, but even after reflection, and in many of the subjects that people actually debate in ethics, there will be huge variance in the assessments of different humans. In fact, the AI says that the disagreement will persist in most cases, and it will be deep (i.e., not just whether something is slightly less or more immoral).

      Let's say the AI says that it can find no other means of ascertaining moral truth. It reckons that probably, either a substantive moral error theory is true, or the human moral system is inadequate as a guide to moral truth, at least in many common cases.

      Would you not say that in such a scenario, people [epistemically] should reduce the probability they assign to at least many of their moral stances, and indeed increase the probability they assign to a moral error theory? (assuming they're not error theorists already).

      Suppose 'epistemic irrationality' does just mean 'assigning the wrong probability'. What kind of wrongness? Does it have to be perfect real-number precision? Then it tells us nothing important. Is it to a looser level of precision? Then something else sets the level.
      I don't think it has to be so precise, no. For example, it may be that one [epistemically] should say it's probable, but says it's improbable, etc.

      I don't understand the rest of your objection here, but whatever the case, I don't see how you would go from that to rejecting an intuition-based genetic debunking argument.

      I'll address the rest of your points below.


    13. Epistemic probabilities alone don't yield strong epistemic modalities; this arises from the fact that they aren't such modalities themselves. Trying to get from one to the other without an adequate translation principle is exactly like saying (when dealing with ontological probabilities) that something doesn't exist just because it's unlikely. In addition: (1) Epistemic probability is a sliding scale, whereas 'epistemically irrational' is a nominal measure, classifying things into groups. You cannot get directly from a sliding scale measure to a nominal measure without some threshold on the sliding scale for the cut off between the groups, which cannot come from the sliding scale alone. It would be like saying that you can look at a thermometer and, without any inference or additional knowledge, see that a material is overheating. (2) Epistemic probabilities only imply weak modalities, not strong ones. So how do you use divergences of probabilities to get a strong modality? Usually-better-nots, maybe; but how do you get an ought-not? (3) That probabilities diverge from modalities is a standard for other cases; e.g., the probability of picking any real number at random is zero, but each one is still a possibility. Why would this not be true here?


      (1) I don't see the problem with the scale. There are different degrees of epistemic irrationality too. For example, let's say that Alice should assign low probability to X, and so should Bob, but Alice holds that X is more probable than not, though not extremely probable, whereas Bob holds that it's extremely probable. Then, Bob is being epistemically irrational on the matter to a higher degree than Alice.

      Is there a specific cut-off point, between epistemically irrational and epistemically not irrational, that can be established with an arbitrary degree of accuracy?
      I'm inclined to say probably not, but either way, what seems clear to me is that whatever the answer is, the same answer would hold on the scale of epistemic probabilistic assessments (which does not entail precise assignments in all cases, just at the threshold, if there is a precise threshold).

      I don't see how this is a problem for a classification into epistemically rational vs. irrational. One may classify things as fish vs. not-fish, regardless of whether the category "fish" is fuzzy (which I think it is, like nearly all other categories we use). We can still properly say that a lion is not a fish, but a shark is.

      (2) I'm not sure what you mean by "strong epistemic modalities", but if Alice [epistemically] should assign low probability to X, but she assigns high probability, she's being epistemically irrational - that seems to be a conceptual truth.

      (3) Whether the probability of picking a random real is zero depends on your distribution. But I don't see what you mean by "diverge from modalities".

      That aside, I'm afraid that I'm at this point as puzzled by your arguments as you said you were puzzled by mine. Perhaps, our respective ways of looking at the matter are just so different that we end up talking past each other.


      But I don't think you need to accommodate my not-very-common views on the inquiry-relativity of belief assessment. I only raised it to note that in trying to think through these 'intuitive' cases I am laboring under special difficulties that most people don't, so can easily miss something.

      Fair enough, and I admit I'm very probably missing a number of your points, because I don't know enough about your views.
      My views are also not very common on a number of topics, so I try to look at the matter from the perspective of other people's views in order to communicate successfully, but I'm afraid I don't seem to know enough about your views in order to do that.

    14. Just to prevent a potential objection, let's further stipulate that in (1), Bob and Alice have the same amount of info relevant to ascertaining whether X obtains.

    15. I think distinct arguments (one about intuition and the other about irrationality) are getting mixed together, to the confusion of both, so I'll put them in distinct comments.

      (I) Intuitive Assessment in Debunking Arguments

      So my basic objection to what you call "intuition-based genetic debunking arguments" is quite basic: there is no such thing. There are only genetic debunking arguments in which we know the principles and genetic debunking arguments in which we don't. In both cases it is the principles that do all the work; in neither case is the argument itself based on intuitive assessment. In the case when we don't know the principles of the debunking, the only value whatsoever to any intuitive assessment is if it helps us to get closer to seeing what principles might be used in the argument. And only analysis of the principles can actually tell us if the debunking argument avoids the kinds of problems in view here -- like self-defeat, which is entirely a property of the principles used to get the argument to its conclusion.

      The problem is summed up actually quite well in your statement of what you are trying to do:

      I aim at triggering the intuitive assessment that the genetic debunking argument in the hypothetical scenario that I present is successful. If I trigger that assessment, the example achieved its goal.

      In other words, you are not doing anything that can possibly establish whether the genetic debunking argument is actually successful; you are merely seeing whether you can state a genetic debunking argument in such a way that it looks successful to someone, even if it's not. Intuitive assessments can be reliable, but they are not always reliable, and there are plenty of situations where they are not reliable. So that an argument seems successful tells us nothing about whether it is. Arguments are successful or not entirely on the basis of objective factors like logical structure and truth of premises. None of these things depend on the kinds of intuitive assessments you are cultivating. And intuitive assessments can't even be taken as signs of these things unless we have independent reason to think they are reliable.

      On the AI example. How does one actually assess such a case without merely guessing? In reality none of the science fiction dressing does anything but try to make the probabilities more extreme; and they don't tell us anything on their own. It's clear from the whole structure of argument that what's at question is a principle of authority:

      If an expert in a lot of fields relevant to moral philosophy advocates a moral error theory, we should always revise the probability of moral error theory upward and the probability of some of our moral claims downward.

      If this is false, then either (1) it is false because of the 'always' or (2) it is simply false. If it is simply false, we have no reason to think we should respond to the AI the way you suggest -- it's just a super-expert. If it is false because of the 'always', then we need more information to make sure that this kind of scenario doesn't happen to be one of the exceptions. If it's true, then we should indeed do what you suggest.

      But the question, of course, that needs to be addressed, and that can only be addressed by looking at the underlying principles is the same: Why can't someone, recognizing that his being right and the expert being wrong is unlikely, nonetheless, on the basis of content rather than probability, reasonably hold that this is, in fact, one of the unlikely cases? Granting that you would have to be very lucky, why does the mere fact that you would have to be very lucky pose a challenge to the claim that you actually happen to have been very lucky?

    16. II. Epistemic Irrationality

      The problem with the (distinct) issue of epistemic irrationality is that you seem to want it to do different things, and it is not at all clear to me how you get it to do those different things, apparently because you think it's just obvious that it can. You say:

      I'm not sure what you mean by "strong epistemic modalities", but if Alice [epistemically] should assign low probability to X, but she assigns high probability, she's being epistemically irrational - that seems to be a conceptual truth.

      So, first on strong epistemic modalities. Strong modalities are modalities represented in modal logic by a Box operator (or the corresponding Box-Not): Must, ought, should, ought not, etc. Strong epistemic modalities, of course, are just Box and Box-Not in an epistemic context. You repeatedly characterize 'epistemic irrationality' in terms that require strong modality.

      But you also characterize the irrationality as merely a fact about probabilities. The problem is that probabilities are not strong modalities of any kind. When they are related to modalities, they are intermediate between strong and weak modalities. Box will imply a probability of 1, if we're using a reasonable system; Box-Not will imply a probability of 0. The reverse inferences are not valid: things of probability 1 need not be Box and things of probability 0 need not be Box-Not. We see this with necessity and possibility. Not everything with a probability of 1 is necessary, and not everything with a probability of 0 is impossible. (You are right, of course, that I dropped the necessary qualification about distribution; but it is nonetheless the case that 0 probability is not equivalent to impossible.)
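
      The measure-zero point here can be made precise with a standard textbook fact (an illustrative aside added here, not part of the original exchange): for a continuous distribution, every individual outcome has probability zero yet remains possible. For X uniformly distributed on [0, 1] and any interior point x:

```latex
P(X = x) \;=\; \lim_{\epsilon \to 0^{+}} P\!\left(x - \epsilon \le X \le x + \epsilon\right)
         \;=\; \lim_{\epsilon \to 0^{+}} 2\epsilon \;=\; 0
```

      Yet every x in [0, 1] is a possible value of X, so "probability 0" does not entail "impossible"; this is one concrete case where probability and modality come apart.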

      You cannot get strong modalities except from strong modalities, or by some additional principles that allow you to translate other claims into claims that have them. So if probabilities are not strong modalities, you cannot move from one to the other without an additional principle.

      We see the problem with your 'conceptual truth', which does not seem to be any such thing at all. If we take a case in which someone assigns a 'low probability' to a claim that is 'high probability', how do we get such a strong claim out of it? High and low can't help any. Given how probabilities are ordered, you can turn any difference of probability into a high-low distinction, just by putting our line of reference between them. High and low are also not intrinsic features of probabilities. 0.000001 is low compared to 0.999999. It is high compared to 0.00000000000001. So to put any emphasis would require bringing in an additional principle about what we consider high and low -- which means the difference of probabilities alone doesn't tell us anything about epistemic irrationality.

      You also have characterized epistemic irrationality as being on a sliding scale. But you also say there's probably no clear cut-off point, which means that the sliding scale goes increasingly in an epistemically irrational direction one way and in an increasingly epistemically rational direction the other way. This makes it reflect the probabilities, but directly implies that your constant use of 'epistemically irrational' is arbitrary: since the scale goes both ways, everywhere you say 'epistemically irrational' we could just as easily say 'epistemically rational'. But if that's the case, how are you getting the 'should not' out of it -- every assessment short of the extremes is to some extent irrational but also to some extent rational, so we don't seem to get any 'should' or 'should not'. There's no black and white, only darker and lighter shades of gray. If, on the other hand, that is not how the scale is supposed to work, how are we supposed to know that purely on the basis of the difference in the probabilities, as we would if it were a conceptual truth?

    17. Brandon, probability is epistemic probability in this context. If your boxes are not modeling epistemic probability properly, then the problem is for the model.
      Let me put it in a different way: let's say that there are events that have probability 0 but are not impossible, and events that are impossible but have probability more than zero.
      How would you derive a contradiction from that and anything I said (not your models)?

      Also, whether a probability counts as "high" or "low" depends on context, but that's not the point.
      For example, X is the hypothesis that two balanced dice in a casino will land 5 on the same roll. They epistemically should assign about 1/36 (well, a bit lower, since a die might land on an edge). There is no further info about the dice. Alice assigns a bit over 0.5 (not necessarily a specific number), and Bob assigns over 35/36 (but not necessarily a specific number; I'm using the numbers only because you object to "low" and "high" for some reason I don't understand). Then Bob is being more epistemically irrational than Alice.
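
      As a purely arithmetical aside (my own illustration, not part of the original comment): the 1/36 figure for two balanced dice both showing 5 follows from treating the two rolls as independent, and a quick simulation agrees with the exact calculation.

```python
from fractions import Fraction
import random

# Exact probability: the two rolls are independent, so multiply.
p_one_die = Fraction(1, 6)       # a single fair die shows 5
p_both = p_one_die * p_one_die   # both dice show 5 on the same roll
assert p_both == Fraction(1, 36)

# Monte Carlo sanity check of the same figure.
random.seed(0)
trials = 200_000
hits = sum(
    1 for _ in range(trials)
    if random.randint(1, 6) == 5 and random.randint(1, 6) == 5
)
print(float(p_both), hits / trials)  # the estimate should land near 1/36
```

      (This ignores the "landing on an edge" complication in the comment, which would shave the probability slightly below 1/36.)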


      So to put any emphasis would require bringing in an additional principle about what we consider high and low -- which means the difference of probabilities alone doesn't tell us anything about epistemic irrationality.

      No, we don't. To say that the probability is low, in the context of epistemic probability, is conceptually equivalent to saying that the agent we're talking about (real or not, human or not) epistemically should assign a low probability, etc.


      You also have characterized epistemic irrationality as being on a sliding scale. But you also say there's probably no clear cut-off point, which means that the sliding scale goes increasingly in an epistemically irrational direction one way and in an increasingly epistemically rational direction the other way. This makes it reflect the probabilities, but directly implies that your constant use of 'epistemically irrational' is arbitrary: since the scale goes both ways, everywhere you say 'epistemically irrational' we could just as easily say 'epistemically rational'.

      I'm afraid at this point I just don't understand your objection. Could you please derive a contradiction from anything I said?


    18. So my basic objection to what you call "intuition-based genetic debunking arguments" is quite basic: there is no such thing. There are only genetic debunking arguments in which we know the principles and genetic debunking arguments in which we don't. In both cases it is the principles that do all the work; in neither case is the argument itself based on intuitive assessment.

      Actually, we can only know principles by intuition, and we test proposed principles by intuitively assessing hypothetical scenarios.


      In other words, you are not doing anything that can possibly establish whether the genetic debunking argument is actually successful; you are merely seeing whether you can state a genetic debunking argument in such a way that it looks successful to someone, even if it's not.

      No, that's not at all "in other words"; that's your claim about what I'm doing. I - strongly - disagree with your assessment of the usefulness of arguments based on intuition. In fact, in nearly all cases, we make epistemic probabilistic assessments intuitively. Even when it comes to, say, establishing beyond a reasonable doubt that someone is guilty, we do it on the basis of intuitive probabilistic assessments.


      Arguments are successful or not entirely on the basis of objective factors like logical structure and truth of premises. None of these things depend on the kinds of intuitive assessments you are cultivating. And intuitive assessments can't even be taken as signs of these things unless we have independent reason to think they are reliable.

      You can only establish the truth of the premises - like proposed principles - intuitively!


      On the AI example. How does one actually assess such a case without merely guessing? In reality none of the science fiction dressing does anything but try to make the probabilities more extreme; and they don't tell us anything on their own. It's clear from the whole structure of argument that what's at question is a principle of authority:

      If an expert in a lot of fields relevant to moral philosophy advocates a moral error theory, we should always revise the probability of moral error theory upward and the probability of some of our moral claims downward.

      If this is false, then either (1) it is false because of the 'always' or (2) it is simply false. If it is simply false, we have no reason to think we should respond to the AI the way you suggest -- it's just a super-expert. If it is false because of the 'always', then we need more information to make sure that this kind of scenario doesn't happen to be one of the exceptions. If it's true, then we should indeed do what you suggest.

      Not always, but it depends on the scenario. But it does not follow that we need more information. The info in my scenario is incomplete, but it's a lot more than just saying that an unspecified expert advocates an error theory. And it's enough to tell that, given that info, one should increase the credence of an error theory. I make that assessment, of course, on intuitive grounds, as we usually make epistemic probabilistic assessments.


      But the question, of course, that needs to be addressed, and that can only be addressed by looking at the underlying principles is the same: Why can't someone, recognizing that his being right and the expert being wrong is unlikely, nonetheless, on the basis of content rather than probability, reasonably hold that this is, in fact, one of the unlikely cases? Granting that you would have to be very lucky, why does the mere fact that you would have to be very lucky pose a challenge to the claim that you actually happen to have been very lucky?

      Sure, in some cases, one may very well do that.
      In the case of the AI with vastly superhuman capabilities, I don't see how that would be doable in practice.
      For example, what assessment of the AI would the person challenge?

    19. Brandon, with regard to the point about objecting to the AI's claim on the basis of content, let's make the matter a bit more detailed:

      There is a single AI. For decades, the AI has been very helpful (as intended) and has made huge progress in medicine, math, physics, and pretty much everything. It's also made huge progress in understanding the psychology of humans and many non-human animals, successfully predicting behavior (of humans and non-humans) far more frequently than it was ever possible before, and moreover, assigning probabilities that seem to match those frequencies. Similarly, it has made a lot of progress in economics (much better models), and also philosophy, finding hidden logical errors in several philosophical views, etc.

      Now, after studying working human brains, doing experiments with many volunteers, etc., the AI (who is still willing to help people as before) says:

      1. There is no single or majority or even normal (under usual conceptions of "normal") human moral system, and even under ideal reflection, there will be fundamental moral disagreements all over the place. In other words, the AI says that the disagreements will not be rare, but even after reflection, and in many of the subjects that people actually debate in ethics, there will be huge variance in the assessments of different humans. In fact, the AI says that the disagreement will persist in most cases, and it will be deep (i.e., not just whether something is slightly less or more immoral).

      The AI says it has already superhumanly figured out where the disagreement will go.
      Can you think of a way in which a human would properly disagree with 1 based on content? (assume there are no other AI).
      If 1. is conceded, then:

      2. The AI says that it can find no other means of ascertaining moral truth.

      Can you think of a way in which a human would properly disagree with 2 based on content?
      The AI is reporting its own inability to find other means (i.e., other than a human moral sense, augmented by superhuman capacity for reflection, but that seems inadequate given 1).
      If 2 is also conceded, then:

      3. The AI reckons that probably, either a substantive moral error theory is true, or the human moral system is inadequate as a guide to moral truth, at least in many common cases.

      Can you think of a realistic way in which a human would properly disagree with 3 based on content?

  9. Richard,

    Regarding the other posts you just linked to, I just read them, and the discussion that ensued. From what I've read so far (I would have to read some of the posts in greater detail), I would say Carl has raised some of the key objections I would have raised. But I have at least two more (for now), though I'm not sure if you're okay with bringing zombie threads back to life. Is it okay if I reply to those threads?

    Replies
    1. Yes, feel free to reply to old threads! (No guarantee I'll have time/interest to return to them, but I may.)

    2. Alright, thanks. Now, after further reading, I see that Carl covered more ground than I had originally thought, but I think I still have a couple of brief extra comments to make.

  10. Richard, with respect to your point that Alice's view is a priori false, I would say that a priori, her view appears to be in the same boat as the alien orcas' moral views (assuming UNR, the assumptions about (ii), etc.), for the following reasons:

    a. From the perspective of her own a priori assessments, she will reach the conclusion that the answer to (ii) is "obviously yes".
    b. The alien orcas also make assessments that look intuitively right to them, even if after reflection they reach at least some conclusions very different from the ones the alien elephants reach (both of which are different from the ones we reach, etc.).

    However, you said that the alien orcas are substantively wrong, and irrational, but they can "rightly" dismiss the debunking argument. If I understand your point correctly, here "substantively" applies also to "irrational" (though I'm not sure what sense of "irrational" that might be), since your point that they can "rightly" dismiss the debunking argument indicates that they're not being epistemically irrational.
    Did I misunderstand your view, and you're saying that the alien orcas are being epistemically irrational as well?
    Or are they not being epistemically irrational, even if Alice is? (in which case, I would like to ask what the difference might be, because they seem relevantly similar to me).
    Then again, after reading the other threads, I see you think (or thought back then?) that we should believe each necessary a priori truth. If so, I would conclude that you think the alien orcas are being epistemically irrational. But if so, in which sense of "rightly" can they rightly dismiss the genetic debunking argument?

    That aside, I'm going to give it one last shot if you don't mind: in another post, I'm going to tackle a math case (don't worry, I'll stop afterwards, I'm not going to keep going forever :-)).
    I didn't want to try an example involving math before because of the potential problem of epistemic skepticism, but I think I've found a way around it, by means of carefully (let's hope) limiting the cognitive alterations the demon is likely to do. Also, a modification of Carl's doomsday machine makes a good moral addition. I think the example E3 is better than E2, because:

    1. It doesn't rely on assumptions about (ii).
    2. It targets a true belief.
    3. It has a moral claim attached to it, which I think might make the intuitive case stronger by making the matter more pressing - even if the a priori belief that is challenged by the genetic debunking argument is a non-moral one.

    Granted, going by what you said earlier, it seems I already know your answers - i.e., you'll disagree with my assessment of the case.
    But I can still hope you'll find the debunking intuitive enough in this case!

    Replies
    1. So, here's the math example:

      E3: Azazel is as before. Bob is a math student who properly reckons, with probability almost 1, that:

      3.i. If the die landed 1, then Azazel rigged a bomb to blow up a schoolbus full of children if the following two conditions obtain:

      C3: The Axiom of Choice and Zorn's Lemma are equivalent on the basis of the ZF axioms (henceforth, "AC is equivalent to ZL").
      D3: Bob fails to press a red button before noon.

      3.ii. If the die didn't land 1, then Azazel rigged the bomb so that it will explode if the following two conditions obtain:

      C3': AC is not equivalent to ZL.
      D3': Bob presses the red button before noon.

      Additionally, if the die did not land 1, then Azazel:

      3.I: Gave Bob a false belief about whether AC is equivalent to ZL, modifying memories as required - but not beyond that.
      3.II: If Bob attempts to prove either that AC is equivalent to ZL or that it's not, Azazel will modify Bob's short-term memory as well as any written record of the proof, leading him to the wrong conclusion.
      3.III: Azazel did not affect Bob's abilities to do logic and Bayesian updating beyond the alterations in question.
      3.IV: Azazel never gave and will never give Bob any beliefs other than the one about AC, etc., or those related to it. In short, no things like ghosts or God or teapots, and no false moral beliefs.

      Let's stipulate that Bob has no way of consulting other people (or books, the internet, etc.) before noon, and also that the bomb has been disabled by a more powerful demon, but Bob has no way of knowing any of that.

      So, Bob contemplates the matter, and his intuitions clearly tell him that AC is equivalent to ZL. Moreover, he does a proof, which he finds persuasive. As it happens, the proof is correct.
      In my assessment:

      a3. Bob is being epistemically irrational if he continues to believe before noon that AC is equivalent to ZL.
      b3. Bob is [at least to some extent; the extreme situation acts as a considerable partial excuse] morally culpable if he pushes the red button.

      But given your previous example in the math case, I take it you think that a3. and b3. are both false?

      For good measure, one can add E4: E4 is like E3 but Bob finds himself believing that AC is not equivalent to ZL, and gives a proof that appears persuasive to him (Azazel modifies the writing and short-term memory to make sure Bob gets it wrong).

    2. An upgrade of E3 (and E4) would be that Bob has only 1 minute to decide whether to press the button, and 3.II is modified so that Azazel does not alter any of Bob's short-term memories, but only older ones.

      That resolves a potential difficulty: an argument that challenges short-term memories might be self-undermining, since it would undermine Bob's own confidence that his interactions with Azazel happened and that his probabilistic assessment was correct, and it might even raise the issue of epistemic skepticism (which I was precisely trying to avoid). Undermining memories is often risky (not just for the genetic debunking argument); still, the evolutionary argument does not undermine any memories, so it does not have that problem.

    3. "you think the alien orcas are being epistemically irrational. But if so, in which sense of "rightly" can they rightly dismiss the genetic debunking argument?"

      Right, I think it's epistemically irrational (at least so far as ideal rationality is concerned) to be substantively wrong about an a priori matter of fact. So the alien orcas are irrational. But they're right to dismiss the debunking argument, because the debunking argument is a bad argument. It may have a true conclusion: as it happens, their beliefs are unjustified. But the debunking argument gives them no reason to think so, and they correctly appreciate this. Now, if only they were to also correctly appreciate that their views are substantively mistaken...

      "given your previous example in the math case, I take it you think that a3. and b3. are both false?"

      Yep, at least so far as ideal rationality is concerned. Again, I grant that there is a sense of non-ideal "rationality" in which your claims here seem right.

      "E4 is like E3 but Bob finds himself believing that AC is not equivalent to ZL, and gives a proof that appears persuasive to him (Azazel modifies the writing and short-term memory to make sure Bob gets it wrong)."

      E4 is epistemically different from E3, as Bob is not in fact believing as a result of mathematically competent deduction, and his resulting beliefs are unjustified (and when he acts on them, he is morally reckless/blameworthy).

  11. Angra Mainyu,

    I have no idea whatsoever what you mean by 'models'. I am talking about modalities. I am very aware that we are talking about epistemic probabilities; my explicit point, which this is now my third time stating, is that in no other context do modalities relate to probabilities the way you claim epistemic probabilities relate to 'epistemic irrationality', which you explicitly state in strongly modal terms -- strong modalities and probabilities simply do not relate this way in general, and you have provided no reason whatsoever to think that they have to do so in an epistemic context.

    Further, it is a bit silly that you are being condescending about epistemic probabilities given that you completely fail to use them in your example:

    For example, X is the hypothesis that two balanced dice in a casino will both land 5 on the same roll. They epistemically should assign about 1/36 (well, a bit lower, since a die might land on an edge). There is no further info about the dice. Alice assigns a bit over 0.5 (not necessarily a specific number), and Bob assigns over 35/36 (but not necessarily a specific number; I'm using the numbers only because you object to "low" and "high" for some reason I don't understand). Then Bob is being more epistemically irrational than Alice.

    Here we have an epistemic strong modality (two different ones, actually -- 'epistemically should' and 'epistemically irrational') but no epistemic probability in sight; this is simply an ontological (sometimes called an 'aleatory') probability based on an identification of possible states of the dice. The probabilities that Alice and Bob are assigning are aleatory probabilities of the dice, not epistemic probabilities, which would have to measure something about them. To be epistemic probabilities, they would have to be measures of credence, degree of certainty, justification, evidential strength, or, in short, of something epistemic rather than of something ontological.
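As a side note, the arithmetic behind the 1/36 figure in the quoted example can be made explicit by enumerating outcomes. The sketch below is purely illustrative and sets aside the caveat about a die landing anomalously:

```python
from fractions import Fraction

# All equally likely outcomes of rolling two fair six-sided dice.
outcomes = [(a, b) for a in range(1, 7) for b in range(1, 7)]

# Outcomes in which both dice land 5 on the same roll.
double_five = [o for o in outcomes if o == (5, 5)]

p = Fraction(len(double_five), len(outcomes))
print(p)  # 1/36
```

On this enumeration, Alice's "a bit over 0.5" and Bob's "over 35/36" are both far from 1/36, with Bob's assignment further off - which is the sense in which the example calls Bob more irrational than Alice.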

    This really makes me think that you need to define how you are using 'epistemic probability' here, since I can see no consistency at all in how you use it, and keep having to guess at what you mean, only to have you tell me yet again that you are using the probabilities in an epistemic way, and then immediately go on to say something about them that I don't find intelligible.

    Actually, we can only know principles by intuition, and we test proposed principles by intuitively assessing hypothetical scenarios.

    Unless by 'intuition' you mean to include "explicitly stating them in a natural or artificial language" and unless by "intuitively assessing hypothetical scenarios" you mean "analyzing the stated principles logically and by use of real evidence", no, we don't. No rational person tests claims about the real world by merely asking whether a made-up science fiction story sounds plausible. One can perfectly well use such stories in inquiry, as I have already noted; but tests of principles are based on logic and evidence, not imagination.

    Replies
    1. Brandon,

      By "models" I mean the boxes, etc., that you seem to be using to model "should", "ought", etc., which are concepts that we usually employ in our lives and are not defined in terms of the model.


      Further, it is a bit silly that you are being condescending about epistemic probabilities given that you completely fail to use them in your example:

      The accusation of condescension is false, but that aside, I don't fail to use them.


      Here we have an epistemic strong modality (two different ones, actually -- 'epistemically should' and epistemically irrational') but no epistemic probability in sight; this is simply an ontological (sometimes called an 'aleatory') probability based on an identification of possible states of the dice. The probabilities that Alice and Bob are assigning are aleatory probabilities to the die, not epistemic probabilities, which would have to measure something about them. To be epistemic probabilities, they would have to be measures of credence, degree of certainty, justification, evidential strength, or, in short, of something epistemic rather than of something ontological.

      No, they are making epistemic probabilistic assessments based on the information that they have about the dice. That's the usual use of probabilistic terms. There is no ontological claim here. Purely for example, the probabilistic assessments of how a die rolls remain even after the die has already been rolled, unless updated with new information.

      For example, if someone rolls a die in a casino, and someone asks me the probability that it landed 5, absent other information I would reckon it's roughly 1/6; but if I then look at the die and see it's a 5, I update the probability to almost 1 ("almost" in case my senses are failing, but that's extremely improbable). That's all epistemic probability. That's the usual use of probabilities.
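The update described here can be illustrated with a toy Bayesian calculation. This is a sketch only; the 0.99 figure for the reliability of one's senses is an invented assumption, not anything claimed in the discussion:

```python
# Toy Bayesian update for the die example.
prior = 1 / 6  # epistemic probability that the die landed 5, absent other info

# Invented assumption: the observer's senses correctly report "5" 99% of the
# time when the die shows 5, and falsely report "5" 1% of the time otherwise.
p_see5_given_5 = 0.99
p_see5_given_not5 = 0.01

# Total probability of seeming to see a 5.
p_see5 = p_see5_given_5 * prior + p_see5_given_not5 * (1 - prior)

# Posterior probability that the die landed 5, after seeming to see a 5.
posterior = p_see5_given_5 * prior / p_see5
print(round(posterior, 3))  # 0.952
```

The posterior falls short of certainty only because of the assumed small chance that one's senses are failing; with more reliable senses it approaches 1, matching the "almost 1" assessment above.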


      This really makes me think that you need to define how you are using 'epistemic probability' here, since I can see no consistency at all in how you use it, and keep having to guess at what you mean, only to have you tell me yet again that you are using the probabilities in an epistemic way, and then immediately go on to say something about them that I don't find intelligible.

      When people talk about "probable", "improbable", etc., unless the terms are defined in some technical sense or a different use otherwise follows from context (it usually does not), they're employing the terms in the epistemic sense.
      I don't have a definition of epistemic probability, just as I don't have a definition of terms like "morally wrong", "morally obligatory", etc.; those terms are intuitively grasped by all of us.


      Unless by 'intuition' you mean to include "explicitly stating them in a natural or artificial language" and unless by "intuitively assessing hypothetical scenarios" you mean "analyzing the stated principles logically and by use of real evidence", no, we don't. No rational person tests claims about the real world by merely asking whether a made-up science fiction story sounds plausible. One can perfectly well use such stories in inquiry, as I have already noted; but tests of principles are based on logic and evidence, not imagination.

      Yes, we do, and no, I don't mean that by "intuition". Rational philosophers regularly test proposed principles by means of assessing hypothetical scenarios. That involves proposed epistemic principles, moral principles, etc.

    2. I am not using Box to 'model' modalities; you asked what I meant by strong epistemic modality and I pointed out, as an explanation, that strong epistemic modalities are what Box is used to model in standard modal logics. Moreover, I don't understand why you would have a problem with it in any case: you keep talking about probabilities in contexts where they can only be models -- that's what epistemic probabilities are, models of epistemic states in terms of probability theory.

      No, they are making epistemic probabilistic assessments based on the information that they have about the dice.

      That is very clearly not what your description indicated. You said they were assigning the wrong probabilities. It's quite an elementary distinction: if I say the probability of a die landing on 1 is 4/25, that is an aleatory probability. If my assignment of that probability is at fifty percent certainty, or with fifty percent justification, or some such, that is an epistemic probability. You seem to be mixing the two illegitimately.

      When people talk about "probable", "improbable", etc., unless the terms are defined in some technical sense or otherwise follows from context (it usually does not), they're employing the terms in the epistemic sense.

      Sometimes, and sometimes not. When they are talking about coins or dice or 'nine chances in ten', they are not using the terms in an epistemic sense.

      Yes, we do, and no, I don't mean that by "intuition". Rational philosophers regularly test proposed principles by means of assessing hypothetical scenarios. That involves proposed epistemic principles, moral principles, etc.

      Again, this is not right. Rational philosophers test proposed principles by analysis and evidence; they use imaginative scenarios as scaffolding to find and develop such analysis and evidence, in the ways I've already noted. People who just put forward hypothetical scenarios to assess are science fiction and fantasy writers, not philosophers. Take Dennett's famous 'intuition pumps', for instance; the whole purpose of them is to act as a crutch helping us focus so that we can do a proper logical analysis and can think through more clearly how our evidence actually relates to the topic. Others use similar scenarios to raise ideas vividly that they supplement with further considerations, or to start an inquiry off. But imaginative stories aren't even the right sort of thing to be a test of truth, rational assertibility, or anything else. And the reason is pretty clear: you can make up all sorts of imaginative stories that tend in all sorts of directions, and the governing principle, plausibility, is even weaker than probability and doesn't have to be actually coherent. Ignoring this is like taking pictures as tests of existence and ignoring Escher, trick photography, and Photoshop.

      In any case, you don't seem consistent here, either. Consider your AI story. What is going on within the story? You have an expert-taken-to-the-limit (the AI) who has rigorously analyzed morality and the accounts given of morality and has closely reviewed the evidence relevant to these things; it concludes that this logical analysis and evidential review suggests a moral error theory. You then keep trying to suggest that moral realists would then have to revise their probabilities for moral error theory and for a lot of their moral views. What are these probabilities? Intuitive assessments. Thus the entire story directly suggests that intuitive assessments should be tested by logical and evidential analysis of principles, and should be adjusted in light of them, not that logical and evidential analysis of principles, or even that the principles themselves, should be tested by the intuitive assessments. Otherwise, why are you suggesting that the moral realists should accommodate the logic and evidence of the AI rather than the AI accommodating the intuitive assessments of the moral realists?

    3. Brandon,


      I am not using Box to 'model' modalities; you asked what I meant by strong epistemic modality and I pointed out that they are what Box is used to model in standard modal logics as an explanation. Moreover, I don't understand why you would have a problem with it, in any case: you keep talking about probabilities in contexts where they can only be models -- that's what epistemic probabilities are, models of epistemic states in terms of probability theory.

      The problem is as follows. Either:
      P1: From the boxes model of epistemic rationality it follows that it's not always epistemically irrational to assign the wrong epistemic probabilities, or
      P2: It is not the case that P1.

      In case P2 is true, then I have no objection to the model; but in that case, your argument based on the boxes model against my assertion that it's epistemically irrational to assign the wrong epistemic probabilities fails.
      In case P1 is true, then the model is inadequate as a model of epistemic rationality, since it denies a transparent conceptual truth.


      That is very clearly not what your description indicated. You said they were assigning the wrong probabilities. It's quite an elementary distinction: if I say the probability of a die landing on 1 is 4/25, that is an aleatory probability. If my assignment of that probability is at fifty percent certainty, or with fifty percent justification, or some such, that is an epistemic probability. You seem to be mixing the two illegitimately.

      The wrong probabilities, in this context, means the wrong epistemic probabilities. That's what I'm talking about. The only reason I used numbers is that you objected to my probabilistic assignments without specific numbers. Also, if by "aleatory probability" you mean "frequentist probability", I actually wasn't using "probability" in that sense. I don't agree with the frequentist interpretation of probability.

      I do think that observed frequencies generally act as a guide to proper epistemic probabilistic assessments.
      Now, epistemic probabilities may or may not assign specific numbers. Usually, they do not. In the case of the die, in my example I did not claim they assigned specific numbers.
      In fact, I said "Alice assigns a bit over 0.5 (not necessarily a specific number), and Bob assigns over 35/36 (but not necessarily a specific number; I'm using the numbers only because you object to "low" and "high" for some reason I don't understand)".


      Sometimes, and sometimes not. When they are talking about coins or dice or 'nine chances in ten', they are not using the terms in an epistemic sense.

      That's a frequentist interpretation, it seems to me.
      I would need more context (as I said, it might follow from context that it's not an epistemic use), but I'd say in most cases, that interpretation is incorrect, at least if terms like "probability" are used.
      One way to see it is as follows: let's say they already tossed the coin, and they had said "1 in 2 chances" of landing tails (up).
      So, if you ask them (after that, but before they see the coin) what the odds are that it's tails, they'll say "1/2", or "0.5", etc., but after they see it landed heads up, they'll say "0" (well, very probably; in reality, they're assigning numbers that roughly reflect the epistemic probability, or else they got it slightly wrong).

    4. Brandon,


      Again, this is not right. Rational philosophers test proposed principles by analysis and evidence; they use imaginative scenarios as scaffolding to find and develop such analysis and evidence, in the ways I've already noted.

      You say "analysis and evidence" as if that excluded intuitive probabilistic assessments, but that's precisely what they do. But at this point, we would be going back and forth asserting opposite views, so I'll leave it there - i.e., I could repeat what I said, you could repeat what you said, etc., but clearly that would not advance any discussion.


      In any case, you don't seem consistent here, either.

      As I said, if you can derive a contradiction from anything I said, I would ask you to write down the argument.


      Consider your AI story. What is going on within the story? You have an expert-taken-to-the-limit (the AI) who has rigorously analyzed morality and the accounts given of morality and has closely reviewed the evidence relevant to these things; it concludes that this logical analysis and evidential review suggests a moral error theory

      Actually, the expert concludes more than that. I made a more detailed account so that you could tell me where the argument fails. But your replies do not address that, but rather, make the comparison with a vaguer case of an expert or superexpert.

      And again, you seem to oppose intuitive assessments to evidential analysis, whereas I'm talking about intuitive probabilistic assessments as a way of analyzing evidence in the first place. Indeed, that is how evidence is analyzed. How would you expect a judge of fact (whether a juror or a professional judge) to assess the evidence? They need to assess whether (for example) the probability that the defendant did it is so high that it's beyond a reasonable doubt. But theory is always underdetermined by observations. They need to assess the matter on the basis of their epistemic intuitions. And they do.

  12. The problem is as follows.

    I don't understand any of this 'problem'. As I explicitly said, I'm not appealing to any model in modal logic; there are very obvious reasons why I wouldn't -- since you have not bothered to explain your use of modalities properly, there is no way to determine which model would be appropriate, for instance, and, more importantly, the entire point of everything I have said on the subject has been to try to figure out what you mean.

    The wrong probabilities, in this context, means the wrong epistemic probabilities. That's what I'm talking about. The only reason I used numbers is that you objected to my probabilistic assignments without specific numbers. Also, if by "aleatory probability" you mean "frequentist probability", I actually wasn't using "probability" in that sense. I don't agree with the frequentist interpretation of probability.

    By 'aleatory probability' I mean aleatory probability; it's a standard term in discussions of probability, due to Hacking, and epistemic probability is very often explained in distinction to it. And I didn't object to your "probabilistic assignments without specific numbers"; I pointed out how your terms were completely useless for determining what you meant, since you didn't (and still don't) define them. And, again, the actual structure of the scenario you gave involved the assignment of numbers not to epistemic states, as it would if you were consistently talking about epistemic probabilities, but to states of dice, which are not by any stretch of imagination epistemic states.

    You say "analysis and evidence" as if that excluded intuitive probabilistic assessments, but that's precisely what they do....And again, you seem to oppose intuitive assessments to evidential analysis, whereas I'm talking about intuitive probabilistic assessments as a way of analyzing evidence in the first place.

    I have several times now pointed out that intuitive assessments can contribute indirectly to logical analysis and evidential review, just not in the way you claim, so trying to attribute to me the claim that they are "opposed" is absurd. I think they mesh very well together, if you don't try to pretend that fictional stories are evidence or that feelings of plausibility are reliable without principles to support them.

    Actually, the expert concludes more than that. I made a more detailed account so that you could tell me where the argument fails. But your replies do not address that, but rather, make the comparison with a vaguer case of an expert or superexpert.

    Since I have never at any point said that your argument fails, your expansion was irrelevant. My explicit argument had been that one can only determine how a scenario like this works by asking the question: What makes an intuitive assessment in this kind of case go beyond merely guessing? And as I noted, there appears to be nothing except general principles about expertise, on which any interpretation of the scenario seems inevitably to depend. You have provided no alternative or supplementary principles, as far as I can see; you've just made up more story details off the top of your head and are trying to get me to say whether your story is plausible, despite the fact that I have been explicitly arguing that plausibility need not be reliable or even coherent and thus doesn't seem to tell us anything in this context.

    And, in particular, since the question at hand is about a specific kind of argument governed by debunking principles, the entire topic at hand is whether there are cogent debunking principles. Everything you have argued seems to be just an excuse not actually to discuss what principles could possibly be used in a viable debunking argument.

    Replies

    1. By 'aleatory probability' I mean aleatory probability; it's a standard term in discussions of probability, due to Hacking, and epistemic probability is very often explained in distinction to it.

      That's what I thought; it's the frequentist interpretation. My previous reply applies.


      And I didn't object to your "probabilistic assignments without specific numbers"; I pointed out how your terms were completely useless for determining what you meant, since you didn't (and still don't) define them

      You didn't "point out" that my terms were completely useless; you claim that that is so. But this has gone on for too long in my view; clearly, there is not going to be any sort of agreement between us, and I'm satisfied I've already made my points clearly enough for any interested readers (but if not, they're welcome to ask), so I'll leave it at that.

    2. It is in fact controversial how far Hacking's aleatory probability relates to frequentist interpretations of probability, in part because the former is understood functionally and in part because there is, contrary to the way people often talk, no single frequentist interpretation of probability, just a family of such interpretations. The point of bringing it up was entirely as a contrast case to epistemic probabilities in the (futile) attempt to get you to explain how you were using epistemic probabilities to get conclusions with epistemic modalities.

      You didn't "point out" that my terms were completely useless; you claim that that is so.

      I have difficulty believing that you are as dense as you are presenting yourself here. As the passage you quote explicitly says, what I pointed out was not that your terms were completely useless, but that they were completely useless for determining what you meant, which has been the entire topic of discussion all along. And the entire result is that you have explicitly stated that you don't know how to define any of your terms, despite claiming that you understand them well enough to get conceptual truths -- which ordinarily depend on definitions -- from them; when I raise questions to try to clarify, you simply give me the runaround, and when I point out that this practice is not helpful for figuring out what you mean, you complain. Thus your satisfaction at your clarity seems utterly groundless.

    3. Brandon,

      I wasn't going to reply anymore, but I guess you convinced me.


      As the passage you quote explicitly says, what I pointed out was not that your terms were completely useless, but that they were completely useless for determining what you meant, which has been the entire topic of discussion all along.

      First, no, that's not the entire topic of discussion. You've been raising a lot of objections to my arguments, and misrepresenting what I'm doing.
      Purely for example, you falsely claimed: "In other words, you are not doing anything that can possibly establish whether the genetic debunking argument is actually successful; you are merely seeing whether you can state a genetic debunking argument in such a way that it looks successful to someone, even if it's not."

      Second, of course I didn't define terms like "epistemically irrational", but for that matter, I wouldn't define "morally wrong" either. But that does not mean that my replies are useless for determining what I mean: examples of usage are not useless for determining what I mean, if it's a commonly used term.


      And the entire result is that you have explicitly stated that you don't know how to define any of your terms, despite claiming that you understand them well enough to get conceptual truths -- which ordinarily depend on definitions -- from them; when I raise questions to try to clarify, you simply give me the runaround, and when I point out that this practice is not helpful for figuring out what you mean, you complain. Thus your satisfaction at your clarity seems utterly groundless.

      That's not at all what happened. But that's on record, so I'll suggest reading the exchange again.

    4. I have not been raising objections to your arguments; I have been explicitly expanding on the two starting sources of my puzzlement in response to your repeated statements that you didn't understand them and then your dogmatic insistence that you didn't need to deal with them. Indeed, it's very easy to lay out the discussion this way:

      (1) I started by asking what principle you used to get the strong epistemic modalities in your conclusions from the probabilities in your premises.
      (2) You said you didn't know the premises, you were just trying to trigger intuitions with hypothetical scenarios.
      (3) I said I didn't understand how you could use such intuitions to address the point in the post, which is about principles, and that I still didn't know how you were getting the modalities from the probabilities.
      (4) You said that we usually don't know the principles in our assessments, and that by epistemically irrational you meant assigning the wrong probability, and asked a number of questions to clarify what I meant.
      (5) I said I didn't see how the lack of knowledge is relevant, since we were in this case dealing with arguments that require specific principles to work, and determining whether they are, e.g., self-defeating, depends on facts about the principles; that your account of epistemic irrationality still left unclear how you were getting that modality from the probabilities without an additional principle; I gave an argument for thinking the problem causing my lack of understanding still existed, and answered your questions.
      (6) You responded by saying you didn't understand what I was asking.
      (7) I expanded by adding more reasons to think that the problem still existed.
      (8) You responded at length on intuitive assessment and said you still didn't understand what I meant about epistemic irrationality.
      (9) You then addressed each of my reasons and said why you didn't understand them.
      (10) I responded by distinguishing the two issues of intuitive assessment (essential for determining how your claim is distinguished from just saying the lucky ticket is lucky), including arguing that principles were actually doing the work, and epistemic irrationality (how do you get the strong modalities from the probabilities).
      (11) You said that the probabilities you were talking about were epistemic probabilities, and said you didn't understand my objection.
      (12) You then insisted that your method of intuition was the only way and simply denied my account of the problem.
      (13) You then expanded on your AI example with more details.
      (14) I said that you don't seem to be treating the probabilities consistently as epistemic probabilities, and denied that your method made sense without looking at principles.
      (15) You denied that you were inconsistent on epistemic probabilities, and rejected my denial that your method made sense.
      (16) I pointed out that you were using probabilities both for states of dice and epistemic states as if they were both epistemic probabilities. I developed my problem with your method, without principles, at greater length.
      (17) You insisted you are talking about epistemic probabilities; you claimed I was treating intuitive assessment and analysis as opposed.
      (18) I clarified what I meant by my distinction on probabilities; I denied that I was treating intuitive assessment and analysis as opposed -- they just aren't functionally the same.
      (19) You pronounced yourself satisfied with your clarity on everything.

      The only possible exception is my increasingly frustrated sarcasm about your method of proving things by fantasy stories.

    5. Brandon,

      Among other things (there was more to the exchange):

      1. You started by saying you were puzzled, claiming that my argument (which you dismissed as a "challenge" in quotation marks) seemed to amount to no more than saying that the lucky ticket in this case was the lucky ticket.
      2. I didn't engage at that point, and continued my conversation with Richard.
      3. You insisted on your puzzlement, and said that there seemed to be some implicit epistemic assumption that's doing all the work. You further claimed that in order to determine whether "one can get around the self-defeat of genetic debunking", one needs to know what exactly the epistemic assumption is.
      4. At that point, you seemed to assume there was a "self-defeat" to get around in the first place. But that puts the cart before the horse, since usually one does not need to list any such principles (which, in any case, when proposed are tested by considering hypothetical scenarios and assessing them intuitively).
      What I didn't realize at that point was what you actually wanted the principle to do. I thought you were asking for a principle to do the work that the principle in Richard's interpretation of Street's argument does. But I didn't realize you were trying to use a principle to go from assigning the wrong epistemic probability to epistemic irrationality, because that seems like a principle to go from acting in a way that is not according to one's moral obligations to acting immorally (i.e., it seems like a conceptual truth).
      5. I replied that "The objective of my scenarios is to trigger clear intuitive epistemic assessments. I don't claim to know what principles underlie them.". I gave an example of my exchange with Richard - who disagreed with some of my arguments, but wasn't raising the same sort of objection, so we were communicating successfully.
      6. You insisted on asking for a general principle, and said you were skeptical about those scenarios.
      7. I granted that scenarios including tampering are problematic; I explained why I picked them, but pointed out that the original arguments did not have it (eventually, I would construct an AI case that didn't have it, either), and pointed out that one doesn't usually know the principles in the first place.
      I also said that assigning a wrong probability is an instance of epistemic irrationality.
      8. You thought I was talking about everyday probabilistic assessments when I said that one usually does not know the principles. You also thought that when I said "assigning a wrong probability is an instance of epistemic irrationality.", that was a principle that would do the work, but said it was obviously false.
      Additionally, you said it was a problem
      9. I said I was "not talking about everyday probability arguments only, but philosophical arguments as well. I'm not sure why genetic debunking arguments of some specific beliefs would be an exception".
      I also said I didn't understand the objection you were raising, since "That assigning the wrong probability is an instance of epistemic irrationality (maybe the only kind, but I want to leave the door open for other options) seems to be a conceptual truth."

    6. 10. You said that the intuition that I intend to trigger (which would be the same intuition) is as follows: "in the hypothetical scenario in question, the genetic debunking argument (rather than some other stuff) succeeds", and I replied to your argument against the conceptual claim.
      11. I further asked for clarification about what you meant by "strong epistemic modalities".
      12. You said "in other words", etc., claiming that what I was doing was something you described, interpreted the AI example in a specific way, and said that, on that basis, the assessment was that we needed more info.
      You also raised further objections to my claim that it was a conceptual truth, and said strong modalities were represented by a Box operator in a certain way, etc.
      13. I objected to your characterization of what I was doing, rejected your assessment of the AI scenario (reasons given here).
      I also replied to your objections to my conceptual point, and in particular replied to your argument using the modalities represented by boxes.
      14. Among other things, you said I wasn't using epistemic probability at all (in the examples), asked me to define it, and, objecting to my claim that "we can only know principles by intuition, and we test proposed principles by intuitively assessing hypothetical scenarios", said you "can see no consistency at all in how you use it" and that you had to "keep having to guess at what you mean, only to have you tell me yet again that you are using the probabilities in an epistemic way, and then immediately go on to say something about them that I don't find intelligible".
      Furthermore, you accused me of condescension.
      15. I objected to your claims about principles and intuitions, and also argued that I was indeed using epistemic probability. I rejected your accusations, but at that point, I was also getting frustrated with the way you were replying to me.
      16. We kept debating the issue of the boxes and the conceptual claims, and whether or not I was using epistemic probability.
      17. You kept telling me I didn't seem to be consistent, and I kept asking you to derive a contradiction from anything I said - which you did not.
      18. You became increasingly frustrated and used sarcasm.
      19. I became increasingly frustrated by the exchange, especially your characterization of what I was doing, and the use of sarcasm. I realized that was not a productive way for me to discuss a matter, so I decided not to raise further objections to most of your points and let it go. I was clearly not planning to continue. However:
      20. You insisted on the "dense" part, which upset me enough to reply again.
      21. You made an account of the exchange, and accused me of "proving things by fantasy stories".
      22. I made an account of the exchange - and of course, I find your accusation out of place.
      23. Anything else?

    7. Look, I tried to discuss the matter with you, and I get you tried to discuss the matter with me.
      It didn't work out, because we ended up not only disagreeing - which often happens -, but both getting more frustrated with each other - which doesn't usually happen in philosophy blogs -, and I suspect also talking past each other on a number of points.
      Given that, it seems to me that there is unfortunately not much room left for any discussion that advances the substantive matters, though we can keep arguing about who said what, and having a bad time while we're doing it.
      Unless you have a better idea as to how to proceed, I think it would be better to leave it at that.

    8. I don't understand what your listing is supposed to show; all the points that you attribute to me are, in the comments they describe, attempts to explain the two sources of puzzlement: how you get the modalities from the probabilities (which starts primarily with my trying to explain why the modalities seem too strong for the premises and ends with my trying, without any success whatsoever, to figure out how you understand 'epistemic probabilities' in getting the modalities in the first place, given that you repeatedly say things about them that don't seem to mesh with any account of epistemic probabilities I have ever come across), and how you can restore the challenge in the face of the question about self-defeat by your method of getting intuitive assessments about stories without looking at the underlying principles that would be involved in the arguments.

      (Contrary to your suggestion, I at no point assumed that the arguments in question involved self-defeat, because a major element of my argument throughout has been that I don't know what the arguments actually are intended to be; the question is how you are ruling out the possibility that there is still self-defeat without actually looking at the principles involved in the arguments.)

      This includes even the frustration aspect, the lines of which are specifically about what you mean by epistemic probabilities (which is not clarified by saying that they are 'epistemic', since it has been obvious from the beginning that they would have to be epistemic probabilities -- the question was how you get the strong epistemic modalities from them, given that epistemic probabilities seem only to get one weak epistemic modalities) and how your method is supposed to address the original issue of showing an argument with no self-defeating principles without ever looking at principles. So here, as elsewhere, I don't know what your point is. How am I supposed to draw any conclusion from an unexplained list?
You seem to leave the 'how' out a lot.

      I have been quite consistent all the way through the discussion about what I've been asking; and I am not really any less in the dark about how your argument is supposed to accomplish what it claims to accomplish than I was in the beginning. But you don't, of course, have any obligation to explain yourself to me or anyone else.

    9. Brandon,

      Among other things, it's supposed to show that it's not the case that the entire topic of discussion was whether my terms were completely useless for determining what I meant and/or that the entire topic was determining what I meant.
      Rather, in addition to the question of what I meant, you've been raising objections to a lot of what I was doing (i.e., claiming that what I was saying was false and/or that I was being inconsistent and/or that I wasn't using the sort of probability I said I was using, etc.), raising accusations, etc.

      With regard to your claim "how you can restore the challenge in the face of the question about self-defeat by your method of getting intuitive assessments about stories without looking at the underlying principles that would be involved in the arguments", that is wrong-headed.
      In fact, Richard only showed that a specific genetic debunking argument (namely, his interpretation of Street's argument) was self-defeating, not that there was a general problem with genetic debunking arguments - and there clearly isn't one. If Richard interpreted Street's argument correctly, then he's right about that argument, but the crux of her argument does not seem to require any such principles - in fact, it can be made by means of an argument based on intuitive probabilistic assessments. But you seem to assume there is a self-defeat to get out of.

      "Contrary to your suggestion, I at no point assumed that the arguments in question involved self-defeat because a major element of my argument throughout has been that I don't know what the arguments actually are intended to be; the question is how you are ruling out the possibility that there is still self-defeat without actually looking at the principles involved in the arguments."
      But you do suggest that they involve self-defeat - even if you don't outright claim so - by suggesting I have the burden of restoring the argument in the face of a self-defeat challenge. That makes the self-defeat issue a live option, whereas I'm merely making an argument using intuitive epistemic probabilistic assessments, which is in general fine unless there is a specific reason to think self-defeat is involved.

      By the way, the AI argument wasn't meant to be just a "superexpert" argument, but a genetic debunking argument, since it made specific claims about what the AI discovered would happen in the case of reflection (i.e., disagreement would persist, etc.). I elaborated more in this post (the genetic debunking part is point 1.; the rest is to prevent potential objections).


      I have been quite consistent all the way through the discussion about what I've been asking; and I am not really any less in the dark about how your argument is supposed to accomplish what it claims to accomplish than I was in the beginning. But you don't, of course, have any obligation to explain yourself to me or anyone else.

      I had no trouble explaining it to Richard, it seems - we disagreed in the end, but we had what seems like a productive conversation -, and I'm willing to explain it to someone else if they ask.
      I tried to explain it to you, but that did not work, maybe because we were looking from very different perspectives and then the matter of mutual frustration got in the way, maybe for some other reason, but I think I've dedicated more than a reasonable amount of time to address your objections, questions, charges, etc.

    10. Brandon,

      I can try once more with the AI example.
      Let's say that one of the many cases in which the AI says that, even under ideal reflection (i.e., consistent reflection, with all of the information that is relevant for that person to make a moral judgment available) from their current perspective, human judgments will vary widely, is the following case:

      C1: The issue is the morality of an abortion in a case where the woman doesn't want a child, isn't in a position to properly raise a child, and isn't in a position to give the child up for adoption with a high chance that the child will be properly raised.

      The AI superhumanly reckons (with extremely high probability; close to 1) that:

      A(C1): Under ideal reflection, some people will judge the behavior obligatory, others permissible but neither obligatory nor praiseworthy, others permissible and praiseworthy, others immoral, but to a low degree, and others heinously immoral. The AI further says that none of those views will get more than 1/4 of assessments, either among present-day humans (the AI took a huge sample), or among present-day normal humans, under any common concept of "normal" [I can try to leave aside the "normal" part if that will be your objection]

      What should Alice do, facing the AI's assessment?
      I guess Alice might challenge the AI's claim about what humans would do after ideal reflection, though that would be difficult to do. But in order to mirror the evolutionary debunking arguments and the objections to them that we are considering, let's say that that much is granted.

      Should Alice reduce her confidence in her assessment of C1 (at least, and leaving aside an error theory for now)?
      It seems intuitively clear to me that she should, because:

      1. A(C1) shows that the human moral sense (even after ideal reflection) is generally not a reliable guide to moral truth on C1, since most humans/normal humans would get it wrong even after ideal reflection, even if to different degrees.
      2. On the basis of that (i.e., barring other pieces of evidence), Alice should give a low probability to the hypothesis that her own moral sense will get it right on C1, and on the basis of that, reduce the credence in her belief on the matter.
      3. She does not have other pieces of evidence that would counter that piece of evidence, and on the basis of which she should or rationally may increase the credence in a significant way. In particular, reflecting on C1 by her own lights is not an adequate counter to A(C1), since her own moral sense is at stake.

      So, that seems like a successful debunking argument.
      Note that there is no particular reason to suspect that it's self-defeating, since the AI's conclusion only questions the reliability of the human moral sense, not of other human faculties.
      There are potential objections here (I suspect based on the "normal" part, but that objection in turn also faces significant difficulties), but self-defeat isn't among them. Still, if you do want to raise an objection, okay, I'm willing to consider it; still, if the objection is that I need to define the terms "rationally", "epistemic rationality", etc, and/or explain the principles that are doing the work, I will admit I have nothing further to add.
      As I see it, the argument works as it is, on intuitive grounds, even if some objections (e.g., based on the "normal" issue; but that's not a problem in the evolutionary debunking case) may require further consideration.
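To make the probabilistic step in 1. and 2. vivid, here is a toy numerical sketch (mine, not part of the example: the particular shares are invented, since A(C1) only stipulates that no view gets more than 1/4, and no general principle is being claimed):

```python
# Toy sketch only: the shares below are invented; A(C1) just says that no
# view gets more than 1/4 of post-(ideal-)reflection assessments.
shares = {
    "obligatory": 0.15,
    "permissible, neither obligatory nor praiseworthy": 0.25,
    "permissible and praiseworthy": 0.20,
    "immoral, but to a low degree": 0.25,
    "heinously immoral": 0.15,
}
assert abs(sum(shares.values()) - 1.0) < 1e-9
assert all(s <= 0.25 for s in shares.values())

# At most one of the five incompatible views is correct, so a randomly
# chosen post-reflection human moral sense lands on the true view with
# probability equal to that view's share -- at most 0.25. Barring other
# evidence, Alice should treat her own moral sense as no better than a
# random draw from this distribution, so the probability that her moral
# sense gets C1 right is low.
p_alice_right_upper_bound = max(shares.values())
print(p_alice_right_upper_bound)  # 0.25
```

Whatever the exact shares turn out to be, the bound stays at 1/4, and that bound is the only feature the intuitive assessment uses.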

    11. Rather, in addition to the question of what I meant, you've been raising objections to a lot of what I was doing (i.e., claiming that what I was saying was false and/or that I was being inconsistent and/or that I wasn't using the sort of probability I said I was using, etc.), raising accusations, etc.

      My original list was given an explicitly stated function: it showed that you can lay out the course of the argument in terms of the two points I had noted, which recur throughout. Logically, what would be required to deal with this would be to give reasons for taking the points not to be directed towards these, which a simple list is simply not structured to do. How could it possibly do so? But this seems to be a pointless argument; you seem to think I am simply lying about what I was trying to do, and if so, there is no point in arguing the matter further.

      I had no trouble explaining it to Richard, it seems - we disagreed in the end, but we had what seems like a productive conversation -, and I'm willing to explain it to someone else if they ask.

      It takes no great survey of the comments thread to see that Richard was not asking the questions I asked. And even that aside, you surely recognize that what is clear to someone might not be clear to someone else, due to any number of differences. But again, you have no obligation to enlighten me in particular. I just had puzzles at the beginning that the discussion has done nothing to resolve -- indeed, has only made more baffling. Sometimes it happens.

      On your new version of the AI example, the details are different, but what has changed structurally? Note that it is not the AI's argument that is the debunking argument: the debunking argument is something being built on the AI's argument as a datum, so the principles used by the AI are not the principles of the debunking argument. And it is in the superstructure that there seems to be the least change.

      For instance, what principle grounds your intuitive assessment (1) if not the sort of principle of authority previously noted? (Neither Alice nor you have any access to the reasoning in question; it all has to be taken on the authority of the purported AI.) This seems the most immediate thing relevant to whether Alice should revise her belief.

      How do you move from a statistical claim about the general population in (1) to a claim about Alice's moral sense in particular in (2) without making the actual content of Alice's moral sense a relevant issue? Usually you can't move from general claims about the population to specific claims about a particular case without looking at the details of the particular case. What makes this different?

      How is the content of her own belief not a piece of evidence potentially relevant to (3)? Consider Descartes's cogito argument: the content of one's perception is in fact relevant to the question of whether it can be held despite the evil deceiver, not just despite the fact that our abilities are put in doubt but precisely because of that fact. Almost nobody thinks Descartes's argument goes wrong here, at the cogito itself, so how does that mesh with the general principle of evidence that has to underlie (3)?

      How is moral sense not relevant to Alice's evidence assessment given that we regularly talk about evidence assessment in moral terms (fairness, openmindedness, intellectual integrity, etc.)? Notice that you explicitly make the assumption that the moral sense is sharply separable from other human faculties so that putting it in doubt leaves the rest intact. And you are right to raise it in the context of self-defeat, because it is an immediate worry if we are talking about self-defeat. But all we have here to block the worry is an assumption.

      I only give these as examples. Structurally we still seem to have the same kinds of problems for the same kinds of reasons.

    12. And since there seems to have been a lot of confusion on this point previously, by saying it has the same kinds of problems for the same kinds of reasons, I do not mean that these are things showing that the debunking argument is self-defeating, which has never been my interest; I mean that these are things raising serious questions about how your entire use of the AI example can address the question of self-defeat at all without looking at the actual principles involved in the actual debunking argument.

    13. Brandon,


      But this seems to be a pointless argument; you seem to think I am simply lying about what I was trying to do, and if so, there is no point in arguing the matter further.

      No, I don't think you were lying. I'm not talking about your belief about what you were trying to do, but about what the exchange was actually about. For that matter, you imply my characterization of what the exchange was about is mistaken (since your characterization is incompatible with mine). Should I say that you seem to think I'm lying? It seems clear to me that the answer is negative. Sometimes, people actually disagree about what the exchange was about. It happens - and it happens more often if people are frustrated.


      It takes no great survey of the comments thread to see that Richard was not asking the questions I asked.

      Certainly. My point was about my willingness to explain my arguments to someone who asks questions about it.

      And even that aside, you surely recognize that what is clear to someone might not be clear to someone else, due to any number of differences. But again, you have no obligation to enlighten me in particular. I just had puzzles at the beginning that the discussion has done nothing to resolve -- indeed, has only made more baffling. Sometimes it happens.
      Of course, what is clear to someone might not be clear to someone else. And my argument isn't clear to you.
      But I did try to clarify my argument to you. Again, for whatever reason, that did not work out, and yes, I agree that sometimes it happens. But it's not as if I didn't try - in fact, I dedicated a considerable amount of time to this, and I've not figured out any way to make you less puzzled by the argument.
      So, I'm not saying I didn't want to explain the argument to you, or replying that I don't have an obligation to explain myself.
      Rather, I'm saying that at this point, I reckon I probably can't explain the argument in a way that would make you not puzzled by it, or even convince you that the argument is more than saying that the lucky ticket is lucky and/or that it's not mired in all sorts of errors. I make that assessment on the basis of our exchange so far and your replies to my posts. Still, I will keep trying with the AI example at least (see the following posts), even if it looks like a long shot. Maybe I can change the approach a little bit; I'll try.


    14. On your new version of the AI example, the details are different, but what has changed structurally? Note that it is not the AI's argument that is the debunking argument: the debunking argument is something being built on the AI's argument as a datum, so the principles used by the AI are not the principles of the debunking argument. And it is in the superstructure that there seems to be the least change.

      For instance, what principle grounds your intuitive assessment (1) if not the sort of principle of authority previously noted? (Neither Alice nor you have any access to the reasoning in question; it all has to be taken on the authority of the purported AI.) This seems the most immediate thing relevant to whether Alice should revise her belief.

      You mean how did the AI manage to figure out where human assessments would go, after reflection?
      Yes, Alice does not know that, and that part would be like an authority argument, though I added conditions that would likely make her less suspicious of the authority (like the AI's behavior). Perhaps I shouldn't have added those conditions, because they may have drawn your attention to secondary features of the argument (you seem to consider those central; that was not the intention).
      However, that was not the central point here. It's a side issue, because I'm trying to mirror the evolutionary debunking argument (EDA) and the objections Richard raised to it.
      Now, doubting that the AI is right about where the human disagreement would go under ideal reflection is akin (in the EDA) to doubting M1 or M1', or doubting Street's assertions about what one should expect evolution to produce.
      It's a reasonable issue to raise if one intends to object to genetic debunking arguments like those, and if Richard had raised such an issue, the discussion would have been redirected to the matter of what evolution would yield, why, and so on. That was actually one of my goals - to deflect the "self-defeating" objection and steer the discussion on EDA towards the question of what to expect from evolution.

      However, given that Richard grants M1 (or Street's points about evolution) for the sake of the argument and says that even then, the EDA fails, the analogous AI situation is that Alice grants (for the sake of the argument, if not for other reasons) that the AI is right about where ideal reflection by humans would lead, but she rejects any claims by the AI about an error theory, or about the warrant of her own beliefs. Still, we don't need such claims.
      Perhaps the following AI variant will make the central issue clearer: Let's say that the AI has yet to make any claims about the probability, truth, etc., of Alice's (or other humans') beliefs. The AI only makes the point about where human assessments would go after ideal reflection, and, in case Alice asks it, claims to have found no reliable guide to moral truth.


      How do you move from a statistical claim about the general population in (1) to a claim about Alice's moral sense in particular in (2) without making the actual content of Alice's moral sense a relevant issue? Usually you can't move from general claims about the population to specific claims about a particular case without looking at the details of the particular case. What makes this different?

      Now I'm not sure what you're asking. If you're asking for a general principle, I don't have it, and I'm not sure what you mean by (1) and (2) here.

      I will wait for clarification before proceeding. Meanwhile, I will address a point I understand:


    15. Notice that you explicitly make the assumption that the moral sense is sharply separable from other human faculties so that putting it in doubt leaves the rest intact. And you are right to raise it in the context of self-defeat, because it is an immediate worry if we are talking about self-defeat. But all we have here to block the worry is an assumption.

      Richard's self-defeat argument does not rely on any suspicion that moral assessments are not separable from other faculties. For example, he grants that the alien orcas - or other beings - are consistent in their assessments, and that those assessments lead them to different moral conclusions. The analogous case here is that there are different sets of moral assessments that would each be consistent with the rest of the human faculties.
      You might raise a different objection, suggesting that there is some self-defeat, but then:

      a. That's a different self-defeat argument, which has not been given.
      b. You seem to assume there is a self-defeat worry to beat. But unless you have a general argument showing that genetic debunking arguments have to deal with a self-defeat worry, I don't see why there would be a self-defeat worry in the first place.

    16. Brandon,

      With regard to the AI case, I think a color example might at least help explain what I'm trying to do:

      Let's stipulate that a picture of the dress is seen as black and blue by about 50% of people (who otherwise seem to have normal color vision), and white and gold by about the other 50% (the numbers weren't actually those in a real case, but I'm considering a hypothetical scenario).
      A question is: what's the picture's (not the dress's) color?
      To be precise, let us say that the picture those people are looking at is the same: an image on the screen of the same computer.
      Let's say that the picture has been looked at by a randomly picked sample of the population of people usually considered to have normal color vision, composed of millions of people.
      Absent other relevant evidence, and before looking at the picture, Alice should assign no greater credence to the hypothesis that it's white and gold (WG) than to the hypothesis that it's blue and black (BB). There are other hypotheses - e.g., there is no fact of the matter as to what color it is -, but it seems intuitively clear that Alice should not assign a higher probability to one of them over the other.
      Now, I'm not proposing a general indifference principle. I'm making an intuitive epistemic probabilistic assessment on the matter. If you do not find it persuasive, then I don't have any further examples to offer.
      But suppose Alice looks at the picture, and the dress looks white and gold to her.
      Is that new piece of evidence relevant in a way that would warrant her assigning a significantly (as opposed to just minusculely) higher probability to WG?
      I reckon the answer is negative. Going by the evidence about what others say, she might assign a slightly higher probability to WG (factoring in hugely improbable scenarios like a massive hoax with millions of people in on it), but not much.
      If she looks at the image before she gets the info about what others say, I reckon that she should reduce the probability she assigns to WG from close to 1 (she usually is justified in trusting her faculties) to at most roughly 1/2 (I'd say lower, because of the probability of "no fact of the matter", but that aside). At any rate, even if you reckon she may properly assign more than 1/2, it's (I hope) intuitively clear that she should assign much less than before she got the evidence about the others.
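      To make the kind of intuitive probabilistic assessment at issue concrete, here is a toy Bayesian sketch of Alice's update (my own illustration, with stipulated numbers; the 95% reliability figure for the "trusting her faculties" case is just an assumption):

```python
# Toy Bayes update for Alice's credence that the picture is white and gold (WG).
# Hypotheses: WG vs. BB (blue and black). All numbers are stipulations, not data.

def posterior(prior_wg, p_see_wg_if_wg, p_see_wg_if_bb):
    """Credence in WG after the picture looks white and gold to Alice."""
    num = prior_wg * p_see_wg_if_wg
    den = num + (1 - prior_wg) * p_see_wg_if_bb
    return num / den

# If Alice simply trusts her faculties (assume seeing WG is 95% likely
# only if WG is true), her perception is strong evidence:
print(posterior(0.5, 0.95, 0.05))  # ≈ 0.95

# Once she learns ~50% of normal-vision viewers see each color, her own
# perception is about equally likely under either hypothesis, so it
# barely moves her credence at all:
print(posterior(0.5, 0.5, 0.5))    # 0.5
```

      The toy model makes the point precise: given the known 50/50 split, Alice's own perception carries almost no evidential weight about the picture's color.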

      So, without knowing the principles, I'm warranted in making those intuitive probabilistic assessments (if you disagree, let me know).
      In the case of (C1), mere disagreement between humans would not have the same impact on Alice's proper probabilistic assessments. But when AC(1) is also granted (which is a statement also about what would happen under ideal reflection), then I reckon Alice should significantly reduce the credence of her belief on the matter.
      That is again an intuitive probabilistic assessment. I could propose some principle or another, but that would only add further claims and speculation, reducing the chances that my argument is right. The argument works on intuitive grounds as it is - or it doesn't, if you think otherwise: we can disagree about that, of course, but I hope at least you don't find what I'm doing puzzling even if you disagree with the assessment (granted, our previous exchange suggests it's not going to work, but I'm trying).

  13. Richard's self-defeat argument does not rely on any suspicion that moral assessments are not separable from other faculties.

    I don't see that this is relevant. (1) I've been taking you to be making a much stronger claim than saying merely that the challenge, if self-defeating, would be self-defeating on different grounds than Richard suggested. Am I wrong in this? Are you in fact intending not to show that genetic debunking arguments can work, but only that Richard's objection and objections similar to it are not on their own able to raise worries about all of them? (2) Your AI argument also makes the argument, insofar as it addresses worries of self-defeat, rely on an epistemic principle that is at least controvertible, and which certainly cannot be derived merely from the scenario despite being essential for interpreting it one way or another. And the question I have been explicitly asking is how your approach is supposed to show that an argument is not self-defeating given that it does not involve looking at whether its principles are self-defeating. And here we see one case in which you seem to have to add in information about a principle in order to block one particular possibility of self-defeat.

    You seem to assume there is a self-defeat worry to beat.

    This is incorrect. The question at hand is how your approach, using intuitive assessment of scenarios without looking at the actual principles of the arguments, establishes that there are genetic debunking arguments that avoid having self-defeating principles. I am not assuming anything about whether there is actually self-defeat or even a general worry about it; I am asking how you can establish your claims about the argument using the approach you use. How does this approach actually let one determine that an argument is not self-defeating? If someone were already worried that it is self-defeating (maybe that's their initial intuitive assessment of it), what does your argument actually do to eliminate that worry?

    So, without knowing the principles, I'm warranted in making those intuitive probabilistic assessments (if you disagree, let me know).

    Warrant is a matter of grounds and support; I don't know what the grounding or support for your intuitive probabilistic assessments is, or even the grounding or support for taking them to be generally reliable in these kinds of matters, so I don't know one way or another whether they are warranted. Unless you mean something else by 'warranted' than I do?

    One question I keep coming to is something like this. Suppose you have a twin, Ahriman, who makes the same assessments that you do, but does so simply by guessing in ways that are not tied to whether the guesses are right or not. (He, of course, calls the guesses 'intuitive epistemic probabilistic assessment'; but his 'intuitive assessments' cannot be assumed to be reliable or even semi-reliable except by accident.) What does your approach involve, if not assessment of actual principles, that distinguishes it from Ahriman's method?

    1. Brandon,


      I don't see that this is relevant. (1) I've been taking you to be making a much stronger claim than saying merely that the challenge, if self-defeating, would be self-defeating on different grounds than Richard suggested. Am I wrong in this? Are you in fact intending not to show that genetic debunking arguments can work, but only that Richard's objection and objections similar to it are not on their own able to raise worries about all of them?

      I actually intend to show they can work if the evolutionary premises (or similar ones) are given (though premises like those can be attacked on different grounds; I wasn't trying to defend them, though I was willing to do so), but in the dialectical context (i.e., the context of discussion of the matter) of Richard's objection, or similar ones, and generally of Richard's discussion of the matter.

      If you leave the context aside, then what it is to show something depends on whom one intends to show that something to. If the other person comes from a very different background, then the same sort of argument won't work. For example, a perfectly good proof of a mathematical theorem may rely on other theorems, but that may not show that the theorem is true to someone not familiar with the other theorems.

      On that note, you said earlier:

      Notice that you explicitly make the assumption that the moral sense is sharply separable from other human faculties so that putting it in doubt leaves the rest intact. And you are right to raise it in the context of self-defeat, because it is an immediate worry if we are talking about self-defeat. But all we have here to block the worry is an assumption

      I already pointed out that Richard's objection does not rely on such suspicion. But I will go further now and say that that sort of thing is not an immediate worry in this dialectical context - i.e., in the context of the discussion in this thread, and generally on this blog.

      If that were so, then for that matter, any moral error theory would be subject to that "immediate worry" of self-defeat (after all, a moral error theory most certainly puts in doubt the moral sense). In the context of the sort of debates we're having here, I don't expect to - and do not intend to - have to defend the claim that a moral error theory is not necessarily contradictory or otherwise self-defeating. That is taken as a given, unless a specific argument is given to the effect that moral error theories are faulty in that fashion, in which case the specific argument is to be discussed.

      Now, I admit it seems pretty improbable to me (based on what I know of error theories) that anyone, after assessing the matter on the basis of present-day theories, has good reasons to hold that moral error theories are suspect of self-defeat (leaving aside things like a psychological defeat of sorts if humans are in practice not able to refrain from sometimes making moral judgments, but that's not a defeat in the sense we're discussing), but that's not the sort of worry I expect to (or was trying to) address in this context.


      (2) Your AI argument also makes the argument, insofar as it addresses worries of self-defeat, rely on an epistemic principle that is at least controvertible, and which certainly cannot be derived merely from the scenario despite being essential for interpreting it one way or another. And the question I have been explicitly asking is how your approach is supposed to show that an argument is not self-defeating given that it does not involve looking at whether its principles are self-defeating. And here we see one case in which you seem to have to add in information about a principle in order to block one particular possibility of self-defeat.

      I don't know what principle you're talking about here. What is it?


    2. This is incorrect. The question at hand is how your approach, using intuitive assessment of scenarios without looking at the actual principles of the arguments, establishes that there are genetic debunking arguments that avoid having self-defeating principles. I am not assuming anything about whether there is actually self-defeat or even a general worry about it; I am asking how you can establish your claims about the argument using the approach you use. How does this approach actually let one determine that an argument is not self-defeating? If someone were already worried that it is self-defeating (maybe that's their initial intuitive assessment of it), what does your argument actually do to eliminate that worry?

      I don't know if I understand you here. But for that matter, you seem to be asking me to show something that - among other things - would clearly show that moral error theories are not self-defeating against any potential self-defeat argument that might come up. If that's what you're asking, then there's been a misunderstanding: I don't think I have that burden in this context, and I wasn't trying to show that. Rather, I would take it that the burden in this context is on the person who rejects an argument for an error theory (or a debunking argument, even if hypothetical) on the basis of a worry that it might be self-defeating. Why would it be so?

      If that person just intuitively reckons that it's self-defeating, then that's a case in which our intuitions on the matter are too far apart to have a conversation about it. Perhaps we could try to figure out how that came to happen (maybe discussing other parts of their framework), but that would take a very long time, and it's definitely not what I intended to do.

      For that matter, someone might go in the direction of Plantinga's EAAN and say that evolutionary debunking arguments are self-defeating because they rely on the premise of unguided evolution, and that's self-defeating. But that's also not the sort of concern I would try to address in this context.


      One question I keep coming to is something like this. Suppose you have a twin, Ahriman, who makes the same assessments that you do, but does so simply by guessing in ways that are not tied to whether the guesses are right or not. (He, of course, calls the guesses 'intuitive epistemic probabilistic assessment'; but his 'intuitive assessments' cannot be assumed to be reliable or even semi-reliable except by accident.) What does your approach involve, if not assessment of actual principles, that distinguishes it from Ahriman's method?

      "My" method is the human usual method as far as I know, and it seems clear that intuitive assessments are generally tied to the question of whether the "guesses" are right or wrong.

      Consider for example a judge of fact (a juror in much of the US system) who has to make an assessment about whether it's been established beyond a reasonable doubt that the defendant is guilty. That's all intuitive probabilistic assessments. The juror will have to leave aside scenarios like "aliens with sufficiently advanced tech frame the guy in such-and-such manner". Those are too improbable to raise reasonable doubts. But that's an intuitive probabilistic assessment. As humans do all the time, in ordinary life, and in important matters (like whether to send someone to prison, or go to war), etc.; that includes of course philosophy (e.g., see my exchange with Richard).
      But you know all of that, so I'm guessing maybe this is another misunderstanding?

      Based on your stipulation, it seems Ahriman has an unreliable cognitive system.

    3. If that were so, then for that matter, any moral error theory would be subject to that "immediate worry" of self-defeat (after all, a moral error theory most certainly puts in doubt the moral sense).

      Again, this is simply an incorrect interpretation of what I am saying. My specific questions were about how your approach deals with self-defeat; it is utterly incorrect to confuse a methodological question with an epistemological position. But this seems to be precisely one of the reasons for the problem in the discussion: you repeatedly talk about what I say as if I were doing the latter even when (as here) I explicitly point out that I am doing the former.

      Consider for example a judge of fact (a juror in much of the US system) who has to make an assessment about whether it's been established beyond a reasonable doubt that the defendant is guilty. That's all intuitive probabilistic assessments.

      It can't just be intuitive probabilistic assessment. Any jury that came to conclusions only on intuitive probabilistic assessments, where those are not clearly regimented and regularized by definite evidential principles and organized procedures and explicitly attested evidence, would be looney-tunes crazy. No reasonable person would set up a judicial system like that. And it is in terms of those evidential principles and explicitly attested evidence. But you keep trying to evade having any definite evidential principles, and the evidence is all literally made-up elements in a hypothetical scenario, so I have no clue whatsoever how you are taking warrant to work here. I see nothing that actually distinguishes your approach to these particular questions from Ahriman's, and every time I try to pin it down, you noticeably fail to tell me anything that would do the trick.

    4. Again, this is simply an incorrect interpretation of what I am saying. My specific questions were about how your approach deals with self-defeat; it is utterly incorrect to confuse a methodological question with an epistemological position. But this seems to be precisely one of the reasons for the problem in the discussion: you repeatedly talk about what I say as if I were doing the latter even when (as here) I explicitly point out that I am doing the former.
      The part of my reply you're replying to is my reply to your earlier statement:

      Notice that you explicitly make the assumption that the moral sense is sharply separable from other human faculties so that putting it in doubt leaves the rest intact. And you are right to raise it in the context of self-defeat, because it is an immediate worry if we are talking about self-defeat. But all we have here to block the worry is an assumption

      That statement of yours clearly questions my reply on the basis that I only have an "assumption" that the moral sense is "sharply separable from the other human faculties so that putting it in doubt leaves the rest intact" as a means of blocking an allegedly "immediate worry if we're talking about self-defeat".
      Even if you explicitly point out that you intended only to raise a methodological worry but not an epistemological one, as a matter of fact and despite your intent, you're rejecting in this context my assumption of separability. You're saying there is some flaw in my reply due to my making that assumption. That is, you're saying my assumption is not enough, and I have to give something else. But of course, a moral error theory also more than puts in doubt our moral sense, and there is no worry - in this context - of self-defeat of moral error theories in general, so the separability is properly taken as a given in this context.

      It can't just be intuitive probabilistic assessment. Any jury that came to conclusions only on intuitive probabilistic assessments, where those are not clearly regimented and regularized by definite evidential principles and organized procedures and explicitly attested evidence, would be looney-tunes crazy. No reasonable person would set up a judicial system like that. And it is in terms of those evidential principles and explicitly attested evidence.

      They have to follow certain procedures to determine which pieces of evidence are to be counted, which is limited within certain bounds by the law. However, once the jurors factor in the pieces that count and the pieces that don't, they need to make an intuitive probabilistic assessment on the basis of those pieces of evidence, and decide whether or not the defendant is guilty. Theory is underdetermined by empirical evidence, so for any theory about what happened (e.g., the defendant wanted to steal the victim's money, so he shot her in the head), there are infinitely many alternative hypotheses that are not compatible with it, and that are also pairwise incompatible. The jurors have to assess that the probability of the hypothesis that the defendant did what he's accused of (which has a certain degree of detail, but is also compatible with different more specific variants) is so high that it reaches the "beyond a reasonable doubt" level. And that's done on an intuitive basis, with no explicit guidelines, rules, principles, etc.

      Moreover, those principles, rules, etc., are reasonable as a means of reducing bias, but in order to come up with them, whoever made them had to make an intuitive probabilistic assessment that they would reduce bias, based on the evidence available to them.

    5. But you keep trying to evade having any definite evidential principles, and the evidence is all literally made-up elements in a hypothetical scenario, so I have no clue whatsoever how you are taking warrant to work here.
      First, I'm not "trying to evade" having any definite evidential principles. I don't have any explicit principles that I know of.

      Second, despite your disparaging characterization about what I'm doing, it's just the way one regularly makes assessments in philosophy discussions, at least in most cases.

      For example, I already gave a clear example of how that works. In fact, a similar example I gave earlier convinced Richard that genetic debunking arguments can work in a color case (even though the other example was less clear because of the introduction of the demon, etc.; the dress example is better), even though he disagreed with my assessment in the moral case, holding that a relevant difference was whether the matter was a priori.

      So, I proceeded to test the proposed principle (namely, the principle that genetic debunking can't work against a priori true beliefs) by means of making intuitive probabilistic assessments in hypothetical scenarios.

      Now, I didn't convince Richard, but he did not object to the propriety of my methodology. In fact he said that "(But I agree this is a pretty wild-seeming commitment I've taken on, and that your example here does advance an intuitively strong case for the possibility of genetic debunking of a priori beliefs.)".

      Quite frankly, I'm baffled by the fact that you keep objecting to my approach - which is a run-of-the-mill, ordinary, orthodox approach in philosophy discussions - without accepting the burden of debunking the usual method in question (that is, my approach), and instead making all sorts of disparaging remarks. You're the one rejecting orthodoxy. You're the one rejecting a method that is what we usually do here. Yet, you give no good reason in support of your methodological skepticism - you keep making disparaging comments, and making parallels that do not hold up.

      I see nothing that actually distinguishes your approach to these particular questions from Ahriman's, and every time I try to pin it down, you noticeably fail to tell me anything that would do the trick.
      You stipulated that Ahriman makes assessments "simply by guessing in ways that are not tied to whether the guesses are right or not.", so you just rigged the scenario against Ahriman.
      By failing to see anything that actually distinguishes my approach from Ahriman's method, you failed to see that my assessments are tied to whether they're right or not, because they're intuitive probabilistic assessments made as humans usually make them, and our epistemic faculty is usually reasonably good at figuring out the truth. If you question that, then you are casting a much bigger doubt on our faculties than the moral error theorist - even if you don't intend to do that, and even though you keep telling me that you're not doing that.

    6. Just to prevent a potential misunderstanding: You keep telling me that you're asking a methodological question, not an epistemic one. In the context of your replies, that clearly indicates you deny raising doubts about our regular epistemic faculties. But by challenging (and disparaging, dismissing, mocking, etc.) what you call "my" method, you're doing just that.

      That aside, I do not enjoy this sort of hostile exchange at all. I tried to end it long ago, trying to give you ways out that would not imply any concession on your part, but you keep coming back. What is your objective?
      I already attempted to explain my approach to you the best I could. I spent a lot of time doing so. I gave different examples - like the dress example - but you keep raising objections that you think are significant (they just focus on the details and miss the central point, though; I made some mistakes in some of the details in some examples, but didn't in others, and in any case, the central point should be obvious by now), and keep coming back with disparaging comments.

      Alright, I get it: you think my approach is more than flawed, silly, contemptible, deserves mockery, etc. (you didn't say all of that explicitly, but you mocked it, expressed contempt, etc., so I do get what you think about it), and I will never persuade you otherwise. Again, I would like to ask what your goal is, at this point.


    7. Quite frankly, I'm baffled by the fact that you keep objecting to my approach - which is a run-of-the-mill, ordinary, orthodox approach in philosophy discussions - without accepting the burden of debunking the usual method in question (that is, my approach), and instead making all sorts of disparaging remarks. You're the one rejecting orthodoxy. You're the one rejecting a method that is what we usually do here. Yet, you give no good reason in support of your methodological skepticism - you keep making disparaging comments, and making parallels that do not hold up.


      I have been out with grading, so apologies for the delay here. But this is precisely another point of the problem: I have already pointed out that your claim here is entirely wrong. Philosophers use intuitions about hypothetical scenarios (1) to begin inquiry into principles; (2) to use as crutches or illustrations for arguments developed independently; (3) to use as parts of arguments when supported by additional premises. We do not use them with the claim that on their own they establish things, because they are not reliable without further support; attempting to use them this way seems irrational and certainly needs to be justified. And I have noted already that common cases of hypothetical scenario use -- like Dennett-style 'intuition pumps' -- serve the uses I have pointed out. Likewise, this approach is not used in ordinary reasoning -- people don't typically solve day-to-day problems by making up bizarre fantasy and science fiction scenarios and reasoning about them instead of the immediate case at hand; even philosophers do it only because they have specific reasons for isolating specific facts -- which requires a broader context of justification.

      I have repeatedly pressed this methodological worry on you, and you have repeatedly attempted to evade it -- and evasion is the correct word by this point. What I find particularly bizarre is your repeated emphasis on 'intuitive probabilistic assessments', without further analysis, while repeatedly attempting to dismiss my 'intuitive probabilistic assessments' about your methodology, which I have backed by additional arguments. Yet one more point in which your method seems to give your reasoning no rational consistency at all.

      Nor is it of any use getting snippy about my "disparaging comments". I have repeatedly pointed out that I don't understand what you are doing and asked you to explain. You have repeatedly chosen not to explain but to hide behind vague and dubious justifications I have already argued against.


    8. I have been out with grading, so apologies for the delay here. But this is precisely another point of the problem: I have already pointed out that your claim here is entirely wrong.

      You have claimed that my claim is entirely wrong. I have pointed out that your claim that my claim is entirely wrong, is entirely wrong.
      My approach is a run-of-the-mill approach - in fact, it's the most common one by far, perhaps alongside finding contradictions.
      When it comes to finding principles, of course they use intuitions. But it's not only to begin the inquiry. It's also to test the proposed principles. In other words, intuitions come first.

      As for the use in daily life, actually we make intuitive probabilistic assessments all the time. The objection that the scenarios are bizarre is a different one, not the one you have been pressing, as one can see from your reply to my jury analogy. On that note, I said:

      P1: "And again, you seem to oppose intuitive assessments to evidential analysis, whereas I'm talking about intuitive probabilistic assessments as a way of analyzing evidence in the first place. Indeed, that is how evidence is analyzed. How would you expect a judge of fact (whether a juror or a professional judge) to assess the evidence? They need to assess whether (for example) the probability that the defendant did it is so high that it's beyond a reasonable doubt. But theory is always underdetermined by observations. They need to assess the matter on the basis of their epistemic intuitions. And they do."

      And.

      P2: "Consider for example a judge of fact (a juror in much of the US system) who has to make an assessment about whether it's been established beyond a reasonable doubt that the defendant is guilty. That's all intuitive probabilistic assessments. The juror will have to leave aside scenarios like "aliens with sufficiently advanced tech frame the guy in such-and-such manner". Those are too improbable to raise reasonable doubts. But that's an intuitive probabilistic assessment. As humans do all the time, in ordinary life, and in important matters (like whether to send someone to prison, or go to war), etc.; that includes of course philosophy (e.g., see my exchange with Richard)."

      You interpreted my sentence "That's all intuitive probabilistic assessments" as ruling out that the probabilistic assessments had to be based on certain pieces of information, but not all. I didn't mean to exclude that, of course. But even assuming for the sake of the argument it was my mistake not to clarify that, the fact remains that the analogy holds, as should be clear by now, and as the following description also shows:

      a. What you call "my" method consists in coming up with a hypothetical scenario and asking someone to make an intuitive probabilistic assessment on that basis. So, the information they have (in order to make the assessment) is the info I give in the description of the scenario, plus some background information that they have.
      b. A juror also has to make an intuitive probabilistic assessment. In this case, the information she has is what is presented to the jury according to some rules, plus her background evidence (she needs background evidence, else she couldn't even understand some of the info given to her during the trial); some specific pieces of information might be excluded even if she has them.

      So, while a. and b. present different pieces of information, the central method in both cases is to make an intuitive probabilistic assessment on the basis of certain information.
      My analogy still holds.
      Now, what is different is that jurors face real - and hence realistic - scenarios, whereas some of the scenarios I presented are not of that kind. That's a serious objection, and I'll address it in the next posts.
      As for your accusations of evasion, they're false, as the exchange shows.

  14. Brandon, you claim "I have repeatedly pointed out that I don't understand what you are doing and asked you to explain. You have repeatedly chosen not to explain but to hide behind vague and dubious justifications I have already argued against."
    That is a gross misrepresentation of what happened. Your false accusation that I chose not to explain and to hide behind "vague and dubious justifications I have already argued against" is another false disparaging comment. Of course - it is obvious by now - you don't realize it's false, so I'm not accusing you of deliberately behaving in that manner.

    With regard to a substantive matter at hand, you raise the issue that the scenarios are bizarre, etc., and that for that reason our epistemic intuitions might not be reliable.
    I will point out that I came up with the demon scenarios in reply to a demon scenario that Richard brought up. Let's say that such scenarios are too bizarre to be used in this context. Then, that works for both sides, and both sides' arguments are blocked.

    However, I've come up with other scenarios, like the dress scenario, which are far more realistic. If you think that's not realistic enough because in practice we wouldn't actually do it (too expensive to get all of those people), I will say there seems to be no good reason to even suspect that our intuitions wouldn't work in that scenario, since what makes it unrealistic is a lack of money or willingness to use money in that way, and that does not seem likely to affect our epistemic intuitions. Of course, that is also an intuitive probabilistic assessment, of the sort we need to make all the time.

    As for my original argument - or Street's, or similar arguments - that was not a case in which the hypothetical scenario is bizarre. There is nothing bizarre about evolution producing things like that on other planets. In fact, part of the argument is that, given the information available to us, we should assess that that's likely to happen. Again, you might question that intuitive probabilistic assessment as well, and if so, I would say it would be bizarre to consider those scenarios bizarre, but either way, I will point out that that is not what we were discussing in this context.

    The question was whether, if other things had evolved, etc., one should in context lower one's credence in the sort of realism that Richard defends, or - fixing the assumption that that sort of realism is true - lower one's credence in the reliability of our moral intuitions.
    Granted, I used some unrealistic scenarios to defend - for example - the claim that even a priori and true beliefs can be targeted by debunking arguments, given the right conditions. Assuming those fail, I would still hold that the original scenario (namely, Street's claim that our moral intuitions have been massively influenced by evolution, and that aliens would likely have been subject to different influences) is realistic enough to be used in that context, and that given that scenario, one should conclude that:

    P3: Very probably, our moral sense is not generally reliable, or
    P4: The sort of realism Richard defends (anything that falls under his definition of "realism", but I disagree with his definition) is very likely false.

    I would further add that P4 is far more probable than P3.

    Of course, I'm making intuitive probabilistic assessments here. If you say that they're not justified because of the alleged bizarreness of the situation, then that works for both sides, also blocking the conclusion that it's not the case that one should conclude P3 v P4. You may demand a principle. I don't have one - I think epistemic intuitions work fine in this case, and I see no good reason to believe otherwise - but then, for that matter, I may demand a principle from anyone who denies that one should conclude P3 or P4.

    Replies
    1. Brandon, this is a continuation of the previous post (i.e., the one that ends in "conclude P3 or P4.").

      Now, let me make this point clear: even if you disagree with my assessment that the original scenario (or the other alien scenarios) is not a bizarre one in which our epistemic intuitions are unreliable, that's a disagreement about who's right on this substantive matter. It shouldn't be a problem for you to understand what I'm doing, how I'm arguing, etc. I have explained to you - repeatedly, and in considerable detail - what I'm doing. The accusations that I'm hiding, choosing not to explain, etc., are wrong.

      I'm doing nothing of the sort. I'm making what I'm doing very clear.

    2. Just to avoid further misinterpretation: you did raise an objection to some of the scenarios on the basis of their bizarreness, and I acknowledged that earlier in our exchange.
      However, that was only an objection to the scenarios involving the demons (and at most to relevantly similar, i.e., "extreme", cases), not a general objection to "my" method, nor what you said you found puzzling, or what gave you the impression that the argument looked like claiming that the lucky ticket was lucky, etc.

