Comments on Philosophy, et cetera: "On Parfit on Knowing What Matters" (Richard Y Chappell)

Angra Mainyu (2018-03-04, 08:58):

A clarification point in re: quantum-generated swampmen. I meant actual disagreement with them, were we to find them. <i>If</i> one assumes that hypothetical disagreement with quantum swampmen is already a challenge to our faculties, I would say that disagreement with actual swampmen would add no more than a negligible amount of further evidence against our faculties.

Angra Mainyu (2018-03-04, 08:21):

Hi Richard,

In re (1), I consider that a feature, not a bug. More precisely, I don't think that's a reason to doubt the objection. On the contrary, I think disagreement with quantum-generated swampmen is not a good reason to doubt the reliability of our faculties, but rather a good reason to doubt the reliability of the faculties of the specific quantum-generated swampman. The situation would be different if we were quantum-generated swampmen and somehow knew that, but in that case I also think the objection gets the right result.

In re (2): While I do think Street's argument (or rather, a variant that avoids what I take to be shortcomings in Street's formulation, but I'll say "Street's argument" for short) would raise a challenge to the reliability of some of our faculties <i>if</i> one insisted that there would be disagreement rather than miscommunication (though I don't think that's the proper answer), actual disagreement would provide a piece of evidence that is independent of whether Street's argument succeeds on the basis of counterfactual considerations, and more evidence in case it does succeed.

More precisely, actual disagreement would give us further evidence that humans do not normally start with the right bedrock/starting points - either directly, through disagreement among humans, or indirectly, in the aliens variant, by providing further evidence that evolution does not generally produce species whose members start with the right bedrock/starting points. Thus, I think this would add evidence to Street's argument, making it proper for us to lower the probability that we got a correct moral sense, regardless of how much we had already lowered it (a little or a lot) due to Street's argument.

Of course, this lowering is also conditional on the event "there is disagreement rather than miscommunication", but Parfit believed that, so I think this would be a defense of his statement in line with his views - though, as I mentioned, I don't know whether he would have defended it along those lines.

That said, Parfit said that ideal disagreements would be a <i>much</i> weaker challenge, rather than merely a weaker one. The "much" part is more difficult to defend, but then again, that's probably because of my take on humans and aliens under ideal conditions, etc. I'm not sure what Parfit thought about it, but <i>if</i> Parfit expected that, were future human philosophers to meet alien ones, there would be agreement on moral matters (under ideal reflection, perhaps), that would explain why he reckons the actual challenge would be much stronger.

Also, I don't know what your take on the aliens is. Do you think they would very likely agree with humans on ideal reflection about morality? In any case, I think that, all other things equal, the higher the probability you assign to ideal agreement with aliens, the stronger the challenge from actual (alien) disagreement would be, relative to the challenge from hypothetical disagreement (and a similar point holds for actual human disagreement under ideal conditions, given your present probabilistic assessment of human agreement under ideal conditions).

Richard Y Chappell (2018-03-04, 07:02):

Hi Angra, that's an interesting way to develop the argument from disagreement - sort of collapsing it into Street's "moral lottery" objection. I'd just note a couple of reasons to be dubious of this: (1) it makes the epistemic relevance of disagreement depend on extrinsic factors: disagreement with quantum-generated swampmen, for example, would not seem relevant to assessing the reliability of our own faculties. (2) It's not clear that we really need actual disagreement to get the worry off the ground (as Street's own argument shows).

Richard Y Chappell (2018-03-04, 06:51):

Thanks Brandon, that's a helpful way of developing the objection. One worry is that when you extend the criticism to one's entire (normative) belief <i>system</i>, the insistence on a non-question-begging defense becomes an excessively skeptical one. So I think there are (at least) two reasonable responses to that kind of systematic skeptical worry: (i) the standard coherentist / reflective-equilibrium style reply that we can legitimately assess small subsets of our beliefs in light of our other beliefs, whereas the system as a whole is not really up for dispute "all at once" like that; or (ii) that what matters is the intrinsic properties of the system - such as whether it accurately reflects the facts about intrinsic credibility / self-evidence, and reliably yields true <i>a priori</i> beliefs about its domain - not how one came by the system in question. The latter sort of view is the one I develop in depth in my paper.

To support this, suppose a quantum fluke results in an intrinsic duplicate of Euler (or some other great mathematician) emerging from a bolt of lightning striking a swamp. It seems to me that Swamp-Euler is also a great mathematician, and can have all sorts of justified mathematical beliefs (perhaps after working through a proof on some conveniently generated swamp-papyrus), even though it's a total fluke that his brain is set up to be so good at math. Although the swampman might just as easily have been created with an incompetent brain, <i>given that he actually has a highly competent brain</i>, the results he gets are as reliable, justified, and generally epistemically meritorious as one could possibly want. (Whether he could <a href="http://www.philosophyetc.net/2005/10/know-show.html" rel="nofollow">demonstrate this to a skeptic's satisfaction</a> is a separate matter, but not one I take to necessarily undermine his actual epistemic status.)

Angra Mainyu (2018-03-02, 22:27):

Hi Richard,

In light of your view on the weight of disagreement, I don't think you'll find the following argument in support of Parfit's statement persuasive, but one way one might defend it is as follows (I don't know whether Parfit would have defended his statement in a similar manner, though).

a. Actual disagreement among humans, for non-procedural reasons, would provide good evidence that humans do not have a generally reliable moral sense (or sense of whatever the disagreement is about). More specifically, it would be evidence that humans get the starting rules and/or values wrong very often.

b. You're also human, so the probability that you got the right moral starting points is not higher than the probability that some other human did - factoring in, at this point, the fact that there is such disagreement, and before factoring in other considerations like your reasoning from your own starting points with regard to rules and/or values.

c. Further evidence that you might get from reasoning on the basis of your starting points is not going to yield an increase in the probability that you got the initial values right.

d. As long as the people who disagree with you are roughly as good as you are at the procedural stuff, you very probably will not find a way to raise the probability from a very low level.

e. Deep disagreement (or, I'd say, "disagreement") with advanced aliens would be (I think good) evidence that evolution does not generally produce intelligent entities with the right initial values, as long as one insists on the further assumption that there would be disagreement rather than miscommunication with aliens. (My actual take is that they would be talking past each other - or rather, that that would be so if they did not realize they mean different things; but civilizations advanced enough to make contact with each other and discuss these matters would very likely have figured that out, so they would acknowledge that they're talking about different things, rather than talk past each other. But I'm assuming otherwise for the sake of the argument.)

Granted, it might be objected that you may properly use your own assessments as good evidence that the people who disagree with you got the starting points wrong, because you assume your own starting points about morality to be right. Then you would conclude (depending on whether the disagreement is with humans or aliens) either that, even if evolution does not generally lead to having the right values, humans got lucky, or that, even if humans don't generally have the right starting points, you personally got lucky. I believe this is a mistake: while we can't "jump out of our heads" and assess the odds that all of our starting points are correct, we can properly assess a subset of them, such as the moral starting points; the "lucky" assessment is not right, and it would not be proper to dismiss evidence of unreliability stemming from the origins of the starting points.

Brandon (2018-03-01, 11:13):

I think your response to the hypnotist example is right, but I wonder if there might not be more to be said on the other side. I've only had a chance to glance briefly at the Parfit response, so I'm not sure whether this is strictly consistent with Parfit's own view, but I had the impression that part of the point was that he thinks of normative beliefs in something like a foundationalist way: they make up a system, depending on certain beliefs or, better, certain kinds of beliefs. So what really seems to be at issue for him is that the problem of justifiability in the incest example can be extended to any normative belief whatsoever - but (he thinks) any attempt to justify any of those normative beliefs in terms of other beliefs is going to have to treat other normative beliefs as true. Thus the problem with the hypnotist is not the bare fact of the causal story but the fact that the causal story includes indifference to the truth of the belief in question; and if this same kind of causal story can be given for all beliefs, then the whole mass of normative belief is itself in question, being something we only believe by (as it were) the flip of a coin and which we cannot justify except by assuming some part of it to be true. That is, the causal conditions don't give a reason to think beliefs of this kind justifiable, and the beliefs we are talking about can't be justified by anything except other beliefs of the same kind.