Comments on Philosophy, et cetera: "Actual vs. Possible Disagreement"

Daniel (2010-12-13 22:04):
In many cases, there's an important difference between merely possible evidence and actual evidence.

Suppose I know that thermometers tend to be reliable about the temperature, but are not perfect. My thermometer reads 70 degrees. Of course, I know it's possible that there could be another externally indistinguishable thermometer in these circumstances that would read 75 degrees--the thermometers aren't perfect. But that possibility doesn't undermine my current high confidence that the temperature is close to 70 degrees.

But seeing an actual thermometer that read 75 degrees would be quite different. In that case, I should be much less confident that the temperature is 70.

If you think of people's beliefs as being like thermometers--i.e., as relatively reliable but imperfect and chancy indicators of the truth--then you'll think that learning that somebody actually disagrees with you is importantly different from learning that somebody could possibly have disagreed with you.

More specific to the case at hand: if you thought that pursuing reflective equilibrium was a strategy with a high chance of leading to true beliefs (but not guaranteed to do so), then this analogy between beliefs and thermometers (and the corresponding distinction between the evidential significance of merely possible disagreement and that of actual disagreement) would look pretty good. You'd be worried if you found out about conflicting actual reflective equilibria, but you wouldn't be fazed by conflicting merely possible ones. I suspect, however, that you'll think this is the wrong way of thinking about reflective equilibrium. But it may be what best makes sense of Parfit.

Richard Y Chappell (2010-12-14 16:57):
Hmm, yeah, that's probably the best interpretation. It does seem misguided, though. (For one thing, I'm always suspicious when people lean too heavily on analogies between <a href="http://www.philosophyetc.net/2008/11/perceptual-and-rational-bias.html" rel="nofollow">perception and reasoning</a>!)

We can tell a story about why actually existing thermometers are generally reliable (we designed them to be that way!). It's less clear why we should expect actually existing people to have true normative beliefs--except insofar as they have a tendency to believe the sorts of normative claims I <i>already</i> take to be true: that pain is bad, etc.

It seems especially problematic when we talk about reflective equilibrium, since Parfit seems to allow that the conclusions one would reach through the process of RE are <i>radically</i> dependent upon one's starting points. So it certainly isn't <i>generally</i> reliable. At best, it is reliable for those who start in roughly the right place. But again, what formal reason is there to think that <i>actually represented</i> alternative starting points are more likely to be right than those of merely possible agents? It would seem to depend entirely on the (expected) <i>content</i> of those starting points, which we can only assess for inherent plausibility from the standpoint of <i>our own</i> tendentious perspective.

Nick Beckstead (2010-12-17 12:47):
A non-perception example:

Case 1: You produce a sound proof of a difficult mathematical proposition. 100 actual professional mathematicians claim that you made an error, identifying the very same step in your reasoning.

Case 2: You produce a sound proof of a difficult mathematical proposition. You imagine a possible scenario in which 100 professional mathematicians claim that you made an error, identifying the very same step in your reasoning.

There seems to be a big difference between actual and merely possible disagreement. In the first case, you would have very good reason to significantly decrease your confidence in your conclusion. In the second case, you would have essentially no reason to decrease your confidence.

Richard Y Chappell (2010-12-17 15:49):
Yeah, actual existence makes a difference when we have antecedent reason to consider the critics to be reliable in the relevant domain.

Though even here, I take it, it isn't really their actual existence <i>per se</i> that is doing the work. Even if I merely knew that counterfactually, <i>were</i> there 100 previously reliable mathematicians, they <i>would</i> all disagree with this step of my proof, that would presumably be similarly undermining. (Or even: <i>randomly</i> choose 100 possible expert mathematicians, etc.)

Back to Parfit's case: given how reflective equilibrium works, there doesn't seem to be any reason to consider other people (actually existing or not) who start from different starting points to be reliable.

Nick Beckstead (2010-12-19 12:58):
Yes, it isn't exactly actual existence that matters. Knowing the counterfactual would be equally good. The point is just that ordinary instances of actual disagreement are different from merely imagining that some people disagree with you.

In the case of reflective equilibrium, we certainly don't have experts like mathematicians or anything of that sort. But unless we have reason to think that others are less reliable than we are, it seems we should temper our confidence in cases of disagreement with those other people. (I'm not sure how much it should be tempered, but at least significantly.)

Richard Y Chappell (2010-12-22 19:05):
Update: <a href="http://www.philosophyetc.net/2010/12/ideal-disagreement.html" rel="nofollow">follow-up post</a>