Wednesday, January 07, 2009

Intellectual Black Holes

The value of freedom sometimes calls for constraint: against selling oneself into slavery, for example. Is there an intellectual analogue of this? We generally value open-mindedness, and a willingness to 'follow the arguments' wherever they may lead. But what if some ideas were so corrupting that to consider them seriously would risk undermining one's future capacity for rational thought? It would then seem that there are some ideas that we must close our minds to, if we truly value open-mindedness (and not just for the present moment). This seems a curious possibility.

Such 'intellectual black holes' might fall into either of two classes:

(1) Psychological traps depend on contingent quirks of human cognitive architecture. These might (in principle) be completely arbitrary: we can imagine a creature whose head explodes if it comes to believe that cows eat grass. More realistically, it seems conceivable that some (even true) beliefs might interact to ill effect with our evolved heuristics, biases, and emotional dispositions.

(2) Rational traps, on the other hand, are general to any rational mind. This makes them more philosophically interesting, insofar as they reveal propositions that are essentially (and not merely contingently) antithetical to rational thought.

Now for the big question: are there any such 'intellectual black holes' -- ideas which, if accepted, would undermine one's rational capacities? (And if so, does that necessarily mean we are justified in believing the "trap proposition" to instead be false?)

Perhaps the most obvious example is epistemic nihilism: the view that there is no such thing as epistemic rationality -- all beliefs and arguments are equally good (or bad), and rational persuasion is impossible. Notice that if someone came to really, truly believe this, then it would be utterly impossible to reason them out of it: they would be incapable of treating anything you said as a reason worthy of consideration. Their mind could only be 'rescued' by some non-rational intervention: brain surgery, perhaps.

What other such cases can you think of?

22 comments:

  1. Radical skepticism comes to mind. If a person truly believed that there could be no access to confirmable truth, they would be unlikely to be persuaded by any experience or argument, since they could always hold that such things are not apodictic and dismiss them.

  2. Also, a third class of trap comes to mind: the existential trap. Perhaps there is some knowledge, access to which would make the demise of thinking beings inevitable -- not as a matter of functional idiosyncrasies (such as the psychological traps), but in a more literal sense.

    Many candidates in this category have been put forward in theology and philosophy, everywhere from atheism to nuclear weapons to writing.

    Writing in particular raises the spectre of a subclass of the existential trap, which one might call the ratchet trap. Knowledge in any given individual usually progresses from possessing few schemata of interpretation to possessing many. Do some of these acquired schemata irrevocably foreclose the possibility of their possessor gaining some other? Does writing, for example, destroy some thought-process native to pre-writing thought, one corresponding to some truth value that a writing schema could never simulate?

  3. You kind of shift the goalposts in mid-argument. You start by asking, "[W]hat if some ideas were so corrupting that to consider them seriously would risk undermining one's future capacity for rational thought?"

    You end by asking, "[A]re there any such 'intellectual black holes' -- ideas which, if accepted, would undermine one's rational capacities?"

    Answering the second question in the affirmative does not at all answer the first question, and does not argue against open-mindedness.

  4. You start by asking, "[W]hat if some ideas were so corrupting that to consider them seriously would risk undermining one's future capacity for rational thought?"

    You end by asking, "[A]re there any such 'intellectual black holes' -- ideas which, if accepted, would undermine one's rational capacities?"


    While they are certainly separate questions, I imagine they are not so far apart as they might seem. I've been reading Peter Watson's Ideas: A History, and one of the things that struck me most about the evolution of thought is the Pandora's Box quality of ideas. To wit, it seems that once someone at some point seriously considers an idea (and bothers to record it or tell someone about it), it is a near certainty that a contemporary or future person somewhere will accept that idea.

    This quality is enhanced by the utility of the idea, which by no means *necessarily* correlates with its truth-value. So, it is possible that an idea will in some context be useful, even as a pedagogical or rhetorical tool, and thus be disseminated widely, guaranteeing that eventually someone will take that idea seriously on its own terms. Could there exist an idea whose utility is so high that we become incapable of discarding it even after it is demonstrated that it obscures some truth-value or is itself alethically flawed?

  5. Here's a kind of case that I think would fall under your psych trap heading.

    An idea that, in order to even be understood, requires such an investment of resources (time, effort, etc.) that the thinker ends up being biased in favor of the idea in order to avoid admitting that their studies have been a waste of time.

    For the interest of avoiding controversy, I won't name any names. But I bet many would find inviting the suggestion that there are various philosophers for whom treating their works as the above sort of intellectual black hole is the best explanation for why they have so many adherents.

  6. "But I bet many would find inviting the suggestion that there are various philosophers for whom treating their works as the above sort of intellectual black hole is the best explanation for why they have so many adherents."

    As an analogy, it is similar to the problem with Popper's falsification schema. It is generally easier to create ad hoc alterations of a theory to fit the available data than to admit the theory is falsified and find a new one. Because of this ease, any challenge to a theory devolves into a degenerate sunk-cost problem.

    Likewise, in the psychological trap you describe, it is probably easier to tweak an idea which has been invested in than to discard it and find a new one.

    (P.S., OT - I'm sorry if it seems like I'm going off on a long-winded tear, but I find this topic fascinating. Let me know if it becomes annoying. :) )

  7. Pete - indeed! Though that sounds like a more localized bias than the all-encompassing 'black holes' I had in mind. (Perhaps this is simply a difference of degree, though?)

    Barefoot Bum - I assumed that "seriously considering" an idea implied some (perhaps small) chance of "accepting" it. If you know in advance that some view (e.g. epistemic nihilism) is unacceptable, then it no longer seems possible to seriously consider it. So this does seem to suffice for my 'open-mindedness' point.

    Nevertheless, you are right to highlight that there is a second way in which "considering" an idea might be risky: namely, it might be intrinsically corrupting just to consider it seriously, independently of whether one comes to believe it or not. (I guess there is also the in-between case whereby considering an idea is necessarily corrupting, but only because it is so tempting that anyone who seriously considers it inevitably ends up believing it. Jesse's talk of "Pandora's box" could be seen as an interpersonal version of this: releasing an idea into the public culture inevitably leads someone to believe it.)

  8. So I have two paradoxes in mind, but I'm not sure if they fall into your ideas of black holes.

    The first is the idea that perhaps words only transfer ideas imperfectly, and the imperfection is so great that it is dangerous even to try to communicate your ideas to other people, because the risk and harm of misinterpretation lead to a perpetual back-and-forth of talking past each other. One could go farther and say that even when two people agree, they only think they are agreeing; in reality the concepts they have in mind are different, and they just can't realize it because they can't compare the concepts themselves, only the really similar-sounding arguments. Such a person might commit themselves to perpetual silence, for instance, and any arguing with them would be ignored. Perhaps not a true black hole...

    The other is similar to epistemic nihilism, except that someone has come to believe that thinking at all is worthless, in that it doesn't actually reveal truth; it's just talking. These people would probably go out of their way to make people irrational or to stop them thinking at all. They would be committed to not thinking about any argument presented to them. I guess it's a bit of an unsophisticated nihilism, but I thought it was worth a try, cause thinking is good lol.

  9. There's the famous epitaph of the ancient logician Philetas of Cos: "the argument called The Liar, and deep cogitations by night, brought me to my death." Not that it is otherwise irrational to cogitate deeply on the semantic paradoxes...

    Jonathan Schooler has done some studies in which subjects' reflecting on why they hold the preferences they do decreased their preferences for all of the options they were presented with. (The options were car insurance plans, I think, but I believe he's had similar findings in other domains more recently.) It's plausible that this sort of reflection irrationally affects your preferences. There are also a bunch of studies (sorry -- no citations) in which recalling some fact or event made subjects' memories of the event less accurate. And my undergraduate cog psych professor expressed the opinion that just considering any idea will raise its plausibility for you. If she's right, then it would be irrational, in a way, to consider any idea you know to be irrational.

  10. Since, so far as I know, there is no reason to believe that the human brain is computationally more powerful than a standard Turing machine (and, if the Church-Turing Thesis is correct, it *can't* be more powerful), it is mathematically certain that there exist inputs that will cause it to choke (fail catastrophically to decide some arbitrary decision problem). For example, it is guaranteed to choke on the halting problem.

    The same logical flaw that prevents a Turing machine from executing certain programs also guarantees the theoretical existence of a program that would cause a halt-and-catch-fire (HCF) operation, much like Hofstadter's example of the phonograph that, no matter how perfectly or finely made, is vulnerable to a record which, when played on it, causes vibrations that shake the phonograph itself to pieces.

    The question remains as to whether in *practical* experience human minds can come into contact with inputs, structures, or ideas that could cause an unavoidable halt-and-catch-fire instruction in the brain.
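
    To make the diagonal argument behind the halting problem concrete, here is a minimal Python sketch (the function names are merely illustrative, and `halts` is exactly the oracle that cannot exist):

```python
# Sketch of the classic diagonal argument. Suppose, for
# contradiction, that a total halting oracle existed:
def halts(program, argument):
    """Pretend oracle: True iff program(argument) halts."""
    raise NotImplementedError  # no such total algorithm can exist

def diagonal(program):
    if halts(program, program):
        while True:        # loop forever if program(program) halts
            pass
    else:
        return             # halt if program(program) would not halt

# diagonal(diagonal) would halt if and only if it does not halt --
# a contradiction, so no implementation of halts() can be correct.
```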

  11. If there are rational traps, I wouldn't be surprised if human minds had some psychological disposition to reject the 'trap propositions' or at least stop thinking about the proposition and its implications.

  12. "If there are rational traps, I wouldn't be surprised if human minds had some psychological disposition to reject the 'trap propositions' or at least stop thinking about the proposition and its implications."

    Humans are probably aided greatly in the filtering task by our sensory liminal barriers; the vast majority of things that could induce a mental state change if we could perceive them are too small/large/fast/slow/etc. for us to become directly aware of them, and the tools that we use to probe these realms of phenomena act as a sort of filter insofar as they are required to "translate" the information into a form we can sense.

    A psychological pass-blocker, on the other hand, might be useful, but ultimately it would be defeasible in the same manner as the system it is meant to protect. Any such filter, even if "perfectly" formed, would have to have a decision algorithm no more powerful than a Turing Machine, and so there is guaranteed to be some input which it can't decide.

  13. Jesse, you seem confused about the implications of the halting problem. The mere inability to answer every possible question correctly does not imply "choking" in any broader ("catastrophic") sense.

  14. "The mere inability to answer every possible question correctly does not imply 'choking' in any broader ('catastrophic') sense."

    True. I didn't mean to imply that undecidability was a sufficient condition for catastrophic failure.

    If the brain (or whatever other system) has no way to break out of a process (because it is undecidable within that system), it will execute it until something outside the system interrupts it, or something breaks. If you feed an undecidable proposition into a machine and it has no way to "pop" out of the operation, it will theoretically continue to calculate forever, or until some practical component actually fails. That is what causes a halt-and-catch-fire in real-world computers: a program causes the computer to access a specific memory address over and over until the chip overheats and melts.

    My point was in part that the halting problem demonstrates that such questions exist, because it is one exemplar that holds, at least theoretically, for all such machines. And, bringing it back to your original question: if such a proposition finds its way into the execution space of a critical system, the consequences could be catastrophic.

    I agree that the mere inability to halt does not necessarily imply catastrophic failure; it just opens the possibility. The halting problem is special because it shows that the test for whether there exists an undecidable proposition for a system is itself undecidable, and so any filters (of the kind Neil indicated) are at best imperfectly capable of screening out propositions that could cause the wider system to hang.

    Luckily enough, the human brain is situated inside a larger system which could practically provide the mind with halting conditions for internally undecidable propositions. Perhaps the human body could "solve" the problem by arbitrarily halting any process which runs for a set duration, or which produces a specific set of phenomenal occurrences (like neural excitotoxicity), much as a computer has literal thermometers situated in key locations that shut it down if it gets too hot. Critically, this requires intervention from outside the system.
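
    As a toy illustration of that outside-the-system halting condition (a hypothetical sketch: the two-second limit and the process setup are arbitrary assumptions, not anyone's actual proposal):

```python
# A runaway computation supervised by an external watchdog: the
# looping process cannot stop itself, so a supervisor with an
# arbitrary time limit terminates it from outside the system.
import multiprocessing
import time

def runaway():
    while True:        # stands in for an internally undecidable process
        time.sleep(0.1)

if __name__ == "__main__":
    worker = multiprocessing.Process(target=runaway)
    worker.start()
    worker.join(timeout=2.0)   # the watchdog's arbitrary halting condition
    if worker.is_alive():
        worker.terminate()     # intervention from outside the system
        print("runaway process halted by the external watchdog")
```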

  15. No, really, you are completely confused. In particular, you are confusing the "mere inability to answer every possible question correctly" with "the mere inability to halt". These are obviously different, because one way to fail to answer a question correctly is to answer it incorrectly (rather than never halting at all).

    The undecidability of the halting problem merely means that there does not exist any general algorithm to determine whether each possible program-input pair will halt. That's all. It absolutely does not imply that every algorithm is itself susceptible to falling into an unhalting infinite loop.

    Anyway, this is way off-topic so I won't discuss it further here. But read the wiki article you linked. (It explains, e.g., that "Given a specific algorithm, one can often show that it must halt for any input".)
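
    (For instance -- a toy example of mine, not from the article -- the following algorithm provably halts on every input, even though no general procedure can decide halting for arbitrary program-input pairs:

```python
# A total (always-halting) function: the loop variable strictly
# decreases and is bounded below, so termination is provable for
# every input -- undecidability of halting in general notwithstanding.
def countdown(n: int) -> int:
    n = max(0, n)
    while n > 0:
        n -= 1
    return n

assert countdown(1000) == 0
```
    )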

  16. "Perhaps the most obvious example is epistemic nihilism: the view that there is no such thing as epistemic rationality -- all beliefs and arguments are equally good (or bad), and rational persuasion is impossible. Notice that if someone came to really, truly believe this, then it would be utterly impossible to reason them out of it: they would be incapable of treating anything you said as a reason worthy of consideration. Their mind could only be 'rescued' by some non-rational intervention: brain surgery, perhaps."

    If you want to read Plato's hilarious take on this, read his Euthydemus. Here the only possibility for "rescue" is deep Socratic irony.

  17. Shubik's Dollar Auction strikes me as the kind of thing you have in mind for a rational trap. On most accounts of a good betting strategy, one is rationally compelled to bet one's entire life savings on a dollar.
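
    A toy simulation makes the escalation dynamic vivid (a rough sketch: the myopic payoff rule and the numbers are illustrative assumptions, not Shubik's exact model):

```python
# Shubik's dollar auction, myopically played: both bidders forfeit
# their standing bid, but only the higher bid wins the prize. At
# each step, raising loses less (if it wins) than quitting does,
# so step-by-step "rational" play escalates to the budget cap.
def dollar_auction(prize=100, increment=5, budget=2000):  # all in cents
    bids = [0, 0]                       # each bidder's standing bid
    turn = 0
    while True:
        raise_to = bids[1 - turn] + increment
        loss_if_quit = bids[turn]       # walk away and forfeit the bid
        loss_if_win = raise_to - prize  # net cost of winning the dollar
        if loss_if_win >= loss_if_quit or raise_to > budget:
            break                       # in practice only the budget stops it
        bids[turn] = raise_to
        turn = 1 - turn
    return bids

print(dollar_auction())  # [1995, 2000]: nearly $20 bid each for a $1 prize
```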

    Intellectual black holes (like real black holes) strike me as carrying with them a serious breakdown in theory. Where are you actually headed -- and what happens when, after a finite time, you (in some ill-posed sense) arrive?

  18. It's worth noting that first-order predicate logic is semidecidable, which suggests that no first-order statement can damage any possible rational mind merely by being considered.
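
    To illustrate what semidecidability amounts to (a toy example of mine, not a claim about first-order provers): a 'yes' can be confirmed by search, but a 'no' keeps the search running forever.

```python
from itertools import count

# Semidecidability in miniature: "does f(n) == 0 for some natural
# number n?" A witness, if one exists, is eventually found; if none
# exists, the search never returns -- the same one-sided asymmetry
# that validity in first-order logic has.
def semidecide_has_root(f):
    for n in count():      # enumerate candidates 0, 1, 2, ...
        if f(n) == 0:
            return True    # halts exactly when a witness exists

print(semidecide_has_root(lambda n: n * n - 49))  # halts: True (n = 7)
# semidecide_has_root(lambda n: n * n + 1)        # would never return
```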

  19. How does it suggest that? (And are you talking about only ideally rational minds, or also, e.g., human psychology? It seems obviously possible that a human mind could be damaged by considering a proposition. Use your imagination.)

  20. Determinism, when it is considered to be true of human cognitive processes, is, I think, one of these intellectual black holes. It would entail that we believe not because of reasons, but because we have been caused to believe. Once this conclusion is accepted, it undermines all rationality, including the reasoning that led to the acceptance of the conclusion.

  21. That's less clear. It could be that we are caused to believe for reasons. (So the inference from 'we have been caused to believe' to 'we believe not because of reasons' is a non-sequitur.)

  22. Ideas, reasons, etc. are only relevant in a causally deterministic framework insofar as they have physical effects; their "meanings" have no importance. If ideas and reasons are physically instantiated as brain states, for example, the physical quantities associated with those brain states are the only things that have effects. So even if reasons are causally relevant in this framework, it is not because of what they mean but because of the physical quantities associated with them. This undercuts rationality in the ways I outlined in my previous post.

