Sunday, October 14, 2007

Digital Minds

The old Scientism thread got a bit derailed by a side-discussion about the possibility of Strong Artificial Intelligence: the view that simulated minds could be real minds. To keep things tidy, I'm moving the comments here instead...
Arun: It is kind of obvious that no simulation of QCD by our computers - today or however advanced in the future - will produce quarks.

Why then is it philosophically acceptable to assume that the simulation of minds will produce minds?

---

Me: Suppose a mad scientist replaced some of your neurons with synthetic parts that were functionally identical. They would thus make no difference to the overall functioning of your brain. You would still have all the same thoughts, feelings, etc. Suppose we kept up the replacement, a few at a time, until your entire brain was synthetic. Would you still have a mind? It seems obvious that you would. One explanation for this is that what matters for mentality is the information-processing, rather than the physical substrate in which it occurs. In other words: the mind is software, not hardware.

(I should note, though, that this thesis of Strong AI is philosophically controversial. So it's not quite right to claim that it's "philosophically acceptable to assume" it. Arguments are certainly called for.)
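
For the concretely-minded, here is a toy sketch of that functionalist intuition in Python. Everything in it - the class names, the threshold rule - is invented for illustration; it pictures the claim that a network's behavior depends only on what its parts do, not on what they are made of. It is a picture, not an argument.

```python
# A toy picture of substrate-independence: two different "substrates"
# realize the same input-output function, so any network built from
# them behaves identically. The threshold rule is invented for this
# example; it is not a model of real neurons.

class BiologicalNeuron:
    def fire(self, inputs):
        # the "wetware" realization of a simple threshold rule
        return sum(inputs) >= 2

class SyntheticNeuron:
    def fire(self, inputs):
        # a different realization of the very same function
        # (for 0/1 inputs, counting active lines equals summing them)
        return len([x for x in inputs if x]) >= 2

def run_network(neurons, inputs):
    # the network's behavior depends only on each part's function
    return [n.fire(inputs) for n in neurons]

mixed = [BiologicalNeuron(), SyntheticNeuron(), BiologicalNeuron()]
synthetic = [SyntheticNeuron() for _ in range(3)]
assert run_network(mixed, [1, 1, 0]) == run_network(synthetic, [1, 1, 0])
```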

---

Arun: There are several fallacies here.

One is the assumption that little incremental changes can be extrapolated. It is entirely possible that at some point in the process you've outlined, the mind degrades and finally vanishes.

Two is the assumption that using "synthetically produced, functionally identical parts" has anything to do with software or simulation. I mean, I could replace all the parts in my Honda with imitation parts, and it would still run! Does "car-ness" therefore reside not in the hardware?

Say you replace my neurons, one by one, with stem-cell-generated neurons (functionally identical synthetic parts). How does that prove that the mind is software?

If "mind is software" then a description of the algorithms it uses should not have to wait on hardware powerful enough to run those in reasonable time. Where are those? How far have we gotten with them?

This "mind is software" may be analogous to "DNA are character strings". Apart from encoding information however, DNA have beyond-the-reach of current simulation physical/chemical behavior that is essential to what it does. Its holding of information cannot be separated into "software" and "hardware".

Philosophy cannot answer these questions; they are a matter for **experimental** science.

---

Jared: Those aren't fallacies; they're disagreements.

Anyway, you're on the right track. The argument goes that "mindness" is in the functionality of the parts. So if you have synthetic parts that function just the same, then we'd expect the same results. On the level of an entire brain, we have neither the technology nor the skill to make it work. But when it comes to restoring vision or hearing lost to brain damage, there has been some success.

Philosophy is only technical in thinking, and so allows us to think of how our increasing technologies can be used (cf. Heidegger, Identity and Difference). Analogously, experimental science does not answer; it shows. Hence you need philosophy to do a good part of the explication.

---

Arun: If the argument given is some kind of hand-waving plausibility argument, then yes, they're disagreements. If it is an attempt at a logical presentation, they are fallacies.

I was merely going for hand-waving at the time, but it's an interesting issue, and one I know little about. Thoughts, anyone?

13 comments:

  1. Two thoughts.

    First, I've yet to hear an argument of the form "a simulation of X is not (or does not constitute, or whatever) X" that would hold for X = "information processing". The view that the mind/brain is an information processing device is of central importance in the sciences of the mind/brain; one might even call it the foundational assumption of cognitive science. To put it more bluntly, I don't know what sense can be made of the claim: "a simulation of information processing is not information processing". Maybe there's a story to be told about meta-information processing or something like that (maybe a Type Theory for information?), but I have yet to hear a coherent one that gets the desired result.

    Second, the view that "mind is software" is not a matter for scientific investigation, because we do not seem to have a clear idea of what software is. We know some things about the software we create, but what "in nature" could/should count as software in the first place seems to be a philosophical question.

  2. "we do not seem to have a clear idea of what software is"

    That's an excellent point. For computers, software is something we give the component parts to do. For brains, it is not so easy to say that the brain is given a mind to "run".

  3. ps: Arun, following Corey's observation, may be right that the brain's "holding of information cannot be separated into 'software' and 'hardware'". But I still don't know where we could draw the line between philosophy and experimental science here.

  4. Richard,

Suppose that the mad scientist replicates your brain, neuron for synthetic neuron. For each neuron in your biological brain, and every information-y relation it stands in to other neurons, the mad scientist has a synthetic neuron in her synthetic brain which stands in exactly those information-y relations to other synthetic neurons.

    First, is the mad-scientist's brain conscious? Second, is the mad-scientist's brain YOU being conscious?

    Now we start the replacement. We take a couple of your neurons and replace them by synthetic neurons, and replace the missing synthetic neurons in the mad-scientist's replicate with your biological neurons. We do this until the brain in your body is synthetic, and your biological brain is perfectly reconstructed in the mad-scientist's lab.

Now the tricky question: is one of those information processors your information processor? When the mad scientist zaps your disembodied brain so that it is no longer in synchronization with the synthetic brain in your head, which conscious facts are yours?

I ask because, while I expect that this cross-over process preserves consciousness, I am not sure that it preserves identity.

  5. Jack - right, personal identity is another issue. Still, you can probably predict my answer: You specified all the facts of the matter when you described the physical and psychological relations that hold between the various person-slices. There's nothing more to know, no 'further fact' about who is really you.

    Corey - that sounds right, but is there any room to question whether information processing is constitutive of mentality?

  6. Richard,

    Yes, absolutely, and I think it gets tricky quite quickly. For example, cognition may be fully explainable in terms of information processing, but full-blown mentality may not (and I don't know how to get a story about consciousness off the ground). In any case, I think it's worth making those distinctions between, say, mentality, cognition, etc. philosophically, and tracking the ways in which scientific practice tracks those distinctions. Perhaps the science will inform our philosophical ideas, but I think it's more likely that it takes some philosophy to get clear about what the scientists are actually doing. So, we may not be happy with an account of so-called mentality that reduces to information processing, thus deciding (after doing some philosophy) that this isn't a theory of mentality at all, but cognition. Anyway, that's the picture I prefer.

  7. "Arun: It is kind of obvious that no simulation by our computers - today or however advanced in the future - of QCD will produce quarks."

    - Dan Dennett has dealt with this issue. A simulation of a quark is not a quark. A simulation of a storm is not a storm - for example it cannot blow your house down.

    But a mind is an object with precisely the property that a simulated mind is identical to the mind itself.

    Other things have this property. For example, a simulated story is a story (although a simulated book is not a book!) A simulated philosophical argument is a philosophical argument. A simulated mathematical proof is a mathematical proof. A simulated game of chess is a game of chess.

    You philosophers must have a technical term for distinguishing between things that are physical objects - like a book - and things which are made of information, like a story or a mathematical proof. Could someone enlighten me as to what it is?

    Bottom line: Simulating a thing that was only made of information anyway leaves it unchanged.

Hey Richard, I have a little thought experiment of my own on this issue. I call it the Record/Replay Thought Experiment. Here it is:

Let's say that we ended up replacing each neuron with a synthetic neuron, and let's suppose that consciousness is left intact.

    Let's suppose that those synthetic neurons are such that we can reset their state to the state they had at some time, and that we can save their input/output signals.

Step 1: Let's save the state at a certain time t1, and also save the inputs/outputs from time t1 to time t2 (including the inputs from outside - the sense data).

Step 2: Let's reset the state of those neurons and provide the same sense-data input... I think we should suppose that this neural network will again be conscious, the same as it was the first time.

Now let's grant this assumption:
If we have two neurons, such that N1's output is connected to N2's input like this:

    N1------------N2

then no properties of the system are lost if we add a box between N1 and N2, as long as the output of N1 is the same as the input to N2:

    N1------[box]-----N2

    Let us continue now...

Step 3: Let's disconnect a neuron from the outputs of the other neurons, and instead provide it with the saved inputs, timed exactly as they would have come over the now-disconnected connections (we already know what they would be, since we are replaying the whole situation). We repeat Step 2. I guess we would expect that the neural network is still conscious. But now let's disconnect ALL the synthetic neurons, one by one, and provide the saved inputs to them during the replay.

As far as the set of synthetic neurons is concerned, every neuron functions as it did before. One can picture the change thus... instead of a direct wire between neurons:

    N1----------------------N2

    we have now a small box in between (the box which replays the saved inputs)... so...

    N1------[replay box]----N2

    Seen from the outside for the limited time of the replay, the replay box doesn't affect the transmission.

    But what we actually have here is a set of disconnected neurons! Would this neural network be conscious? I don't THINK so.

By reductio, the starting network of synthetic neurons can't be conscious (either that, or some of the capabilities this thought experiment needs, like resetting the state or saving the inputs, are impossible).
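
    For concreteness, here is a minimal code sketch of the construction, assuming idealized synthetic neurons - deterministic, discrete-time, and resettable. All the names and the toy update rule are invented for illustration; the point is only that during the replay every neuron is disconnected, yet each computes exactly what it computed during the recording.

    ```python
    # A minimal sketch of the record/replay construction, assuming
    # idealized synthetic neurons: deterministic, discrete-time, and
    # resettable. Names and the update rule are invented for illustration.

    class SyntheticNeuron:
        def __init__(self):
            self.state = 0

        def reset(self):
            self.state = 0

        def step(self, inputs):
            # deterministic: the next state depends only on the current
            # state and the inputs
            self.state = (self.state + sum(inputs)) % 2
            return self.state

    def record(neurons, wiring, sense_data):
        # Step 1: run the connected network from t1 to t2, logging the
        # inputs every neuron receives at every tick.
        for n in neurons:
            n.reset()
        outputs = [0] * len(neurons)
        log = []  # log[t][i] = the inputs neuron i received at tick t
        for sense in sense_data:
            ins = [[outputs[j] for j in wiring[i]] + [sense]
                   for i in range(len(neurons))]
            log.append(ins)
            outputs = [n.step(ins[i]) for i, n in enumerate(neurons)]
        return log

    def replay(neurons, log):
        # Step 3: every neuron is disconnected; a "replay box" feeds it
        # its saved inputs. Each neuron does exactly what it did before,
        # but its outputs now go nowhere.
        for n in neurons:
            n.reset()
        return [[n.step(ins[i]) for i, n in enumerate(neurons)]
                for ins in log]

    neurons = [SyntheticNeuron() for _ in range(3)]
    wiring = {0: [2], 1: [0], 2: [1]}            # who listens to whom
    log = record(neurons, wiring, [1, 0, 1, 1])  # record with sense data
    replay(neurons, log)  # same neuron-level activity, no connections
    ```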

  9. I wrote a more precise argument (with pictures and everything :) ) here

  10. Hi guys, interesting discussion!!

Richard, in your replacement scenario above, do you think it is fair to call what happens afterwards a simulation of the original mind? I would have thought that what you have after replacing all the neurons is still a mind, not a simulation of one...

  11. Tanasije Gjorgoski said:
    "But what we actually have here is a set of disconnected neurons! Would this neural network be conscious? I don't THINK so."

    You are misleading yourself here because you have hidden all of the complexity of the network inside those replay boxes of yours. The neurons are actually no longer doing anything in your scenario, so you may as well get rid of them.

What you are left with is a collection of replay boxes, each of which knows what its entire set of input neurons does for all times between t1 and t2.

You have replaced a neural network with a replay-box network.

In fact, it may help if I take Tanasije Gjorgoski's argument to its logical extreme. A neural network may be replaced - with no loss of information - by an input-output table for each neuron, plus a connection map.

The connection map says which neuron's inputs are connected to which other neuron's outputs.

The input-output table for a neuron says whether or not it fires, given what its inputs are.

    Thus your mind (at some particular time t) is equivalent to a map and a collection of tables.
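
    For concreteness, here is roughly what that might look like in code, assuming binary, stateless neurons - a strong idealization, since real neurons have internal state, so a real table would have to be indexed by state as well. The names and the toy firing rule are invented for illustration.

    ```python
    # A sketch of the reduction, assuming binary, stateless neurons.
    # The firing rule is a stand-in; real tables would be vastly larger.

    from itertools import product

    def make_io_table(fire, n_inputs):
        # tabulate a neuron: map every possible input pattern to
        # whether it fires
        return {ins: fire(ins) for ins in product((0, 1), repeat=n_inputs)}

    # one toy firing rule, shared here by all four neurons
    table = make_io_table(lambda ins: int(sum(ins) >= 2), n_inputs=3)
    tables = {i: table for i in range(4)}

    # the connection map: which neurons' outputs feed each neuron's inputs
    connections = {0: (1, 2, 3), 1: (0, 2, 3), 2: (0, 1, 3), 3: (0, 1, 2)}

    def tick(tables, connections, outputs):
        # one step of the tabulated network: pure lookup, no computation
        # beyond reading the map and the tables
        return {i: tables[i][tuple(outputs[j] for j in connections[i])]
                for i in connections}

    outputs = {0: 1, 1: 1, 2: 0, 3: 0}
    print(tick(tables, connections, outputs))  # -> {0: 0, 1: 0, 2: 1, 3: 1}
    ```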

But it gets worse than this. As far as someone talking to you is concerned, all that matters is the input-output table for your mind as a whole. So we can dispense with the individual neurons' input-output tables and the connection map, and instead just have a single input-output table for the whole mind.

    Tanasije: do you agree that your mind is equivalent to a very long input-output table as I have specified?

  13. Hi Roko,

    You say: "The neurons are actually no longer doing anything in your scenario, so you may as well get rid of them."

Each neuron is working exactly as in the original scenario; everything about each neuron is the same.

But yes, I would agree that the neuron now isn't really doing anything. In a previous post on this issue, I said the same thing... we might as well turn off the neurons from Step 3 of the scenario and go home. But I think that saying this requires a more exact formulation of what "not doing anything" means, and how this might relate to the presence or absence of consciousness.

As for your question about replacement, I added more explanation in the post on my blog. The thing is that systems are characterized by dispositional properties, and I don't think that even the network of artificial neurons can be replaced by simple input/output tables (at least not finite ones).

But consciousness is not a disposition but an occurrent property of the system, so that is what I use, together with the replaying, to get to (what I think is) a contradiction.

