Arun: It is kind of obvious that no simulation of QCD by our computers - today or however advanced in the future - will produce actual quarks.
Why then is it philosophically acceptable to assume that the simulation of minds will produce minds?
Me: Suppose a mad scientist replaced some of your neurons with synthetic parts that were functionally identical. They would thus make no difference to the overall functioning of your brain: you would still have all the same thoughts, feelings, etc. Suppose we kept up the replacement, a few neurons at a time, until your entire brain was synthetic. Would you still have a mind? It seems obvious that you would. One explanation is that what matters for mentality is the information-processing, not the physical substrate in which it occurs. In other words: the mind is software, not hardware.
(I should note, though, that this thesis of Strong AI is philosophically controversial. So it's not quite right to claim that it's "philosophically acceptable to assume" it. Arguments are certainly called for.)
Arun: There are several fallacies here.
One is that little incremental changes can be extrapolated. It is entirely possible that at some point in the process you've outlined, the mind degrades and finally vanishes.
Two is the assumption that using "synthetically produced, functionally identical parts" has anything to do with software or simulation. I mean, I could replace all the parts in my Honda with imitation parts, and it would still run! Does "car-ness" therefore reside not in the hardware?
Say you replace my neurons, one by one, with stem-cell-generated neurons (functionally identical synthetic parts). How does that prove that the mind is software?
If "mind is software," then a description of the algorithms it uses should not have to wait on hardware powerful enough to run them in reasonable time. Where are those algorithms? How far have we gotten with them?
This "mind is software" thesis may be analogous to "DNA is a character string." Apart from encoding information, however, DNA has physical/chemical behavior - beyond the reach of current simulation - that is essential to what it does. Its holding of information cannot be separated into "software" and "hardware."
Philosophy cannot answer these questions; they are a matter for **experimental** science.
Jared: Those aren't fallacies; they're disagreements.
Anyway, you're on the right track. The argument goes that "mind-ness" is in the functionality of the parts. So if you have synthetic parts that function just the same, we'd expect the same results. At the level of an entire brain, we have neither the technology nor the skill to make it work. But when it comes to restoring vision or hearing lost to brain damage, there has been some success.
Philosophy is technical only in thinking, and so allows us to think about how our increasing technologies can be used (cf. Heidegger, Identity and Difference). Analogously, experimental science does not answer; it shows. Hence you need philosophy to do a good part of the explication.
Arun: If the argument given is some kind of hand-waving plausibility argument, then yes, they're disagreements. If it is an attempt at a logical presentation, they are fallacies.
I was merely going for hand-waving at the time. But it's an interesting issue, and one I know little about. Thoughts, anyone?