But the master analogy—between mind and software, brain and computer—is fatally flawed. It falls apart once you mull these simple facts:
It is flawed because in a computer we can separate software from hardware; in the brain we cannot.
To say that the mind is to the brain as software is to a computer builds in a duality that is most likely not there.
Since the way the mind/brain system behaves depends on the very structure of the brain itself, through the ever-varying way its neurons are connected, we cannot separate mind and brain.
And, perhaps less importantly, we cannot even separate the brain from the rest of the body without changing the way the mind behaves.
So saying the analogy is flawed is easy, even if it is probably the best one we have at the moment. But I get the impression the author wants to load the analogy in a way that is not warranted.
He is setting up a straw man in making the distinction between mind and brain, and that gives a clue about where he is coming from.
1. You can transfer a program easily from one computer to another, but you can’t transfer a mind, ever, from one brain to another.
But it may some day be possible to emulate a brain and transfer its behaviour to a machine.
2. You can run an endless series of different programs on any one computer, but only one “program” runs, or ever can run, on any one human brain.
It is very hard to know whether this guy is talking about this 'in principle' or practically.
Because in principle we can run any possible computer program by hand, given enough pencils, paper, erasers, and lifetimes. It would not be even close to practical, but it is still possible.
On the other hand, again in principle, a computer could run an emulation of a brain and be successful at it.
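To make the pencil-and-paper point concrete, here is a toy sketch (my own example, not from the article): any program reduces to a table of simple rules applied one step at a time, and every step below could be carried out by hand. The machine and its rule table are made up for illustration.

```python
def run(tape, rules, state="start", head=0, max_steps=1000):
    """Apply (state, symbol) -> (write, move, new_state) rules until 'halt'.

    Each step reads one symbol, writes one symbol, and moves the head:
    exactly the kind of step a person could do with pencil and eraser.
    """
    tape = dict(enumerate(tape))  # sparse tape; missing cells read as "_"
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, "_")
        write, move, state = rules[(state, symbol)]
        tape[head] = write
        head += move
    return "".join(tape[i] for i in sorted(tape)).strip("_")

# A rule table that adds 1 to a binary number (head starts on the last bit):
# flip trailing 1s to 0s, then flip the first 0 (or a blank) to 1.
inc_rules = {
    ("start", "0"): ("1", 0, "halt"),
    ("start", "1"): ("0", -1, "start"),
    ("start", "_"): ("1", 0, "halt"),
}

print(run("1011", inc_rules, head=3))  # 1011 + 1 = 1100
```

Slow beyond belief compared to silicon, but nothing in the rules requires a computer at all.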
3. Software is transparent. I can read off the precise state of the entire program at any time. Minds are opaque—there is no way I can know what you are thinking unless you tell me.
Is the first part of this statement even true?
Say we run an expert system based on neural-network learning algorithms, for example one that has learnt to recognize handwriting. Reading off the state of the entire program will tell us nothing.
If we want to know what the program 'thinks', we also have to 'ask' it, by running the program and waiting for its answer.
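A minimal sketch of that opacity, using a tiny network whose weights happen to compute XOR (the numbers are hand-picked stand-ins for learned ones; the whole example is mine, not the article's). The "precise state of the entire program" is just a handful of floats, and staring at them reveals nothing; the behaviour only shows when we run it.

```python
import math

# Hand-picked weights that implement XOR via two hidden units
# (one acting like OR, one like NAND, combined with AND).
W1 = [[20.0, 20.0], [-20.0, -20.0]]   # hidden-layer weights
b1 = [-10.0, 30.0]                    # hidden-layer biases
W2 = [20.0, 20.0]                     # output-layer weights
b2 = -30.0

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def predict(x1, x2):
    h = [sigmoid(W1[j][0] * x1 + W1[j][1] * x2 + b1[j]) for j in range(2)]
    return round(sigmoid(W2[0] * h[0] + W2[1] * h[1] + b2))

# "Reading off the precise state" gives only this:
print(W1, b1, W2, b2)

# What it actually computes shows only when we ask it:
for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, "xor", b, "->", predict(a, b))
```

With real trained networks the situation is far worse: millions of weights instead of eight, and no one picked any of them by hand.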
4. Computers can be erased; minds cannot.
Whatever does he mean? Is he hinting at an eternal afterlife? That is the only way this statement can be true. And even if that is what he means, we have similar evidence for an eternal processor in the sky where deleted programs are run.
5. Computers can be made to operate precisely as we choose; minds cannot.
Not if we allow the outcomes of previous actions to change the programming, especially if those outcomes come from interactions with the environment.
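A toy illustration of that point (my own, not from the article): a program that rewrites its own parameter from environmental feedback. Two copies started from identical code diverge as soon as their environments differ, so even the author cannot say in advance exactly how a given copy will behave.

```python
import random

def run_agent(env_seed, steps=100):
    """An agent whose 'programming' is rewritten by its experiences."""
    env = random.Random(env_seed)    # stands in for an unpredictable environment
    weight = 0.5                     # the agent's mutable behaviour parameter
    for _ in range(steps):
        feedback = env.choice([-1, 1])   # outcome of interacting with the world
        weight += 0.01 * feedback        # the outcome changes the rule itself
    return round(weight, 2)

# Identical code, different histories, different resulting behaviour:
print(run_agent(env_seed=1), run_agent(env_seed=2))
```

The designer chose the learning rule, but not the resulting behaviour; that was chosen jointly with the environment.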
It is really starting to look as if the author is deliberately narrowing the definition of "computer" to make his point. A point that, IMO, is not about looking for useful analogies, but about the perceived dual nature of the mind.