The Edge Annual Question.

Yep, machines could be a problem. They already are: people die in auto accidents every day. Could an "intelligent" machine be worse? Possibly.

~~ Paul
 
 
Thanks for the link.
What do I think about machines that think?
I think machines don't think.
And I write it in pink ink,
'cause I am on the blink... :)
When I sort through a set of options and choose one of them I consider that I am "thinking".

Why is that not "thinking" when a machine does it? (A best route, or a chess move for example).
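
As a concrete illustration of that kind of option-sorting, here is a minimal sketch of a machine examining every candidate and activating the one that scores best; the routes and their travel times are invented for the example.

```python
# Toy illustration: a machine "sorting through a set of options and choosing one".
# The candidate routes and their travel times are invented for this example.
candidate_routes = {
    "via_highway": 42,    # minutes
    "via_downtown": 55,
    "via_backroads": 48,
}

def choose_best_route(routes):
    """Examine every option and return the one with the lowest travel time."""
    return min(routes, key=routes.get)

print(choose_best_route(candidate_routes))  # -> via_highway
```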
 
When I sort through a set of options and choose one of them I consider that I am "thinking".

Why is that not "thinking" when a machine does it? (A best route, or a chess move for example).
You are supposing that thinking and calculating are exactly the same thing. Ask the chess software what it thinks about its own thinking, like we're doing now... Also ask tens of copies of the same software the same question... as we're doing here :)
 
When I sort through a set of options and choose one of them I consider that I am "thinking".

Why is that not "thinking" when a machine does it? (A best route, or a chess move for example).

I agree you are thinking, and changing the entropy in the sphere of your personal environment when you output selections. You can do it for two main reasons: you feel like it, or you have measured the choices and activated the one highest in value.

Computers do the latter extremely well, just as we do. They do not do the first, as far as we have observed them.
 
I liked Arnold Trehub's answer:

"Machines (humanly constructed artifacts) cannot think because no machine has a point of view; that is, a unique perspective on the worldly referents of its internal symbolic logic. We, as conscious cognitive observers, look at the output of so-called "thinking machines" and provide our own referents to the symbolic structures spouted by the machine. Of course, despite this limitation, such non-thinking machines have provided an extremely important adjunct to human thought."


David Gelernter, prof of compsci at Yale, had a good article about this:

The Closing of the Scientific Mind

But the master analogy—between mind and software, brain and computer—is fatally flawed. It falls apart once you mull these simple facts:

1. You can transfer a program easily from one computer to another, but you can’t transfer a mind, ever, from one brain to another.

2. You can run an endless series of different programs on any one computer, but only one “program” runs, or ever can run, on any one human brain.

3. Software is transparent. I can read off the precise state of the entire program at any time. Minds are opaque—there is no way I can know what you are thinking unless you tell me.

4. Computers can be erased; minds cannot.

5. Computers can be made to operate precisely as we choose; minds cannot.

There are more. Come up with them yourself. It’s easy.
 
In truth, very little work has gone into designing a mind that can think for itself. AI has predominantly been about designing hardware and software to achieve certain specific tasks much more accurately or quickly than human minds can.

Future endeavours may be more successful, of course.
 
David Gelernter, prof of compsci at Yale, had a good article about this:

The Closing of the Scientific Mind
But the master analogy—between mind and software, brain and computer—is fatally flawed. It falls apart once you mull these simple facts:

Let's have a look.

1. You can transfer a program easily from one computer to another, but you can’t transfer a mind, ever, from one brain to another.

Some believe this will be possible. Even if it is not, this isn't a killer for the analogy. There is plenty of PC software that doesn't work on my Mac. Perhaps everyone's hardware is a unique "brand" (like a fingerprint). I personally doubt that the hardware and the software can be separated in the way we currently think of them being separated in a computer.

2. You can run an endless series of different programs on any one computer, but only one “program” runs, or ever can run, on any one human brain.

Really? That just seems wrong.

3. Software is transparent. I can read off the precise state of the entire program at any time. Minds are opaque—there is no way I can know what you are thinking unless you tell me.

Throw in some randomness?

4. Computers can be erased; minds cannot.

Huh? Brain injuries? (Death? ;) )

5. Computers can be made to operate precisely as we choose; minds cannot.

Hypnotism? Advertising?
 
2. You can run an endless series of different programs on any one computer, but only one “program” runs, or ever can run, on any one human brain.
Though I'm wary of taking such an analogy too literally, this item seems fragile. Depending on how one chooses to interpret matters, there could be many counter-examples which might contradict this assertion. Not necessarily for you or me, but there are some examples of humans whose behaviour might be interpreted as the manifesting of a different mind within the same physical brain/body.
 
Though I'm wary of taking such an analogy too literally, this item seems fragile. Depending on how one chooses to interpret matters, there could be many counter-examples which might contradict this assertion. Not necessarily for you or me, but there are some examples of humans whose behaviour might be interpreted as the manifesting of a different mind within the same physical brain/body.

But that's actually yet another problem for materialism's fragile basis as well. Will look up the stuff by Parsetti and Braude about this and get back to you.
 
I liked Arnold Trehub's answer:
"Machines (humanly constructed artifacts) cannot think because no machine has a point of view; that is, a unique perspective on the worldly referents of its internal symbolic logic.

A well-written assertion that I believe is true. However, why couldn't a point-of-view program be written and run in the background? It could be effective, and my guess is that in some ways it is already being addressed in AI.

The term PoV works because it implies a connection to both the inner and the external environmental "meanings" of an agent. A computer agent can be programmed to react to personal danger (self-reference) and to react like a complex human. It just does not feel fear or aggression personally.
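
A minimal sketch of what such a programmed "reaction to personal danger" might look like, assuming a made-up Agent class with invented energy and threat_level variables; the point is that the reaction is just a numeric comparison, with nothing felt.

```python
# Hypothetical sketch of "self-reference" as a programmed variable, not a felt state.
class Agent:
    def __init__(self):
        self.energy = 1.0        # the agent's model of its own condition
        self.threat_level = 0.0  # the agent's model of external danger

    def sense(self, threat_level):
        self.threat_level = threat_level

    def act(self):
        # "Reacts to personal danger" by comparing numbers; no fear is involved.
        if self.threat_level > 0.7 or self.energy < 0.2:
            return "withdraw"
        return "explore"

a = Agent()
a.sense(0.9)
print(a.act())  # -> withdraw
```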
 
But the master analogy—between mind and software, brain and computer—is fatally flawed. It falls apart once you mull these simple facts:
It is flawed because in a computer we can separate software and hardware; in the brain we cannot.
To say that the mind is to the brain as software is to the computer is to build in a certain duality that is most likely not there.

Since the way the mind/brain system behaves depends on the very structure of the brain itself, through the ever-varying way its neurons are connected, we cannot separate mind and brain.
And, maybe less importantly, we cannot even separate the brain from the rest of the body without changing the way the mind behaves.

So to say the analogy is flawed is easy, even if it is probably the best one we have at the moment. But I get the impression the author wants to load the analogy in a way that is not warranted.
He is setting up a straw man in making the distinction between mind and brain, and that gives a clue about where he is coming from.
1. You can transfer a program easily from one computer to another, but you can’t transfer a mind, ever, from one brain to another.
But it may some day be possible to emulate a brain and transfer its behaviour to a machine.
2. You can run an endless series of different programs on any one computer, but only one “program” runs, or ever can run, on any one human brain.
It is very hard to know whether he is talking about this in principle or in practice.
Because in principle we can run any possible computer program, given enough pencils, paper, erasers, and lifetimes. It would not be even close to practical, but it would still be possible.

On the other hand, again in principle, a computer could run an emulation of a brain and be successful at it.

3. Software is transparent. I can read off the precise state of the entire program at any time. Minds are opaque—there is no way I can know what you are thinking unless you tell me.
Is the first part of this statement even true?

Let us say we run an expert system based on neural-network learning algorithms, for example one that has learned to recognize handwriting. Reading off the state of the entire program will tell us nothing.
If we want to know what the program 'thinks', we will also have to 'ask' it by running the program and waiting for its answer.
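
A small sketch of that point, using a toy two-weight "network" with arbitrarily chosen numbers: printing the program's entire state is trivial, but to find out what it "thinks" about an input you still have to run it.

```python
import math

# A toy "trained" network: two weights and a bias, values picked arbitrarily.
weights = [0.8, -1.3]
bias = 0.2

def forward(x):
    """Run the network on an input; the only way to get its 'opinion'."""
    s = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1.0 / (1.0 + math.exp(-s))  # sigmoid output

# "Reading off the precise state of the entire program" is easy...
print(weights, bias)

# ...but to know what it "thinks" of an input, we must ask it by running it.
print(forward([1.0, 0.5]))
```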


4. Computers can be erased; minds cannot.
Whatever does he mean? Is he hinting at an eternal afterlife? That is the only way this statement can be true. And even if that is what he means, we have just as much evidence for an eternal processor in the sky where deleted programs are run.

5. Computers can be made to operate precisely as we choose; minds cannot.
Not if we allow the outcomes of previous actions to change the programming, especially if those outcomes are based on interactions with the environment.
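
A bare-bones sketch of that feedback, with an invented reward scheme and learning rate: the program's own parameters end up depending on its history of interaction with the environment rather than on anything the programmer fixed in advance.

```python
import random

random.seed(0)  # for a repeatable run

# Estimated values of two actions, updated from experience.
values = {"A": 0.0, "B": 0.0}
learning_rate = 0.1

def environment(action):
    """Stand-in environment: action 'B' happens to pay off more often (made up)."""
    return 1.0 if random.random() < (0.3 if action == "A" else 0.7) else 0.0

for _ in range(200):
    # Mostly pick the currently best-valued action, occasionally explore.
    if random.random() < 0.1:
        action = random.choice(["A", "B"])
    else:
        action = max(values, key=values.get)
    reward = environment(action)
    # The outcome of the action changes the program's own parameters:
    values[action] += learning_rate * (reward - values[action])

print(values)  # the learned preferences depend on the history of interaction
```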

It is really starting to look as if the author is deliberately narrowing the definition of "computer" to make his point. A point that IMO is not about looking for useful analogies, but about the perceived dual nature of the mind.
 
A computer agent can be programmed to react to personal danger (self-reference) and to react like a complex human. It just does not feel fear or aggression personally.
This is still just an attempt at a superficial appearance. One would have to doubt how it could "react like a human" (complex or otherwise) if it lacks any human feelings.

In fact the only justification I can see for attempting such an endeavour would be to demonstrate its futility. But since we know its futility at the outset, even that would be going further than necessary.
 
One would have to doubt how it could "react like a human" (complex or otherwise) if it lacks any human feelings.
I strongly agree. This gets into the Turing Test.

Programming machines to have the context of self-reference and to act "as if" they have real emotions is pragmatically useful for those trying to build and market personalized robots.
 