A robot prepared for self-awareness

I'm just saying. I don't have an allegiance to the idea or anything. But perhaps it is not impossible? Nature is still capable of surprising us.
Oh, okay. I didn't think you were fond of the "as long as it's possible, it's worth consideration" approach.

Linda
 
Well, I'd have to concede that I'm not, as a rule. But it is a bit of a catch-22. New discovery *might* lie in considering something at present insufficiently considered. And yes, there's always the risk that we'd be entirely wasting our time.
 
I know a lot of people here think it's very unlikely an artificial brain could exhibit ESP, but I can't see that there would be any kind of logical contradiction involved in that, as nbtruthman seemed to be implying.

I also don't think that (as Hurmanetar's comment suggests) a lot of people have really thought through the implications of the "brain as a transceiver" concept. Namely, that an artificial physical system - whether composed of biological or non-biological components - could be conscious in just the same way that we are, if it was capable of simulating the relevant transception processes. How would that be different in practice from the abhorred idea of a conscious artificial intelligence?

One framework for an explanation of psi, ESP and afterlife evidence is the notion that consciousness is really a property of souls, which express themselves in and interact with the physical world through physical bodies and brains. If so, an artificial system emulating brain data processing would by itself not experience consciousness (or anything at all). No matter how well it emulated consciousness in the Turing test, there would be nobody home inside - it would still be just a very complex mechanism in action. Unless, that is, the souls decided for some strange reason to try to inhabit and express through the artificial mechanism, and were successful. Maybe other, nonhuman entities could also decide to come into the physical this way. The consequences would be impossible to predict, and could be calamitous to our civilization. Sounds like an interesting new sub-genre of science fiction or science fantasy.
 
So you're saying that consciousness resides in the soul, and the soul has a choice whether to inhabit the body/brain, and you think souls would choose not to inhabit artificial bodies/brains (why?), and therefore they would just be unconscious machines emulating consciousness, but with "nobody home".

A very obvious question there is what would happen if the souls chose not to inhabit a human body/brain. What if there were no takers? Presumably the same would apply. Given that we can know nothing of the criteria souls would use in making their choice, how could we be sure that a substantial proportion of the human race weren't zombies, just emulating consciousness?

Again, I don't get the impression that the implications of this viewpoint have really been thought out.
 
A very obvious question there is what would happen if the souls chose not to inhabit a human body/brain.

I've heard the idea put forward (perhaps not entirely seriously) that, given the current world population, there aren't enough souls to go around, and that's why there seem to be so many people with no conscience. :)

Pat
 
So you're saying that consciousness resides in the soul, and the soul has a choice whether to inhabit the body/brain, and you think souls would choose not to inhabit artificial bodies/brains (why?), and therefore they would just be unconscious machines emulating consciousness, but with "nobody home".

A very obvious question there is what would happen if the souls chose not to inhabit a human body/brain. What if there were no takers? Presumably the same would apply. Given that we can know nothing of the criteria souls would use in making their choice, how could we be sure that a substantial proportion of the human race weren't zombies, just emulating consciousness?

Again, I don't get the impression that the implications of this viewpoint have really been thought out.

I guess we really can't be sure. That's one of the intriguing possibilities opened up by this hypothesis. There might be evidence of this in several known phenomena: the existence of many sociopaths and psychopaths with no human empathy (soulless zombies?), and the frequent incidence of unexplained sudden infant death syndrome and stillbirths ("no takers" cases where the souls decided to intervene to prevent the creation of zombies?). Of course, if the dualistic soul habitation hypothesis is wrong, we are left with other possible explanations for psi, ESP and afterlife evidence that also have big problems.
 
And the only way of telling whether someone is a zombie is to test whether they have psi abilities?
 
I really don't understand the seemingly ubiquitous proponent position that consciousness cannot occur within a machine. After all, isn't that the claim about what the human brain is? A "corollary" machine that is not capable of producing consciousness in and of itself? It seems contradictory to me. What is so magical about the brain that it can interface with consciousness? Nothing, right? It's consciousness that's doing all the work, not the scrappy, physical-only meat pile in your head. Right? So then why is it so hard to entertain the idea that consciousness might also pass through or enter a machine that is set up to mimic what a brain does? The only argument against the concept of a sentient machine that I've found remotely interesting is from Kai, but of course he doesn't believe that consciousness is separate from the brain (please correct me if I'm wrong Kai).
 
I think we have to explore the core idea that we can explain consciousness by creating something that mimics it to some degree.

Does this make sense?

There are programs that mimic mental arithmetic - both the time it takes as a function of complexity, and various common mistakes.

You can download a chess playing program from the internet. Never mind the likes of Deep Blue, my PC equipped with one of these will beat most people - including me.

There are a variety of software systems that can do algebra and calculus.

There are lots of simple pieces of software that help us with tasks like fixing spelling or syntax errors, translating between languages (clumsily), searching for information, etc.

Even very elementary programs - such as one that totals columns of figures - perform a task that used to be done by a conscious human!

A robot can ride a bike (there is a video worth viewing).

Even an old-fashioned doll can pee like a baby and emit a cry!

Are some or all of these conscious simply by virtue of the fact that they overlap human capabilities to some degree?

Let's take the most trivial example on that list - because if this way of arguing is valid for the more advanced examples, it surely should be true (maybe to a lesser degree) for the most trivial. Is it really true that dolls that cry and wet themselves tell us something about the consciousness of babies?

Maybe one way to explore this idea is to turn it round. If we didn't understand anything about computers, would we find out how they work by exploring human tasks that could have been performed by a computer?

Imagine trying to learn how computers work by watching people add up numbers, perform translations, ride a bike, type a document, etc.

David
 
I think we have to explore the core idea that we can explain consciousness by creating something that mimics it to some degree.

Does this make sense?

No. Not if you believe the brain does not produce consciousness. Neither does the machine in this case. All it needs to do is channel it, as in the Brain/TV analogy. Though I agree it is an interesting approach to mimic something to better understand it.
 
Well, obviously I am inclined to believe that the Mars rover analogy is a better model of consciousness (I say Mars rover because we really need bi-directional information transfer, of course). However, these AI efforts aren't designed to create some form of consciousness filter; they aim to create consciousness as such. You can't think of the machine as being explicable by either model - I don't think it is!

That is not to say it couldn't be done - someone might find a way to make a machine that did act as a consciousness filter, a fascinating if somewhat scary prospect - but building a machine implies putting parts together to correspond with some theory: if it works, it validates that theory; it can't work any other way.

Real AI would, I think, disprove the filter model of the brain.

I think my concern nowadays is that we can bring such vast amounts of computation and communication to bear on problems that we can end up with pseudo-AI - which might confuse the issue - something that looks intelligent just because you don't see how it is achieved. A sat-nav would be an example.

Take chess. People like to play this game because it is competitive, it seems to stretch their cognitive abilities, you can join chess clubs, etc etc. Now think about a chess program. It has been built out of the experiences of players who have studied the game deeply, and it ploughs through the "if he does that, then I can do this" scenarios with enormous speed, but it sure as hell doesn't enjoy the experience, or experience satisfaction at beating an opponent, or enjoy the social aspects of the game, or write books on the subject!
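The "if he does that, then I can do this" ploughing David describes is, in standard chess engines, a minimax search. A minimal sketch of the idea, using a toy game rather than chess (the `moves`, `apply`, and `evaluate` callables here are illustrative stand-ins, not any real engine's API):

```python
# Minimal minimax sketch: mechanically explore "if he does that, then I can
# do this" lines to a fixed depth, with no experience of the game at all.
def minimax(state, depth, maximizing, moves, apply, evaluate):
    options = moves(state, maximizing)
    if depth == 0 or not options:
        return evaluate(state)  # static score - no satisfaction involved
    if maximizing:
        return max(minimax(apply(state, m), depth - 1, False,
                           moves, apply, evaluate) for m in options)
    return min(minimax(apply(state, m), depth - 1, True,
                       moves, apply, evaluate) for m in options)

# Toy game: each player adds 1 or 2 to a running total; the maximizing
# player prefers lines that make the total large, the minimizer small.
score = minimax(0, 3, True,
                moves=lambda s, _: [1, 2],
                apply=lambda s, m: s + m,
                evaluate=lambda s: s)  # best line from 0 at depth 3 is 5
```

The point of the sketch is David's point: the search is exhaustive arithmetic over positions, nothing more.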

Some people like going to the gym, but building a machine that could lift the weights for us would be pointless, and wouldn't tell us anything about humans!

I have written software for a living for many years since I gave up science as such, and I am struck by the incredible difference between contrived mental tasks like chess and real mental tasks like software production - or indeed fixing a car, or building a house. We apply consciousness to everything we do. If someone changed the rules of chess, we could at least have a go at playing the new game - but try that with a chess program!

Let me take another example from elementary calculus. Suppose we learn to perform differentiation:

Differentiate with respect to x:

d/dx (a x^2 + b x) = 2a x + b

You definitely can devise a set of rules to perform differentiation, and create a program out of them. The results can be quite spectacular because you can use such a program to differentiate horrendously complicated expressions. However, does that constitute AI?
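Rules like these really can be mechanised directly. A minimal sketch of such a rule-based differentiator, using a made-up tuple encoding for expressions (not any real computer-algebra system's representation):

```python
# Rule-based symbolic differentiation over a tiny expression encoding:
# a number, the symbol 'x', or a tuple ('+', a, b), ('*', a, b),
# or ('^', 'x', n) for x raised to a constant power n.
def diff(e):
    if isinstance(e, (int, float)):   # d/dx c = 0
        return 0
    if e == 'x':                      # d/dx x = 1
        return 1
    op, a, b = e
    if op == '+':                     # sum rule
        return ('+', diff(a), diff(b))
    if op == '*':                     # product rule
        return ('+', ('*', diff(a), b), ('*', a, diff(b)))
    if op == '^' and a == 'x':        # power rule: d/dx x^n = n*x^(n-1)
        return ('*', b, ('^', 'x', b - 1))
    raise ValueError(f"don't know how to differentiate {e!r}")

def value(e, x):
    # Numerically evaluate an expression at a given x (to check results).
    if isinstance(e, (int, float)):
        return e
    if e == 'x':
        return x
    op, a, b = e
    if op == '+':
        return value(a, x) + value(b, x)
    if op == '*':
        return value(a, x) * value(b, x)
    return value(a, x) ** value(b, x)   # op == '^'

# d/dx (3x^2 + 5x) should behave like 6x + 5, e.g. value 17 at x = 2.
d = diff(('+', ('*', 3, ('^', 'x', 2)), ('*', 5, 'x')))
```

The program applies the textbook rules flawlessly, which is exactly what makes it a useful test case for the question that follows: it manipulates symbols with no grasp of what differentiation is for.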

Unlike chess, this isn't an arbitrary set of rules; people devised them for a purpose, and most students (maybe not all!) do glimpse the reason for those rules, and the purpose of the whole concept. Try teaching the concept of differentiation to a computer - that is a vastly different matter!

In other words, it is possible to cherry pick part of a problem that can be mechanised, but forget that humans who use that skill, use it in a context that is probably impossible to mechanise. However, it is awfully easy to forget that and imagine the computer with a magnified version of the human skill.

David
 
That is not to say it couldn't be done - someone might find a way to make a machine that did act as a consciousness filter, a fascinating if somewhat scary prospect - but building a machine implies putting parts together to correspond with some theory: if it works, it validates that theory; it can't work any other way.

Real AI would, I think, disprove the filter model of the brain.

But either way - whether the brain is producing consciousness itself, or whether it's only acting as a filter - there's no reason in principle why a machine couldn't do the same, is there?
 
Well maybe I should re-phrase that remark:

Real AI achieved without creating some sort of consciousness filter would, I think, disprove the filter model of the brain.

You need to realise that AI folk aren't thinking about consciousness filters - they are just trying to make gadgets!

David
 
David

Yes, I agree, that's not what they are trying to do, and any ideas about the physical mechanism of the "consciousness filter" are so speculative that simulating it directly isn't on the agenda in the short- to medium-term. (On the other hand, if one did believe in the filter idea, maybe the filtering mechanism could just be some kind of hugely complicated neural net - in which case attempts to make gadgets might stumble across it without trying to.)

But it just strikes me (and maybe others too) that in some quarters there's a determination that no manner of artificial entity could reproduce consciousness in the same way the human brain/body does. And it seems to me the filter idea implies the opposite - it implies that in principle an artificial entity could simulate the filtering mechanism, and therefore fulfil just the same function in relation to consciousness that the human brain/body does.
 
David

Yes, I agree, that's not what they are trying to do, and any ideas about the physical mechanism of the "consciousness filter" are so speculative that simulating it directly isn't on the agenda in the short- to medium-term. (On the other hand, if one did believe in the filter idea, maybe the filtering mechanism could just be some kind of hugely complicated neural net - in which case attempts to make gadgets might stumble across it without trying to.)

Well if you mean artificial neural nets, these are basically computational, and are often simulated by software. If you are talking about a machine with some gray matter inside it or some hardware specifically designed using a theory that does not yet exist, maybe that could provide the filter link!
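On the "basically computational" point: a dense neural-net layer reduces to weighted sums passed through a squashing function, which a few lines of ordinary code can reproduce. A minimal sketch (the weights are arbitrary illustration, not trained on anything):

```python
import math

def layer(inputs, weights, biases):
    # One dense layer: out_j = sigmoid(sum_i inputs[i]*weights[j][i] + biases[j]).
    # Nothing here but arithmetic - the "net" is ordinary computation.
    return [1.0 / (1.0 + math.exp(-(sum(i * w for i, w in zip(inputs, row)) + b)))
            for row, b in zip(weights, biases)]

# Arbitrary illustrative weights for a 2-input, 2-unit layer.
out = layer([1.0, 0.0],
            weights=[[2.0, -1.0], [0.5, 0.5]],
            biases=[0.0, -0.5])
```

Whether such arithmetic, scaled up enormously, could ever serve as a "filter link" is exactly the open question in this exchange.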

But it just strikes me (and maybe others too) that in some quarters there's a determination that no manner of artificial entity could reproduce consciousness in the same way the human brain/body does. And it seems to me the filter idea implies the opposite - it implies that in principle an artificial entity could simulate the filtering mechanism, and therefore fulfil just the same function in relation to consciousness that the human brain/body does.

I agree - there is a reasonable chance that at some time in the future this could be achieved, but obviously it would need some theoretical input as to how the filtering process could take place. People often muse over the idea that AI might become dangerous. BTW, I think it is far more likely that an AI-by-filtering machine (FAI) might be dangerous - who knows what it might contact! I guess you might even persuade a dead person to 'occupy' the machine - at least for a while!

It is conceivable that the link from consciousness to machine could work by biasing quantum transitions, but I can't begin to see what might feed the data back again - or is it all done by quantum observation?

My frustration is that all considerations of this sort are normally swept off the table, and AI is seen as purely algorithmic.

I think what you actually encounter is people who simply don't consider the FAI option, and who therefore state that AI can't reproduce consciousness.

David
 
David

I would only say that if there were a filtering process (which I don't really believe in, so my opinion may be worth nothing), I don't think there would be any reason to discount things that are "computational" or things that can be simulated by software.

I suspect that whatever grey matter can do can - in principle - be simulated by sufficiently sophisticated software.
 
Suppose medical technology advanced to the point where biological neurons could be replaced by artificial ones. If someone's neurons were replaced, one by one, at what point would the person no longer have consciousness?

Pat
 
You need to realise that AI folk aren't thinking about consciousness filters - they are just trying to make gadgets!

It is possible that certain technical advances could give rise to a consciousness filter, even though their developers did not intend that.
 
I don't really accept that. It is one thing to say that the brain has hidden communicative powers within itself, but quite another to attribute such things to constructed artefacts, where a group of people can tell you exactly how they work.

David
 
I find this very difficult to understand. Do you mean that the filtering mechanism is in principle impossible to understand? It almost sounds as though you mean that if we understood it, it would stop working!
 