A robot prepared for self-awareness

A year ago, researchers at Bielefeld University showed that their software endowed the walking robot Hector with a simple form of consciousness. Their new research goes one step further: they have now developed a software architecture that could enable Hector to see itself as others see it. "With this, he would have reflexive consciousness," explains Dr. Holk Cruse, professor at the Cluster of Excellence Cognitive Interaction Technology (CITEC) at Bielefeld University. The architecture is based on artificial neural networks. Together with his colleague Dr. Malte Schilling, Cruse published this new study in the online collection Open MIND, a volume from the MIND Group, a group of philosophers and other scientists studying the mind, consciousness, and cognition.


Both biologists are involved in further developing and enhancing the walking robot Hector's software. The robot is modelled on a stick insect. How Hector walks and deals with obstacles in its path was first demonstrated at the end of 2014. Next, Hector's extended software will be tested in a computer simulation. "What works in the computer simulation must then, in a second phase, be transferred over to the robot and tested on it," explains Cruse. Schilling and Cruse are investigating to what extent various higher-level mental states, for example aspects of consciousness, may develop in Hector with this software – even though these traits were not specifically built into the robot beforehand. The researchers speak of "emergent" abilities, that is, capabilities that suddenly appear or emerge.

Until now, Hector has been a reactive system: it reacts to stimuli in its surroundings. Thanks to the software program "Walknet," Hector can walk with an insect-like gait, and another program called "Navinet" may enable the robot to find a path to a distant target. The two researchers have also developed the software expansion "reaCog," which is activated when the other two programs are unable to solve a given problem. This expanded software enables the robot to simulate "imagined behaviour" that may solve the problem: instead of just automatically completing a pre-determined operation, it first looks for new solutions and evaluates whether a candidate action makes sense. Being able to perform imagined actions is a central characteristic of a simple form of consciousness.
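The control scheme described here – react when a pre-determined rule applies, and fall back to internally simulated "imagined behaviour" only when the reactive layer gets stuck – can be sketched roughly as follows. This is a minimal illustration under stated assumptions; every class, method, and stimulus name is hypothetical and not taken from the actual Walknet/Navinet/reaCog code:

```python
class ReactiveController:
    """Maps stimuli directly to pre-determined actions, in the spirit of Walknet."""
    def __init__(self, rules):
        self.rules = rules  # stimulus -> action lookup table

    def react(self, stimulus):
        return self.rules.get(stimulus)  # None when no rule applies


class CognitiveExpansion:
    """Invoked only when the reactive layer fails, as reaCog is described."""
    def __init__(self, candidate_actions, simulate):
        self.candidate_actions = candidate_actions
        self.simulate = simulate  # internal model: (stimulus, action) -> bool

    def plan(self, stimulus):
        # "Imagined behaviour": test each candidate internally and return
        # the first one whose predicted outcome solves the problem.
        for action in self.candidate_actions:
            if self.simulate(stimulus, action):
                return action
        return None


def control_step(reactive, cognitive, stimulus):
    action = reactive.react(stimulus)
    if action is None:                     # reactive layer is stuck
        action = cognitive.plan(stimulus)  # fall back to imagined behaviour
    return action


# Toy example: walking is reactive; an obstacle forces internal simulation.
reactive = ReactiveController({"clear_path": "walk_forward"})
cognitive = CognitiveExpansion(
    ["turn_left", "step_higher"],
    simulate=lambda stimulus, action: (stimulus, action) == ("obstacle", "step_higher"),
)
print(control_step(reactive, cognitive, "clear_path"))  # walk_forward
print(control_step(reactive, cognitive, "obstacle"))    # step_higher
```

The point of the design is that the planning layer is never consulted while the cheap reactive rules suffice, which matches the article's description of reaCog as an expansion activated only on failure.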

In their previous research, both CITEC researchers had already determined that Hector's control system could adopt a number of higher-level mental states. "Intentions, for instance, can be found in the system," explains Malte Schilling. These "inner mental states," such as intentions, make goal-directed behaviour possible, which for example may direct the robot to a certain location (like a charging station). The researchers have also identified how properties of emotions may show up in the system. "Emotions can be read from behaviour. For example, a person who is happy takes more risks and makes decisions faster than someone who is anxious," says Holk Cruse. This behaviour could also be implemented in the control model reaCog: "Depending on its inner mental state, the system may adopt quick, but risky solutions, and at other times, it may take its time to search for a safer solution."
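The mood-dependent trade-off Cruse describes – quick, risky decisions in one state, slower, safer ones in another – can be caricatured in a few lines. This is a hypothetical sketch only; the function, mood labels, and risk threshold are illustrative and not part of reaCog:

```python
def choose_action(options, mood):
    """Pick an action from a list of (name, risk) pairs, risk in [0, 1].

    A "happy" state decides fast and tolerates risk; an "anxious" state
    deliberates over every option and picks the safest one.
    """
    if mood == "happy":
        # Quick and risk-tolerant: take the first option under a high risk ceiling.
        for name, risk in options:
            if risk <= 0.8:
                return name
        return None
    # Anxious: scan all options and choose the one with minimal risk.
    return min(options, key=lambda option: option[1])[0]


options = [("jump_the_gap", 0.7), ("take_the_detour", 0.1)]
print(choose_action(options, "happy"))    # jump_the_gap
print(choose_action(options, "anxious"))  # take_the_detour
```

The same list of options yields different behaviour depending only on the internal state, which is the sense in which "emotions can be read from behaviour" in the control model.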


To examine which forms of consciousness are present in Hector, the researchers rely in particular on psychological and neurobiological definitions. As Holk Cruse explains, "A human possesses reflexive consciousness when he not only can perceive what he experiences, but also has the ability to experience that he is experiencing something. Reflexive consciousness thus exists if a human or a technical system can see itself 'from outside of itself,' so to speak."

In their new research, Cruse and Schilling show a way in which reflexive consciousness could emerge. "With the new software, Hector could observe its inner mental state – to a certain extent, its moods – and direct its actions using this information," says Malte Schilling. "What makes this unique, however, is that with our software expansion, the basic faculties are prepared so that Hector may also be able to assess the mental state of others. It may be able to sense other people's intentions or expectations and act accordingly." Cruse explains further: "The robot may then be able to 'think': What does this subject expect from me? And then it can orient its actions accordingly."

Cruse and Schilling's study is part of the online publication "Open MIND." This approximately 2,000-page collection marks the 10-year anniversary of the MIND Group and contains 39 original articles from scientists in the fields of philosophy, psychology, and neuroscience. Dr. Thomas Metzinger of the University of Mainz is the initiator and co-editor of the volume. The collection can be accessed online for free at
I wonder what future generations of this robot will be able to do?
 

This is the original paper: http://open-mind.net/papers/mental-states-as-emergent-properties-from-walking-to-consciousness .

"In contrast to most related approaches, we do not take consciousness as our point of departure, but rather aim, firstly, to construct a system that has basic properties of a reactive system. In a second step, this system will be expanded and will gain cognitive properties in the sense of being able to plan ahead. ... The idea behind this approach is that higher-level properties may arise as emergent properties, i.e., may occur without requiring explicit implementation of the phenomenon under examination but instead arise from the cooperation of lower-level elements."

This is the old idea that if you just can't understand the nature of consciousness so as to actually design a conscious AI system, then you might be able to develop a robotic system that mimics the evolution of primitive animal nervous systems and bodies (beginning with insects in this case) in such a way that properties of consciousness hopefully start to appear by themselves. This is assumed to be through a process of emergence in a complex stimulus/response mechanical system that develops internal simulations of its environment, predicts future interactions, and learns from its experience to reach for various internally-directed goals through the use of artificial neural networks. In other words, some degree of consciousness and self-awareness is supposed to "emerge" automatically in these artificial mechanical organisms through the development of certain aspects of cognition, all without the designers of the system actually having to understand consciousness and self-awareness. The designers assume that primitive cognition can amount to primitive consciousness.

All this comprises the fallacy of "emergentism". It's a fallacy because mind, consciousness and self-awareness belong to a fundamentally different level of existence than physical "things" and their interactions. Only physical things and their interactions can emerge from the interactions of things. The level of "things" and their interactions includes all neurons and their synaptic biological and chemical interactions.

From a good source on this (http://ncu9nc.blogspot.com/2013/08/consciousness-cannot-be-emergent.html):
"Subjective experience, which cannot be measured objectively, cannot be the product of fundamentally different objective measurable phenomena such as neuronal activity in the brain. If you study a lump of brain cells, neither the laws of physics nor any biochemical reactions can explain why subjective experiences feel the way they do. Subjective experiences are known only in terms of subjective experience, not in terms of mathematics, or molecular models, or physics, or chemistry, or biology, or psychology, or sociology. Red looks red. Physics can tell you what wavelengths of light look red, and chemistry can tell you how light is sensed by the retina, and neurology can tell you how the signals from the optic nerve are processed by the brain, but none of that will ever tell a colorblind person what red looks like. Consciousness and physical processes are fundamentally different things."
 
You give me the impression you are opposed to science creating self awareness. Is my impression correct?
 

No - I'm not opposed to it, it's just that it is not possible in my opinion. For many reasons I don't believe that science will ever be able to create consciousness and self awareness - it's a science fiction mirage that perpetually recedes in front of the AI researchers.

This really boils down to the debate between proponents and skeptics over the reality of psi, ESP and the afterlife. If these things exist, as empirical evidence indicates, consciousness can't be the result of physical processes in the brain, and therefore consciousness can't be an emergent property of physical brain processes. Also, this implies that no artificial neural networks or other AI technologies will ever be able to create consciousness (even nonhuman).

There is also the philosophical argument against emergence, from the fundamentally different natures of physical things and consciousness. This was covered in the last post.

I guess we'll see. So far AI research has only made baby steps on the periphery of the main problems – for instance, on natural language understanding and conversation – with no sign on the horizon of any sort of breakthrough.
 

Why do you think the existence of psi and ESP would imply that "consciousness can't be the result of physical processes in the brain"?
 
That's what I'd like to know. Every time I've read this from someone, there are always missing details; in other words, there's no breadcrumb trail to follow in the reasoning process leading to such a conclusion.
 
I can give what I think is a reason. I suspect that life structures are arranged in a nested fashion, from within, during the course of evolution, by means of a principle that is as much "experiential" as it ever was "physical." The key issue in that case comes down to whether any "emulations" of that process are in principle possible, and I am inclined to doubt it. We could not arrange "from without" a process that must self-induce "from within." It therefore comes to the crunch, imo, of whether this selfsame process can be "induced" in materials that thus far, in our cosmos, have not been "lifelike."

That question can't be answered (though again, I am inclined to doubt it). Once again though, even if this happens, in the scenario described above you would have the actual life process again, and not "artificial life." In other words, I don't think a substitute is possible.
 
Why do you think the existence of psi and ESP would imply that "consciousness can't be the result of physical processes in the brain"?
"Psi" typically refers to a group of phenomena that operate more freely within – if not completely unbound by – space and time than the usual types of communication we have built with technology (e.g. precognition, remote viewing, etc.). It typically also includes mediumistic/afterlife communication.

While the former may not entirely invalidate the mind=brain paradigm, it would certainly make it even more impenetrable than it already is.

The latter decisively falsifies it.
 
No - I'm not opposed to it, it's just that it is not possible in my opinion. For many reasons I don't believe that science will ever be able to create consciousness and self awareness - it's a science fiction mirage that perpetually recedes in front of the AI researchers.

What if some physical structures like microtubules turn out to be the "receiver" of fields of consciousness? Couldn't we theoretically create such a receiver in a robot brain? Apparently there are spiritual entities or fields of consciousness out there milling about as though they were at a busy arcade waiting for a free controller to engage in this reality. If we were to create a robot with these receivers couldn't we possibly open up a controller for one of these consciousness fields to take control?
 
Fascinating, isn't it? I thought the same, but I am afraid that it probably won't be that simple...
Microtubules exist in all sorts of cells, not just in brain cells. And nerve cells are found all over the place in our bodies... suggesting consciousness is also distributed and not just confined to the brain, even if the brain is the main "headquarters."

I find it unlikely we'll be able to do what you suggest with robots as defined by today's standards, i.e. non-biological machines.
 

Yep, I've heard lots of stories of people getting organ transplants and suddenly assuming certain personality or memory elements of the identity of the donor. A lady I know from work has seen this happen with her father who had a heart transplant. She said he's just not the same person since the transplant - his personality changed.

I think throwing artificial microtubules into an AI brain is a bit like flying a kite in a thunderstorm to try and capture the electric force – we don't exactly understand the forces we're dealing with or how they're conducted, but we could probably bet on conducting something.
 

At the time I was both fascinated and perturbed by Bandyopadhyay's suggestion at his 2010 Google tech talk that he wanted to talk to the Microtubule...
 
Yep, I've heard lots of stories of people getting organ transplants and suddenly assuming certain personality or memory elements of the identity of the donor. A lady I know from work has seen this happen with her father who had a heart transplant. She said he's just not the same person since the transplant - his personality changed.

To be fair, I've seen lots of people have personality changes after surviving a medical crisis, without having organs transplanted. I think you'd be hard-pressed to separate the effects of a dramatic experience (let alone the side-effects of the anti-rejection drugs) from the effects of a small portion of novel tissue in the body. People love the story, though.

Linda
 
This is true; however, reportedly there are not just random changes, but changes that correspond to specific traits of the donor such as having never liked jazz until post-op and then finding out the donor was a jazz musician. I seem to recall some even report having memories from the donor.
 
Oh, well that proves it then. Nobody picks up an interest in jazz willingly. ;)

Linda
 
"Organ transplant awareness" does seem, at least, to be an interesting possibility though. Difficult to work with experimentally, as even very few transplantees seem to have these "memories." Still, I wouldn't want to dismiss it outright. If biological systems are nested experiential systems, what does it mean to suddenly find part of your physiology "cross pollinated" with something that was previously part of another nested system, with its own history and perhaps memory? Sometimes new questions are what leads to new discovery.
 
"Psi" typically defines a group of phenomena that operate more freely, if not completely unbound, of space and time than the usual types of communication we have built with technology (e.g.. precognition, remote viewing etc...) It typically also includes mediumistic / afterlife communication.

While the former may not entirely invalidate the mind=brain paradigm, it would certainly make it even more impenetrable than what it already is.

The latter decisively falsifies it.

I know a lot of people here think it's very unlikely an artificial brain could exhibit ESP, but I can't see that there would be any kind of logical contradiction involved in that, as nbtruthman seemed to be implying.

I also don't think that (as Hurmanetar's comment suggests) a lot of people have really thought through the implications of the "brain as a transceiver" concept. Namely, that an artificial physical system - whether composed of biological or non-biological components - could be conscious in just the same way that we are, if it was capable of simulating the relevant transception processes. How would that be different in practice from the abhorred idea of a conscious artificial intelligence?
 
"Organ transplant awareness" does seem, at least, to be an interesting possibility though. Difficult to work with experimentally, as even very few transplantees seem to have these "memories." Still, I wouldn't want to dismiss it outright. If biological systems are nested experiential systems, what does it mean to suddenly find part of your physiology "cross pollinated" with something that was previously part of another nested system, with its own history and perhaps memory? Sometimes new questions are what leads to new discovery.
I dunno. Don't you suspect that like most everything else, it will begin to disappear upon careful inspection?

Linda
 
I'm just saying. I don't have an allegiance to the idea or anything. But perhaps it is not impossible? Nature is still capable of surprising us.
 
I can give what I think is a reason. I suspect that life structures are arranged in a nested fashion, from within, during the course of evolution, by means of a principle that is as much "experiential" as it ever was "physical." The key issue in that case comes down to whether any "emulations" of that process are in principle possible, and I am inclined to doubt it. We could not arrange "from without" a process that must self-induce "from within." It therefore comes to the crunch, imo, of whether this selfsame process can be "induced" in materials that this far, in our cosmos, have not been "lifelike."

All you're saying is that this may be impossible because nature is so ... I do not know if artificial consciousness is possible, but at least I did not invent incomprehensible things ...
 