A robot prepared for self-awareness

#41
I don't really accept that. It is one thing to say that the brain has hidden communicative powers within itself, but quite another to attribute such things to constructed artefacts, where a group of people can tell you exactly how they work.
I think it is possible that certain artifacts are so complex that they develop traits unknown to their developers, especially if they are artifacts capable of self-evolution.
 
#42
even if one ignores the hard problem, i think trying to assert the presence of consciousness using functional criteria is highly problematic, since the emergent functionality is a predictable result (or one of a set of predictable results) that follows directly, by a mappable process, from the initial function set. that a program can add to its own code (with possibly stochastic variables) may make for interesting new behaviour but does not indicate a form of consciousness. now if the robot produces a gödel-type result, that would be another matter. but it can't.
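To make the point above concrete, here is a toy sketch of my own (not from the thread, all names hypothetical): a program that "adds to its own code" by randomly growing its own pipeline of operations. However novel the behaviour looks, it remains a fully mappable function of the initial rules and the random seed.

```python
import random

# Toy illustration: a program that "adds to its own code" by appending
# randomly chosen operations to its own pipeline. With the RNG seed fixed,
# every "emergent" behaviour is a predictable, mappable consequence of the
# initial function set -- it stays an algorithm throughout.

OPS = {"inc": lambda x: x + 1, "dbl": lambda x: x * 2, "neg": lambda x: -x}

def self_extending_program(seed, steps, x=1):
    rng = random.Random(seed)
    pipeline = []                               # the program's growing "code"
    for _ in range(steps):
        pipeline.append(rng.choice(list(OPS)))  # the self-modification step
    for op in pipeline:                         # run the code it wrote
        x = OPS[op](x)
    return pipeline, x

# Same seed -> identical "novel" behaviour, run after run.
p1, r1 = self_extending_program(seed=42, steps=5)
p2, r2 = self_extending_program(seed=42, steps=5)
assert p1 == p2 and r1 == r2
```

The stochastic variables only select among a pre-enumerable set of outcomes; nothing non-computable ever enters the system.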
 
#43
True consciousness, the ability to think, is also the ability to conceptualize. It is a completely immaterial process involving forming pictures in the mind. It's not clear how any machine is supposed to accomplish this by pushing electrons around. Unless the electrons themselves are simply small conscious entities already that can form to create one big one. The information has to come from somewhere.
 
#44
True consciousness, the ability to think, is also the ability to conceptualize. It is a completely immaterial process involving forming pictures in the mind. It's not clear how any machine is supposed to accomplish this by pushing electrons around. Unless the electrons themselves are simply small conscious entities already that can form to create one big one. The information has to come from somewhere.
i think there are some statements here that are simply false -- congenitally blind people don't necessarily form pictures (and if you believe v. noratuk's NDE account, they know the difference). others are pure speculation -- nobody knows if consciousness is completely immaterial (given that it definitely interacts with the brain, it may be at least partially material). and others seem to reflect sloppy thinking -- it is equally unclear how we accomplish consciousness. plus, it's not clear what 'material' even means, given what we have known about QM for about a century now.

that said, this just appears to be a reminder that there is the hard problem, which everyone who discusses this subject knows about and knows is hard. my point is that if one is going to at least try to conceive of some sort of objective signs for consciousness (which is a subjective entity), then noting that a built-in self-complexification scheme operates successfully, as it was programmed to, is quite useless for that.

so i am kind of agreeing with you, but have something else to say beyond a restatement of the hard problem if we are evaluating machines. if the machine can produce a gödel-type result, that would give me moral pause about destroying it and throwing it into the trash bin, because it has transcended being a very sophisticated information-processing machine, i.e., a bunch of axioms + a stochastic algorithm. i am certainly not saying it is a necessary condition for consciousness (cats probably don't produce gödel-type results), but it may be sufficient, because one can argue that it requires genuine understanding.
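For concreteness (my gloss, not the poster's): a "gödel-type result" can be read as the system exhibiting the first incompleteness theorem -- producing, and recognizing the truth of, a sentence that its own formal machinery cannot prove:

```latex
% For any consistent, recursively axiomatized theory F strong enough for
% arithmetic, there is a sentence G_F ("this sentence is not provable in F"):
F \vdash \; G_F \leftrightarrow \neg\,\mathrm{Prov}_F(\ulcorner G_F \urcorner),
\qquad F \nvdash G_F, \qquad F \nvdash \neg G_F
```

A machine that is "a bunch of axioms + a stochastic algorithm" corresponds to such an F, and so cannot derive its own G_F; stepping outside F to see that G_F is true is the non-algorithmic move being asked for.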
 
#45
I think it is possible that certain artifacts are so complex that they develop traits unknown to their developers, especially if they are artifacts capable of self-evolution.
Obviously people could just throw stuff together to 'make' something which might, by pure chance, have any properties, but people constructing AI programs/gadgets have a specific idea in mind. They are no more likely to accidentally create a consciousness filter than they are to accidentally create a coffee grinder!

David
 
#46
Obviously people could just throw stuff together to 'make' something which might, by pure chance, have any properties, but people constructing AI programs/gadgets have a specific idea in mind. They are no more likely to accidentally create a consciousness filter than they are to accidentally create a coffee grinder!

David
But "they" are not in the dark because we know what a "consciousness filter" physically looks like - us! We have the plans.
 
Chris

#48
Obviously people could just throw stuff together to 'make' something which might, by pure chance, have any properties, but people constructing AI programs/gadgets have a specific idea in mind. They are no more likely to accidentally create a consciousness filter than they are to accidentally create a coffee grinder!
It depends what strategy they adopt, doesn't it? If they proceed by simulating the fundamental processes going on in the brain, isn't it quite likely that they will succeed in simulating the functions of the brain?
 
#49
i think there are some statements here that are simply false -- congenitally blind people don't necessarily form pictures (and if you believe v. noratuk's NDE account, they know the difference). others are pure speculation -- nobody knows if consciousness is completely immaterial (given that it definitely interacts with the brain, it may be at least partially material). and others seem to reflect sloppy thinking -- it is equally unclear how we accomplish consciousness. plus, it's not clear what 'material' even means, given what we have known about QM for about a century now.

that said, this just appears to be a reminder that there is the hard problem, which everyone who discusses this subject knows about and knows is hard. my point is that if one is going to at least try to conceive of some sort of objective signs for consciousness (which is a subjective entity), then noting that a built-in self-complexification scheme operates successfully, as it was programmed to, is quite useless for that.

so i am kind of agreeing with you, but have something else to say beyond a restatement of the hard problem if we are evaluating machines. if the machine can produce a gödel-type result, that would give me moral pause about destroying it and throwing it into the trash bin, because it has transcended being a very sophisticated information-processing machine, i.e., a bunch of axioms + a stochastic algorithm. i am certainly not saying it is a necessary condition for consciousness (cats probably don't produce gödel-type results), but it may be sufficient, because one can argue that it requires genuine understanding.
We do know that consciousness is completely immaterial. An idea, any idea really, can have an infinite number of physical and imaginary representations, which can range in size from subatomic to the size of the universe, or have no size at all, or be a concept that defies any true material representation. There is no way to create something like that out of matter.
 
#50
even if one ignores the hard problem, i think trying to assert the presence of consciousness using functional criteria is highly problematic, since the emergent functionality is a predictable result (or one of a set of predictable results) that follows directly, by a mappable process, from the initial function set. that a program can add to its own code (with possibly stochastic variables) may make for interesting new behaviour but does not indicate a form of consciousness. now if the robot produces a gödel-type result, that would be another matter. but it can't.
Yes - an AI programmer clearly has some sort of 'behaviour' he/she would like his program to produce - let's say he wants it to 'discover' Pythagoras' theorem. His task then (though he won't see it that way) is to obscure the fact that this was his aim, and make the ultimate discovery seem more natural!

Does anyone know of anything of any real interest produced by random computer programs?

David
It depends what strategy they adopt, doesn't it? If they proceed by simulating the fundamental processes going on in the brain, isn't it quite likely that they will succeed in simulating the functions of the brain?
Well, not really - would a simulation of a radio behave like a radio? Not unless you picked up the radio waves, digitised them, and fed them in as data! In other words, if you don't understand at least something of the science of what is going on, you can't expect it to happen in a simulation!

David
 
#51
Yes - an AI programmer clearly has some sort of 'behaviour' he/she would like his program to produce - let's say he wants it to 'discover' Pythagoras' theorem. His task then (though he won't see it that way) is to obscure the fact that this was his aim, and make the ultimate discovery seem more natural!

Does anyone know of anything of any real interest produced by random computer programs?

David

Well, not really - would a simulation of a radio behave like a radio? Not unless you picked up the radio waves, digitised them, and fed them in as data! In other words, if you don't understand at least something of the science of what is going on, you can't expect it to happen in a simulation!

David
I don't know if it is possible or not, but wouldn't programming true AI be more about simulating the processes that produce consciousness (such as simulating brain processes) and then proceeding to teach it in a similar way to how we teach other conscious beings, as opposed to programming it specifically to accomplish particular tasks? Perhaps a mix of both?
 
Chris

#52
Well, not really - would a simulation of a radio behave like a radio? Not unless you picked up the radio waves, digitised them, and fed them in as data! In other words, if you don't understand at least something of the science of what is going on, you can't expect it to happen in a simulation!
I think that's a spectacularly bad example. If you produced an exact physical copy of a radio, of course it would behave like a radio.
 
#53
We do know that consciousness is completely immaterial. An idea, any idea really, can have an infinite number of physical and imaginary representations, which can range in size from subatomic to the size of the universe, or have no size at all, or be a concept that defies any true material representation. There is no way to create something like that out of matter.
i'm sorry, but this is wrong. material just means its state can be characterized entirely by the physical variables of the system. the platonic realm is obviously immaterial; nobody argues against that. the interaction between the human brain (and mind) and the platonic realm of ideas is a non-trivial matter. you can't conflate one with the other. "infinity, therefore immaterial" is not an argument. a mindless symbolic algebra program can handle lots of infinities, and even the lowly hydrogen atom has an infinite spectrum of states. you are not going to prove anything like this.
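As a hedged illustration of the "mindless algebra program" point (my example, using the SymPy library, not anything from the thread): a symbolic algebra system handles infinities by blind term rewriting, with no understanding involved anywhere.

```python
# SymPy manipulating infinities purely by rule-based symbol pushing.
from sympy import Sum, limit, oo, pi, symbols

n, x = symbols('n x')

# an infinite series evaluated exactly (the Basel problem):
basel = Sum(1/n**2, (n, 1, oo)).doit()   # pi**2/6

# a limit at infinity:
lim = limit(1/x, x, oo)                  # 0

assert basel == pi**2 / 6
assert lim == 0
```

The program "proves" facts about an infinite sum without representing, let alone grasping, infinity.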

i think a little humility is in order: if we don't know, we should say we don't know. what we understand as material changes by definition. there are probably important aspects of reality we have not discovered yet. is dark energy material? the EM field concept was ridiculed as immaterial in the past; now we know better and can't call it immaterial anymore. maybe there is some mind field, and we may yet discover how it behaves and interacts with other fields. the demarcation moves around. plus, as we learn more, the pattern always seems to be that greater understanding comes together with unification, not segregation. look at the history of the EM field. look at the standard model. so i think fixation on material vs. immaterial is myopic. one can suppose that the entire universe is immaterial. whatever.

i think you are missing my point entirely. it's not about rehashing the same tired arguments about consciousness and material/immaterial, etc. nobody knows, and neither do you, and it doesn't matter. all i was saying is that there is extreme difficulty with building a strong AI. unless the system can produce non-computable results, which is a very high (maybe impossible) bar, it seems hopeless to prove it is conscious. if you are satisfied that, since consciousness is entirely immaterial, nothing produced by mankind (other than by fornicating) can be conscious no matter what, that doesn't make much sense. human brains are material, after all...
 
#55
I think that's a spectacularly bad example. If you produced an exact physical copy of a radio, of course it would behave like a radio.
But only if you already knew that there were electromagnetic waves impinging on the radio carrying information! You have to know there is information coming from outside - as well as the medium - otherwise the simulation will fail abysmally!

David
 
#56
i'm sorry, but this is wrong. material just means its state can be characterized entirely by the physical variables of the system. the platonic realm is obviously immaterial; nobody argues against that. the interaction between the human brain (and mind) and the platonic realm of ideas is a non-trivial matter. you can't conflate one with the other. "infinity, therefore immaterial" is not an argument. a mindless symbolic algebra program can handle lots of infinities, and even the lowly hydrogen atom has an infinite spectrum of states. you are not going to prove anything like this.

i think a little humility is in order: if we don't know, we should say we don't know. what we understand as material changes by definition. there are probably important aspects of reality we have not discovered yet. is dark energy material? the EM field concept was ridiculed as immaterial in the past; now we know better and can't call it immaterial anymore. maybe there is some mind field, and we may yet discover how it behaves and interacts with other fields. the demarcation moves around. plus, as we learn more, the pattern always seems to be that greater understanding comes together with unification, not segregation. look at the history of the EM field. look at the standard model. so i think fixation on material vs. immaterial is myopic. one can suppose that the entire universe is immaterial. whatever.

i think you are missing my point entirely. it's not about rehashing the same tired arguments about consciousness and material/immaterial, etc. nobody knows, and neither do you, and it doesn't matter. all i was saying is that there is extreme difficulty with building a strong AI. unless the system can produce non-computable results, which is a very high (maybe impossible) bar, it seems hopeless to prove it is conscious. if you are satisfied that, since consciousness is entirely immaterial, nothing produced by mankind (other than by fornicating) can be conscious no matter what, that doesn't make much sense. human brains are material, after all...
Thanks for your thoughtful answer. I had to check my initial impulses at the door, a kind of reflex to dismiss it as skeptical nonsense, because you've taken the time and effort to think this through. It is not nonsense; you make interesting and excellent points. Nevertheless, I respectfully disagree. First, you can't hold up an algebra program as an argument, because it falls under the umbrella of consciousness. It is a form of idea that does not exist outside of human consciousness, so of course it is capable of infinity. And while a hydrogen atom may have an infinite spectrum of states, it has no ability to self-arrange. It is driven by outside forces which determine its state. Consciousness, though, can self-arrange, and infinity is an outcome of this.

If you don't look at consciousness this way then, in my opinion, you have no chance of realizing a conscious AI. I think that you have to create the conditions for consciousness to take hold, rather than trying to accomplish it through brute-force computing. I think this can be done with a machine, but that it would look a lot different than what we have now. We have to create a machine that allows consciousness to "move in," so to speak. Artificial neural networks combined with a method of sensory input seem to be the most promising answer at the moment. We will need a far different technology to make that workable, though.
 
#57
i'm sorry, but this is wrong. material just means its state can be characterized entirely by the physical variables of the system. the platonic realm is obviously immaterial; nobody argues against that. the interaction between the human brain (and mind) and the platonic realm of ideas is a non-trivial matter. you can't conflate one with the other. "infinity, therefore immaterial" is not an argument. a mindless symbolic algebra program can handle lots of infinities, and even the lowly hydrogen atom has an infinite spectrum of states. you are not going to prove anything like this.
Algebra programs are an interesting case in point because these started out as AI projects but soon evolved into something quite different. For example, a modern algebra program doesn't do symbolic integration in any way akin to that used by a human; it uses the Risch algorithm, or elaborations on that concept.

AI was not a success - it was an idea to escape from! (I am not disagreeing with you!).
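A hedged sketch of that point (my example, using the SymPy library; SymPy combines Risch-style algorithms with pattern tables, nothing like human technique):

```python
# Symbolic integration done the machine's way.
from sympy import exp, integrate, symbols

x = symbols('x')

# The Gaussian has no elementary antiderivative; the algorithm returns
# an answer in terms of the special function erf, sqrt(pi)*erf(x)/2.
antideriv = integrate(exp(-x**2), x)

# Mechanical check: the result differentiates back to the integrand.
assert (antideriv.diff(x) - exp(-x**2)).simplify() == 0
```

The program decides algorithmically that no elementary antiderivative exists, something a human discovers by entirely different reasoning.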
i think a little humility is in order: if we don't know, we should say we don't know. what we understand as material changes by definition. there are probably important aspects of reality we have not discovered yet. is dark energy material? the EM field concept was ridiculed as immaterial in the past; now we know better and can't call it immaterial anymore. maybe there is some mind field, and we may yet discover how it behaves and interacts with other fields. the demarcation moves around. plus, as we learn more, the pattern always seems to be that greater understanding comes together with unification, not segregation. look at the history of the EM field. look at the standard model. so i think fixation on material vs. immaterial is myopic. one can suppose that the entire universe is immaterial. whatever.
As I see it, any physical system that can be described by a set of equations (augmented by a random number generator) isn't going to capture mental processes. I am not sure I can prove that, but it seems to me obvious - well Roger Penrose has at least attempted to prove something similar.

To me, this seems to rule out explaining mind by anything that looks anything like physics - except perhaps invoking physics to explain some kind of a TV analogy of consciousness (which I think should be renamed the Mars-rover analogy, because we need bi-directional communication).

David
 

#58
But only if you already knew that there were electromagnetic waves impinging on the radio carrying information! You have to know there is information coming from outside - as well as the medium - otherwise the simulation will fail abysmally!
If you built an exact physical replica of a radio, it would function as a radio regardless of whether you knew how or why it functioned.
 
#59
On the subject of a replacement for the Turing Test, I think we need something that demonstrates the shortcomings and vulnerability of the human system.

How about AI that is subject to suggestion and expectation - AI that could be hypnotised by a human hypnotist?
 
#60
Thanks for your thoughtful answer. I had to check my initial impulses at the door, a kind of reflex to dismiss it as skeptical nonsense, because you've taken the time and effort to think this through. It is not nonsense; you make interesting and excellent points. Nevertheless, I respectfully disagree. First, you can't hold up an algebra program as an argument, because it falls under the umbrella of consciousness. It is a form of idea that does not exist outside of human consciousness, so of course it is capable of infinity. And while a hydrogen atom may have an infinite spectrum of states, it has no ability to self-arrange. It is driven by outside forces which determine its state. Consciousness, though, can self-arrange, and infinity is an outcome of this.
i disagree with this. a symbolic algebra program does not fall under the umbrella of consciousness. it is nothing more than an algorithm (which can process infinities without understanding them, or understanding anything for that matter). i also submit to you that we do not know what is actually different between a quantum system "randomly" projecting down to a particular energy state from a superposition, and an (ostensibly) conscious entity making some decision. QM projection outcomes are random and not determined by any forces, so there is no way to distinguish random from conscious choice. i don't like it, but i must admit that is unfortunately the situation. not that i have any idea of what a random process really is, since it is acausal by definition, and although that description is sufficient mathematically, ontologically it is embarrassingly perplexing.

the point is that i think the only thing one can objectively use to discriminate a conscious process is non-computability. that's where AI has a huge problem, at least given where they are right now, because all they have at this point are algorithms. maybe very clever ones that can alter their own code and produce interesting behavior, but algorithms nonetheless.

If you don't look at consciousness this way then, in my opinion, you have no chance of realizing a conscious AI. I think that you have to create the conditions for consciousness to take hold, rather than trying to accomplish it through brute-force computing. I think this can be done with a machine, but that it would look a lot different than what we have now. We have to create a machine that allows consciousness to "move in," so to speak. Artificial neural networks combined with a method of sensory input seem to be the most promising answer at the moment. We will need a far different technology to make that workable, though.
you may be right about that. i am not smart enough to propose how to do it. but we can discuss a priori what we can look for to consider whether it was successful. unless the system can produce a gödel-type result, what can one possibly use as an indicator that it is conscious and not a p-zombie? btw, i would put the subjective transcendent aspects of the mind (like qualia) into the non-computable category as well. the problem is that their presence cannot be examined objectively, at all. anyway, we can just disagree.

As I see it, any physical system that can be described by a set of equations (augmented by a random number generator) isn't going to capture mental processes. I am not sure I can prove that, but it seems to me obvious - well Roger Penrose has at least attempted to prove something similar.
i agree with that. i am saying essentially the same thing as penrose. the implication is the same -- if we ever manufacture a conscious system, it probably cannot be what we currently understand to be a computer, i.e., an algorithm.

To me, this seems to rule out explaining mind by anything that looks anything like physics - except perhaps invoking physics to explain some kind of a TV analogy of consciousness (which I think should be renamed the Mars-rover analogy, because we need bi-directional communication).
at least by what physics looks like now. our physics is probably incomplete, and we can at first consider less ambitious goals than describing it entirely (including qualia, etc.). any lens- or filter-like model is still obliged to quantify the interface rules (obviously there are rules) and explain the binding problem, etc. i suspect that something very important will be understood at the quantum scale. it may not necessarily be some variant of penrose's quantum-gravity microtubule ideas; i say that just because that is the most important area in physics where we have a glaring gap. all the unitary machinery of quantum mechanics is self-contained and self-consistently evolves forever without a collapse ever taking place. collapse is a discontinuous, non-unitary, and totally ad-hoc process imposed by hand to compute what we need, totally divorced from the rest of the theory. so something fundamental is probably missing...
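That split between the self-contained unitary machinery and the bolted-on collapse rule can be sketched in a few lines (a toy of my own, not from the thread):

```python
import numpy as np

# Unitary evolution of a single qubit: deterministic, reversible,
# norm-preserving. "Collapse" is a separate, discontinuous rule
# imposed by hand to extract outcomes (the Born rule).

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate (unitary)

psi = np.array([1.0, 0.0])   # qubit starts in |0>
psi = H @ psi                # unitary step: now (|0> + |1>)/sqrt(2)
assert np.isclose(np.linalg.norm(psi), 1.0)    # unitarity preserves the norm

# the ad-hoc part: project to an outcome with Born-rule probabilities
probs = np.abs(psi) ** 2                       # [0.5, 0.5]
rng = np.random.default_rng(seed=1)
outcome = rng.choice(2, p=probs)
psi = np.eye(2)[outcome]                       # state replaced discontinuously
```

Nothing in the first half of the sketch ever produces the second half; the projection step is an extra rule grafted onto the formalism, which is exactly the gap being pointed at.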
 