Decoding 'the Most Complex Object in the Universe'

If you measure the number of transistors that comprise the internet, the number must be far greater than 10^11, because there are (say) a billion computers on the planet (I don't know the exact figure), each with its own CPU packed with transistors!

Perhaps that raises the question as to how anyone would know if the internet were conscious! I would argue you have to have some viable hypothesis (or maybe several) to begin to attack the problem - to even know what you are looking for. Sorry, but I don't expect much out of this project!

David
 
The universe could be harboring much more complex brains or systems than our brain. Who knows if ET and his big round head doesn't pack more firing power than good ole homo sap.
 
If you measure the number of transistors that comprise the internet, the number must be far greater than 10^11, because there are (say) a billion computers on the planet (I don't know the exact figure), each with its own CPU packed with transistors!
Right, but how many transistors does it take to emulate a neuron? Here is a circuit with 400 transistors that emulates one synapse:

http://web.mit.edu/newsoffice/2011/brain-chip-1115.html

~~ Paul
 
Well, a modern CPU chip has approx 10^9 transistors, so with approx 10^9 such chips in PCs connected to the internet, the total number is going to comfortably exceed the complexity of the brain, even if you do need 400 transistors per synapse and (say) 2000 synapses per neuron.
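
A quick back-of-the-envelope check, using those figures plus the usual textbook guess of roughly 10^11 neurons in a brain (all of these numbers are rough orders of magnitude, obviously):

    # Rough order-of-magnitude comparison using the figures quoted above.
    transistors_per_chip = 1e9      # modern CPU, as stated above
    chips_online = 1e9              # guessed number of internet-connected PCs
    internet_transistors = transistors_per_chip * chips_online           # ~1e18

    neurons_in_brain = 1e11         # common textbook estimate, not from this thread
    synapses_per_neuron = 2000      # figure used above
    transistors_per_synapse = 400   # from the MIT synapse-chip article
    brain_equivalent = neurons_in_brain * synapses_per_neuron * transistors_per_synapse  # ~8e16

    print(f"Internet: {internet_transistors:.0e} transistors")
    print(f"Brain at 400 transistors/synapse: {brain_equivalent:.0e}")
    print(f"Ratio: {internet_transistors / brain_equivalent:.1f}x")      # roughly a factor of ten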

Furthermore, people were more than happy to create neural nets using a much simpler model of the neuron, one which would simply sum and threshold the inputs from the various synapses. The point is, it is very hard to see what the extra complexity gives you. It is also hard to see how you would detect consciousness in a complex system. I am confident of one thing, though: you don't have the slightest qualms about forcing your computer to shut down - possibly at short notice if you are on a flight - so you don't internalise your belief that computers can be conscious in any way at all!
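
For what it's worth, the 'much simpler model' I have in mind is literally just something like this - a toy sum-and-threshold neuron with made-up weights and threshold:

    def simple_neuron(inputs, weights, threshold):
        """Fire (return 1) if the weighted sum of synaptic inputs crosses the threshold."""
        total = sum(x * w for x, w in zip(inputs, weights))
        return 1 if total >= threshold else 0

    # Made-up example: three synapses, two of them active.
    print(simple_neuron([1, 0, 1], [0.6, 0.4, 0.5], threshold=1.0))   # prints 1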

David
 
Furthermore, people were more than happy to create neural nets using a much simpler model of the neuron, one which would simply sum and threshold the inputs from the various synapses. The point is, it is very hard to see what the extra complexity gives you. It is also hard to see how you would detect consciousness in a complex system. I am confident of one thing, though: you don't have the slightest qualms about forcing your computer to shut down - possibly at short notice if you are on a flight - so you don't internalise your belief that computers can be conscious in any way at all!
I doubt that computers are conscious, at least not in a way similar to humans. I suspect it requires certain kinds of organization. For example, it might require specific kinds of feedback loops.

~~ Paul
 
The problem is, the internet does act as a conduit for PSI, because you can get feedback through this channel. So wouldn't that muddy the waters a bit regarding the net being conscious?
 
CNN is airing an Anderson Cooper special on NDEs tonight at 6pm EST. It should be very interesting. The interviewer for Mary Neal seemed more than a bit skeptical, but we'll see tonight.

John Graden
Author
"NDEs: Doctors and Scientists Go On the Record About God, Heaven, and the Afterlife"
 
I doubt that computers are conscious, at least not in a way similar to humans. I suspect it requires certain kinds of organization. For example, it might require specific kinds of feedback loops.

~~ Paul
OK - but given a suitably large computer, you can have any feedback loop you like in software. So are you saying that you doubt that computers can be conscious regardless of the software running? Because if so, I don't see how you can make the type of feedback loops the decisive issue.
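
To be clear, by a feedback loop in software I mean nothing more exotic than feeding each output straight back in as the next input - e.g. this toy loop:

    # Toy software feedback loop: each step's output becomes the next step's input.
    state = 1.0
    for step in range(5):
        state = 0.5 * state + 1.0   # output fed back as input
        print(step, state)

Any wiring of loops you care to draw can be expressed that way, given enough memory and patience.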

My feeling is that computers have exposed a fascinating distinction between what is possible in a purely physical way, and what many people think is happening in the brain, where something non-physical must be playing a part, and playing to vastly different rules from physical matter.

It is interesting to pursue the whole issue of whether a computer can or cannot become conscious because if a computer can't achieve it, that seems to rule out just about any physical system also - because a computer can simulate a physical system. Whether or not Penrose's proof from Gödel's theorem is totally valid, I think it points the way to what is wrong with the idea of physical consciousness - that the rules of ordinary physical interactions just don't generate experience (qualia) - just as Newton's laws don't give you electromagnetic effects. You can run Newton's laws and they work very nicely, but only if it is valid to neglect electromagnetism.

David
 
It is interesting to pursue the whole issue of whether a computer can or cannot become conscious because if a computer can't achieve it, that seems to rule out just about any physical system also - because a computer can simulate a physical system.

Well, Penrose himself would deny that premise, as he tries to find a non-computational physics in his books.
 
OK - but given a suitably large computer, you can have any feedback loop you like in software. So are you saying that you doubt that computers can be conscious regardless of the software running? Because if so, I don't see how you can make the type of feedback loops the decisive issue.
Oh, I think computers can be conscious. They just aren't now. It's possible, however, that they might require some biological modules.

My feeling is that computers have exposed a fascinating distinction between what is possible in a purely physical way, and what many people think is happening in the brain, where something non-physical must be playing a part, and playing to vastly different rules from physical matter.
Why? Who has yet claimed to understand consciousness well enough to program it?

It is interesting to pursue the whole issue of whether a computer can or cannot become conscious because if a computer can't achieve it, that seems to rule out just about any physical system also - because a computer can simulate a physical system. Whether or not Penrose's proof from Gödel's theorem is totally valid, I think it points the way to what is wrong with the idea of physical consciousness - that the rules of ordinary physical interactions just don't generate experience (qualia) - just as Newton's laws don't give you electromagnetic effects. You can run Newton's laws and they work very nicely, but only if it is valid to neglect electromagnetism.
Assuming that you are allowing any physical mechanism in the computer that is to be conscious, then, indeed, it ought to be able to be conscious. We are also assuming here that hypercomputation is not physically possible.

~~ Paul
 
Oh, I think computers can be conscious. They just aren't now. It's possible, however, that they might require some biological modules.
I like the way you hedge your bets - what exactly do you reckon those biological modules might do - communicate with a non-physical realm perhaps?

Why? Who has yet claimed to understand consciousness well enough to program it?
It is trivial to program a computer so that every time you press a key, it prints "Ouch - that hurt!"
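
Something like the following would do it (a throwaway sketch - it reacts to the Enter key rather than to every keypress, but the point stands):

    # A "suffering" computer in three lines: every Enter produces the same
    # canned complaint, with nothing remotely like pain behind it.
    while True:
        input()
        print("Ouch - that hurt!")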

A computer simply processes input and generates output (and a little heat) - and as that example illustrates, being conscious isn't about what output is generated.

David
 
I like the way you hedge your bets - what exactly do you reckon those biological modules might do - communicate with a non-physical realm perhaps?
Some quantum mechanical thing? I suspect whatever it does could be done in silicon, except it just might be easier biologically.

It is trivial to program a computer so that every time you press a key, it prints "Ouch - that hurt!"
Trivial to evolve it, too, without any need for consciousness.

A computer simply processes input and generates output (and a little heat) - and as that example illustrates, being conscious isn't about what output is generated.
It's not?

What does a conscious organism do other than process input and generate output? If you want to say "it thinks and feels," then explain how that is anything other than internal input/output.

~~ Paul
 
Whether or not Penrose's proof from Gödel's theorem is totally valid, I think it points the way to what is wrong with the idea of physical consciousness - that the rules of ordinary physical interactions just don't generate experience (qualia)

Why, and how?

Can this be explained in a kind of dumbed-down way? (I am very much not a mathematician.)
 
What does a conscious organism do other than process input and generate output? If you want to say "it thinks and feels," then explain how that is anything other than internal input/output.

~~ Paul
This is like trying to explain what vision is like to someone blind from birth - except that you have a sneaking suspicion that the person can also see perfectly well!

David
 
Why, and how?

Can this be explained in a kind of dumbed-down way? (I am very much not a mathematician.)

I am not a mathematician either - so anyone who wants, feel free to improve on this!

Gödel was interested in the question of what could be proved with a given set of axioms. An axiom is just something that you assume from the outset to be true. Mathematicians had believed that a small set of axioms was sufficient to create the whole of mathematics, just by using them to derive truths - which you could think of as extra, but redundant, axioms. Thus, you only need a tiny number of axioms for integer arithmetic to re-create an awful lot of number theory.

However, Gödel showed that once you had enough axioms to describe integer arithmetic, something strange happened - there were facts that could not be proved or disproved. Of course, you could just add the new fact to the set of axioms (or you could add the negation of that fact if you preferred). That way you had an augmented set of axioms that encompassed that dodgy fact F, but his theorem shows that there is another fact F' waiting to mess up the new axiom set. You can carry this process on indefinitely, with the original nice clean axiom set getting grubbier and grubbier as you add in ad-hoc extra stuff!

Penrose is interested in whether consciousness could arise purely from a physical process. He observes that a physical process is analogous to an axiom set, in that it can generate new facts (or theorems if you prefer) in a mechanical sort of way - there are just way more axioms than a mathematician would care to use. This means that any ordinary physical process - particularly a computer program - will be subject to the restrictions of Gödel's theorem. Mathematicians, however, seem to transcend Gödel's theorem, as if they don't work from a set of axioms but see through to the truth of things in some other way (inevitably this is disputed...). Penrose therefore concludes that mathematicians (and by extension all of us) are not conscious by virtue of an axiom set (again disputed...). His conclusion - at least in public - is that there must be some new physics which is 'non-computable', so it can't be described as a set of axioms.

What I find fascinating about all this is that if you look at an artificial intelligence program, it usually responds to simple textual inputs (say) with an appropriate response - so, for example, it may have been given the fact that birds fly, and someone types in:

Q: "Can a sparrow fly?"

C: "Yes!"

So the computer has the right answer. But then you try:

Q: "Can a penguin fly?"

Now the computer will get the wrong answer until it is given a list of birds that can't fly. This might seem reasonable enough, but then it goes on:

Q: "Can a dead bird fly?"

To answer this, the computer needs to have yet another 'axiom' that dead birds of any species don't fly! But even this soon breaks down.

Q: "Can a dead bird fly in an airplane?"

This seems awfully analogous to the cascade of extra axioms that I described above.
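
To make that concrete, here is a caricature of what such a rule-based program ends up looking like (not any real AI system, just my sketch of the cascade of special cases):

    # Caricature of a rule-based question answerer: each awkward question
    # forces yet another hand-written 'axiom' to be bolted on.
    def can_fly(bird, dead=False, in_airplane=False):
        flightless = {"penguin", "ostrich", "kiwi"}   # axiom 2: list of exceptions
        if in_airplane:
            return True        # axiom 4: anything can 'fly' as a passenger or cargo
        if dead:
            return False       # axiom 3: dead birds of any species don't fly
        if bird in flightless:
            return False       # axiom 2 applied
        return True            # axiom 1: birds fly

    print(can_fly("sparrow"))                                 # True
    print(can_fly("penguin"))                                 # False
    print(can_fly("sparrow", dead=True))                      # False
    print(can_fly("penguin", dead=True, in_airplane=True))    # True

And of course the very next question - "Can a bird with a broken wing fly?" - breaks it yet again.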

The problem is that the axioms never capture the essence of what real people think about - which somehow transcends a mere mechanical process.

David
 
Penrose is interested in whether consciousness could arise purely from a physical process. He observes that a physical process is analogous to an axiom set, in that it can generate new facts (or theorems if you prefer) in a mechanical sort of way - there are just way more axioms than a mathematician would care to use. This means that any ordinary physical process - particularly a computer program - will be subject to the restrictions of Gödel's theorem. Mathematicians, however, seem to transcend Gödel's theorem, as if they don't work from a set of axioms but see through to the truth of things in some other way (inevitably this is disputed...). Penrose therefore concludes that mathematicians (and by extension all of us) are not conscious by virtue of an axiom set (again disputed...). His conclusion - at least in public - is that there must be some new physics which is 'non-computable', so it can't be described as a set of axioms.

The problem here is that Gödel applies only to certain kinds of arithmetical systems and simply states that no algorithm can enumerate all the truths in that system. Once the domain of thought is outside that system, Gödel no longer applies. So your statement "as if they don't work from a set of axioms" should be "as if they don't work solely from certain kinds of arithmetical axiom sets." But we clearly don't, so Gödel doesn't apply in an obvious way.

~~ Paul
 
The problem here is that Gödel applies only to certain kinds of arithmetical systems and simply states that no algorithm can enumerate all the truths in that system. Once the domain of thought is outside that system, Gödel no longer applies.
Actually, I think the requirement is that the domain of thought should contain integer arithmetic - it can contain other axioms as well. Remember also that Penrose is a well-regarded theoretical physicist - not just anyone off the street (or on the internet) - and he defends his POV strongly, although I would concede that it simply isn't possible to get a really crisp proof about consciousness - not least because it has no definition!

David
 