Is the Brain analogous to a Digital Computer? [Discussion]

Can someone explain, in a concise way, how that works? Preferably with not too much math.
Here is one simplified description, from http://www.scottaaronson.com/democritus/lec10.5.html:

"Gödel's First Incompleteness Theorem tells us that no computer, working within a fixed formal system F such as Zermelo-Fraenkel set theory, can prove the sentence

  • G(F) = "This sentence cannot be proved in F."
But we humans can just "see" the truth of G(F) -- since if G(F) were false, then it would be provable, which is absurd! Therefore the human mind can do something that no present-day computer can do. Therefore consciousness can't be reducible to computation."

It's tough to get much more detailed without diving into the whole morass.
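If it helps, here is the bare skeleton of that reasoning as a worked sketch (LaTeX notation; it assumes F is consistent and only proves true arithmetical statements, which is exactly the assumption the argument leans on):

\begin{align*}
&G(F) \;\equiv\; \text{``}G(F)\text{ is not provable in }F\text{''}\\
&\text{If } F \vdash G(F), \text{ then } G(F) \text{ is false, so } F \text{ proves a falsehood}\\
&\text{Assuming } F \text{ proves only truths:}\quad F \nvdash G(F)\\
&\text{But } F \nvdash G(F) \text{ is exactly what } G(F) \text{ asserts, so } G(F) \text{ is true yet unprovable in } F.
\end{align*}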

~~ Paul
 
My argument is not to try to prove the simulant is conscious by experimentation - because after all, this is a TE (thought experiment) - so it isn't possible to get the results!
I agree, but I also think the spirit of the TE is to assume it is possible and see whether we encounter faults in the logic.
I am arguing that if the simulant is indeed conscious as the program runs, some very strange things follow.
I agree again, why not discuss these strange things?
So by reductio ad absurdum, I think this is a strong argument that the simulant is not consciously aware.
Here you lose me. What is the strong argument? You made your argument about output statements, and I explained why I think it is based on a wrong premise.
I will restate the question I asked about that.

The output O would be a set of numbers; they represent a static state of the simulated person, and that state does no work at all.
The audio spoken by the simulant cannot be derived from the output O.
To know the words the simulant spoke, we need to record them through the interface while the program runs.

So, to know whether our simulant behaves consciously, we need to go through every step from state P to state O.
Do you agree with that, and if not, why not?
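To make the distinction concrete, here is a toy Python sketch (every name in it, such as step and render_audio, is made up for illustration, not part of the thought experiment itself): the final state O is just numbers, while the audio only exists as a stream captured from the interface during the run.

Code:
# Toy sketch: a deterministic "simulant" stepped from state P to state O.
# All names here (step, render_audio, P, O) are hypothetical, for illustration only.

def step(state):
    """Advance the simulated state by one tick (a stand-in for the real physics)."""
    return [(x * 31 + 7) % 997 for x in state]   # arbitrary deterministic update

def render_audio(state):
    """What the interface would 'hear' at this tick (again, a stand-in)."""
    return state[0] % 256

P = [3, 1, 4, 1, 5]      # initial state
recorded_audio = []      # exists only because we record DURING the run

state = P
for tick in range(1000):
    state = step(state)
    recorded_audio.append(render_audio(state))   # captured through the interface

O = state   # final static state: just a set of numbers
# O by itself does not contain recorded_audio; the spoken words are only
# recoverable because we logged them while stepping from P to O.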

OK - well if you bear the above in mind, the rest makes sense. You are thinking of a Turing test, but this is not like that.

David

How did you come to that conclusion?
 
I don't think it stands up to scrutiny.

Fair enough.

Gödel's theorem is about certain kinds of arithmetic. And there is no reason to believe that humans are consistent or complete.

Gödel applies to formal systems, which can then be extended to algorithms, which are the foundations of Turing Machines. Right? That's Penrose's point at least (pretty sure). It's not about whether we're complete and consistent as human beings, it's about our ability to ascertain mathematical 'truth' without needing to resort to a formal systemic/algorithmic process; and it's about whether a machine can ascertain mathematical 'truth' in the same manner and know when to terminate/stop in response to such an input.

There is hardly an argument ever made in science that is ironclad, so I accept there could be holes in such an approach. But I also view Penrose as someone who doesn't get too married to his own ideas to save face (just look at the way he often downplays the promise of Twistor theory, and even admits he could be wrong throughout his books). He's gone out of his way over the years, unlike many others, to address almost every conceivable criticism that seems relevant. If nothing else I respect him for that. Perhaps there are other Achilles' heel type arguments I'm not aware of however. Have you come across this site before?

http://www.calculemus.org/MathUniversalis/NS/10/01penrose.html

Bart V:

OK, like you already know, Penrose's argument is that consciousness is a non-computable phenomenon that is described by the OR process (objective reduction of the wave function).

Gödel's theorem states that a formal system strong enough to express arithmetic cannot be both "complete" and "consistent". In other words, there are certain questions mathematics simply cannot settle in a purely formal way. Penrose has identified what he believes to be a number of "non-computable" problems in mathematics, such as 'tiling the plane', etc. There are also hints of certain geometrical mappings of space-time that are non-algorithmic. Gödel's theorem applies to formal systems, and algorithms can be treated as formal systems, so algorithms are subject to GT. This means any AI that is a Turing Machine at base level (i.e. runs on algorithms, and terminates/stops based on what those algorithms can derive) would run into statements it cannot decide, i.e. cannot determine whether to terminate on. Humans can recognize that certain 'mathematical truths' have no algorithmic solution, agree on it, and move on. This is supposedly what distinguishes our conscious reasoning process from machines.
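Since this summary leans on Turing machines and on a program "knowing when to terminate", here is the classic diagonalization behind the halting problem, sketched in Python. The function would_halt is hypothetical; the whole point of the sketch is that no such function can exist, and this undecidability result is the computational cousin of Gödel's theorem that Penrose's argument trades on.

Code:
# Classic halting-problem diagonalization, sketched in Python.
# 'would_halt' is a hypothetical decider; 'troublemaker' shows that any
# candidate implementation must be wrong about some input.

def would_halt(program, argument):
    """Hypothetical: return True iff program(argument) would eventually halt."""
    raise NotImplementedError("no algorithm can implement this in general")

def troublemaker(program):
    # Do the opposite of whatever would_halt predicts about the
    # program applied to its own source.
    if would_halt(program, program):
        while True:      # loop forever if a halt was predicted
            pass
    return "halted"      # halt if non-halting was predicted

# Feeding troublemaker to itself defeats any claimed would_halt:
# if it answers "halts", troublemaker loops; if it answers "loops", it halts.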

There's more but that's the long and short of it, even though I'm undoubtedly forgetting or misrepresenting something.

Regards,
John
 
1) Bart seems to think that the simulation would create emotion, though he makes the distinction that it is the simulant that experiences the emotion. I think you are probably roughly of the same opinion as Bart.

I would argue that there are emotions based in the mind and emotions guided by chemicals. A platonic interest in a person or topic will likely be mind-based, so there is no reason that would cease to exist in a Singularity-type environment. It could be a conscious (or unconscious) logic that this person does a thing you like, or they remind you of some other memory you like, which doesn't necessarily require hormones. Hormonal emotions (like a sudden paralyzing panic) probably would not be present, because those chemicals wouldn't exist.

Interesting to think about, but it comes back to where you draw the line on what "emotion" actually means when starting the argument.

3) The theoretical physicist, Roger Penrose, takes the view that consciousness must depend on some non-computable physics, which by definition could not be simulated on a computer!

Doesn't this go into the domain of the Turing test?

I'm not sure people are getting my central point. A computer program that generates output without input (or with some fixed input) simply reveals a pre-existing truth (or theorem if you prefer). For that reason, it seems to me to be particularly strange to associate emotions with this process, because the whole thing has always existed - like Pythagoras' theorem.

If a mind is stored as weights of neurons, then the "mind-based" emotions would be part of the active neuron state. Whether receptors are clogged, active, or receiving chemicals that distort those values in a specific way ("body-based" emotions) would be a separate part of the design. So it's reasonable to assume an AI could have something like emotions.
 
"Gödel's First Incompleteness Theorem tells us that no computer, working within a fixed formal system F such as Zermelo-Fraenkel set theory, can prove the sentence

  • G(F) = "This sentence cannot be proved in F."
But we humans can just "see" the truth of G(F) -- since if G(F) were false, then it would be provable, which is absurd! Therefore the human mind can do something that no present-day computer can do. Therefore consciousness can't be reducible to computation."

I wondered where the "This next sentence is true. The previous sentence is false." joke came from.

Now it makes a lot more sense.
 
Here is one simplified description, from http://www.scottaaronson.com/democritus/lec10.5.html:

"Gödel's First Incompleteness Theorem tells us that no computer, working within a fixed formal system F such as Zermelo-Fraenkel set theory, can prove the sentence

  • G(F) = "This sentence cannot be proved in F."
But we humans can just "see" the truth of G(F) -- since if G(F) were false, then it would be provable, which is absurd! Therefore the human mind can do something that no present-day computer can do. Therefore consciousness can't be reducible to computation."

It's tough to get much more detailed without diving into the whole morass.

~~ Paul
Thanks, much more readable than the walls of text Sciborg provided

From the link you provided:
Perhaps Turing himself said it best: "If we want a machine to be intelligent, it can't also be infallible. There are theorems that say almost exactly that."
Sums it up nicely, I think.
 
Thanks, much more readable than the walls of text Sciborg provided

My point was that you shouldn't trust any summarization of such a complex concept. As Bertrand Russell once said (though I think the use of "stupid" is unnecessary):

“A stupid man's report of what a clever man says can never be accurate, because he unconsciously translates what he hears into something he can understand.”


And while many of Penrose's critics are clearly brilliant, this translation problem seems to have occurred in a few places.

At the least, without reading Lucas's and Penrose's original arguments one should simply remain agnostic about the whole thing.
 
Gödel applies to formal systems, which can then be extended to algorithms, which are the foundations of Turing Machines. Right?
It applies to arithmetic axiom systems of certain kinds. I don't believe it applies to any sort of algorithm. ... Well, actually, it does.

It's not about whether we're complete and consistent as human beings, it's about our ability to ascertain mathematical 'truth' without needing to resort to a formal systemic/algorithmic process; and it's about whether a machine can ascertain mathematical 'truth' in the same manner and know when to terminate/stop in response to such an input.
If we assume a consistent system of axioms, then Gödel says the system won't be complete. For this to be interesting with respect to humans, we have to assume that all our thinking is arithmetical and consistent. Then we can infer that we are not complete.

Who claims that all our thinking is arithmetical, that we are confined to that system, that we are consistent, or that we are complete?
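Put as a bare conditional, with the disputed premises labelled (a sketch of the schema only, not anyone's finished argument):

\[
\underbrace{\text{Mind} \simeq F}_{\text{assumption}}
\;\wedge\;
\underbrace{\mathrm{Con}(F)}_{\text{assumption}}
\;\wedge\;
F \supseteq \text{arithmetic}
\;\Longrightarrow\;
\exists\, G(F):\; F \nvdash G(F) \;\wedge\; F \nvdash \neg G(F)
\]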

Highly recommended:

http://www.amazon.com/Gödels-Theorem-Incomplete-Guide-Abuse/dp/1568812388

~~ Paul
 
My point was that you shouldn't trust any summarization of such a complex concept.
Agreed. But I'd go further and say you shouldn't even trust many formalizations of the concept.

At the least, without reading Lucas's and Penrose's original arguments one should simply remain agnostic about the whole thing.
The argument changed between The Emperor's New Mind and Shadows of the Mind.

Here's a bit more technical discussion, but not too hairy:

http://www.iep.utm.edu/lp-argue/#H3

~~ Paul
 
I agree, but I also think the spirit of the TE is to assume it is possible and see whether we encounter faults in the logic.
Well the problem is that we don't really want this to degenerate into all the arguments about Turing tests. We want to decide if materialism implies that moving the logic from wetware to computer should affect consciousness.
I agree again, why not discuss these strange things?

Here you lose me. What is the strong argument? You made your argument about output statements, and I explained why I think it is based on a wrong premise.
I will restate the question I asked about that.
Well I am repeating myself from before, but the problem is that each time the program runs, it presumably causes the simulant to experience emotions again.

The big problem with that is that the concept of executing the program isn't really very well defined - I mean, the program takes no input, so the compiler can pre-compile as much or as little of it as it likes - even to the point where all the program does is output the results.

Furthermore, the program simply runs to verify a pre-existing fact: P=>O - so does the program even need to run at all?
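To put that concretely, here is a toy Python sketch (run_simulation and the states in it are made-up names, not anything from the actual TE): with no input, the whole run is a constant, so a compiler or a one-off offline run could replace execution with a lookup of the result.

Code:
# Toy sketch: a zero-input program is a constant function, so "running" it
# and looking up its cached answer are indistinguishable from outside.
# All names here are hypothetical.

def run_simulation(initial_state, ticks):
    state = initial_state
    for _ in range(ticks):
        state = [(x * 31 + 7) % 997 for x in state]   # same deterministic step every run
    return state

P = [3, 1, 4, 1, 5]
O = run_simulation(P, 1000)       # the "live" run

O_PRECOMPUTED = O                 # computed once, cached forever

def run_simulation_folded():
    return O_PRECOMPUTED          # "executes" nothing, yields the same O

assert run_simulation(P, 1000) == run_simulation_folded()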

Also, computer simulations are not normally supposed to do anything extra. A simulation of an oil well isn't supposed to gush oil, etc. So why precisely should this simulation actually cause emotion to be felt?
The output O would be a set of numbers; they represent a static state of the simulated person, and that state does no work at all.
The audio spoken by the simulant cannot be derived from the output O.
To know the words the simulant spoke, we need to record them through the interface while the program runs.

So, to know whether our simulant behaves consciously, we need to go through every step from state P to state O.
Do you agree with that, and if not, why not?



How did you come to that conclusion?

I simply want to follow through the materialist concept of the brain. That seems to say that any process logically equivalent to what goes on in the brain would also be conscious. So if I assume materialism, I think it is reasonable to assume the simulant is conscious (if not, why not). So I don't need to test this, but rather to explore what it implies.

David
 
Also, computer simulations are not normally supposed to do anything extra. A simulation of an oil well isn't supposed to gush oil, etc. So why precisely should this simulation actually cause emotion to be felt?
Interesting question, but I think it's much more complex than this. For example, even a real oil well won't gush oil if there isn't any oil or if certain components of the rig are missing.

So does consciousness require a specific physical setup (e.g., a human brain), or does it simply require certain kinds of computation?

~~ Paul
 
When's the first time we'll ever see anything like a simulated brain?

I've heard 2036.
 
When's the first time we'll ever see anything like a simulated brain?

I've heard 2036.
My bet is never. People have a tendency to forget the history of all this. The subject of Artificial Intelligence (AI) was all the rage in the 1980s, and it was thought computers were finally powerful enough to make this possible. The remarkable thing was that the project didn't fail for lack of computer power, but because nobody had any success creating an AI. IMHO, dates like 2036 or 2023 are totally meaningless, because there are too many unknowns.

David
 
[quote="Paul C. Anagnostopoulos, post: 15404, member: 32"

So does consciousness require a specific physical setup (e.g., a human brain), or does it simply require certain kinds of computation?

~~ Paul[/quote]

What sort of physical setup are you thinking of? Something that could communicate with non-material entities perhaps?

Do you really believe that a specific kind of computation would give rise to consciousness? For one thing, all programs end up executing a very limited number of instructions, and these ultimately end up shuffling bits!

David
 
My bet is never. People have a tendency to forget the history of all this. The subject of Artificial Intelligence (AI) was all the rage in the 1980s, and it was thought computers were finally powerful enough to make this possible. The remarkable thing was that the project didn't fail for lack of computer power, but because nobody had any success creating an AI. IMHO, dates like 2036 or 2023 are totally meaningless, because there are too many unknowns.
You may be right, but Blue Brain is a simulation of a brain, not an AI. Past AI projects were not brain simulations.

~~ Paul
 
What sort of physical setup are you thinking of? Something that could communicate with non-material entities perhaps?
I suppose that's one possibility, although I still don't understand how a material object interfaces to an immaterial object. Other possible requirements are true random numbers or computation with reals.

Do you really believe that a specific kind of computation would give rise to consciousness? For one thing, all programs end up executing a very limited number of instructions, and these ultimately end up shuffling bits!
I've heard no compelling reason why consciousness can't be some kind of computational process. Perhaps "how it feels" is related to the means by which the computations are performed, so that a computer's consciousness would "feel different" to it than human consciousness does to us. We're being quite parochial when we assume that every consciousness is exactly like human consciousness.

If you believe that there is some sort of oracle to which we have access, it would be interesting to hear a description of that oracle, even a vague one. Otherwise you have nothing but promissory oracle-ism. :) How does that oracle escape the Church-Turing Thesis?

~~ Paul
 
My bet is never. People have a tendency to forget the history of all this. The subject of Artificial Intelligence (AI) was all the rage in the 1980s, and it was thought computers were finally powerful enough to make this possible. The remarkable thing was that the project didn't fail for lack of computer power, but because nobody had any success creating an AI. IMHO, dates like 2036 or 2023 are totally meaningless, because there are too many unknowns.

David

I think we'll have a simulation of the brain. Whether this thing can do everything humans can do is another question.

Of course, even if it fails, researchers can always say conscious AI is perpetually 20 years away, right?

That said, David Deutsch believes there are reasons the whole enterprise hasn't yielded results yet - it's more about bad conceptions of the mind than the impossibility of the task.

These phenomena have nothing to do with AGIs. The battle between good and evil ideas is as old as our species and will continue regardless of the hardware on which it is running. The issue is: we want the intelligences with (morally) good ideas always to defeat the evil intelligences, biological and artificial; but we are fallible, and our own conception of ‘good’ needs continual improvement. How should society be organised so as to promote that improvement? ‘Enslave all intelligence’ would be a catastrophically wrong answer, and ‘enslave all intelligence that doesn’t look like us’ would not be much better.

Intuitively I think this is a bunch of nonsense and AGIs aren't going to be conscious entities, but I accept I could be wrong about this.
 
Well the problem is that we don't really want this to degenerate into all the arguments about Turing tests. We want to decide if materialism implies that moving the logic from wetware to computer should affect consciousness.
I get that, but that is not what I am trying to do. I simply do not think that a static state can represent consciousness if consciousness is a dynamic process.
The brain is actually never completely inactive; the closest we come to your static brain state is maybe anesthesia.

Well I am repeating myself from before, but the problem is that each time the program runs, it presumably causes the simulant to experience emotions again.
That is the logical conclusion, but I do not see what the problem is with that.

The big problem with that is that the concept of executing the program isn't really very well defined - I mean, the program takes no input, so the compiler can pre-compile as much or as little of it as it likes - even to the point where all the program does is output the results.

Furthermore, the program simply runs to verify a pre-existing fact: P=>O - so does the program even need to run at all?
I do not agree with that at all.

However, the logical conclusion stays the same whether I agree or not. We may have that discussion later.

We now need to assume that no matter how we get to O, at point O the subject will have the memory of having experienced everything that happened from point P to point O. If we assume a physical mind, we also have to assume she has the feeling of time having gone by.

Yet at no point in that timespan, looking at any state at any time, are we going to detect consciousness. That means consciousness is not immediate.
We could say that consciousness only exists as memory.

My speculation would be that we put everything into some sort of narrative. Not that we tell this narrative to our 'self', but more as if the narrative is the self.

Also, computer simulations are not normally supposed to do anything extra. A simulation of an oil well isn't supposed to gush oil, etc. So why precisely should this simulation actually cause emotion to be felt?
I thought the idea of this TE was to see if we could find a good reason why the simulant would not feel emotion.

I simply want to follow through the materialist concept of the brain. That seems to say that any process logically equivalent to what goes on in the brain would also be conscious.
Exactly.
So if I assume materialism, I think it is reasonable to assume the simulant is conscious (if not, why not).
Nicely put.
So I don't need to test this, but to explore what this implies.

David
I am not proposing any test; I am claiming that at the arbitrarily chosen static state O no consciousness is present. If we ran the simulation past the point O three days later, the subject would not experience any interruption in time.
 
Agreed. But I'd go further and say you shouldn't even trust many formalizations of the concept.

The argument changed between The Emperor's New Mind and Shadows of the Mind.

Here's a bit more technical discussion, but not too hairy:

http://www.iep.utm.edu/lp-argue/#H3

~~ Paul

I suspect that people just choose whichever viewpoint accords with their prior assumptions, but it seems that even experts should stay neutral on the topic.

Ideally, once I get through Lucas's Reason and Reality, I'll be in a better place to judge the landscape of Gödelian arguments.
 