Is the Brain analogous to a Digital Computer? [Discussion]

  • Thread starter Sciborg_S_Patel

Sciborg_S_Patel

The threads on this subject - particularly those referencing Searle & Lanier - were gobbled up.

So I figure this thread can centralize these discussions.

Jaron Lanier, computer scientist & artist, author of You Are Not a Gadget, argues against consciousness being reducible to computation.

=-=-=

Reposting a bunch of gobbled up stuff ->

Searle's Is the Brain a Digital Computer?

Also, a follow-up explaining why his rejection of AI as conscious entities didn't make him a property dualist.

John Searle - Consciousness and the Brain TED presentation

Watson Doesn't Know It Won on 'Jeopardy!'

Not sure if that last article will be behind a paywall or not. In any case it's largely a reiteration of Searle's Chinese Room argument.

=-=-=

Additionally, here's an essay where Lanier expounds on his argument against computational minds.

Also, a reply of sorts to Dennett -> You Can't Argue With a Zombie.

He also has an essay, Death: The Skeleton Key of Consciousness Studies?

Interesting guy in that he rejects computable minds on the one hand but rejects immortality of consciousness on the other.
 
Is The Human Mind Algorithmic?

...Then, if the mind is algorithmic, most workers would say it is something like the sketch above, with variations for asynchronous updating of the formal neurons, or stochastic noise in the behavior of the formal neurons so they sometimes do the "wrong" thing, given their Boolean function and the activities of their inputs.
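To make that picture concrete, here is a minimal sketch of such a network of formal neurons in Python - the wiring, Boolean functions, and noise rate are invented for illustration, not taken from Kauffman:

```python
import random

# A tiny Boolean network of "formal neurons": each neuron computes a Boolean
# function of its inputs, updates asynchronously (one at a time), and with
# small probability does the "wrong" thing (stochastic noise).
random.seed(0)
state = [0, 1, 1]                      # current activity of 3 formal neurons
inputs = [(1, 2), (0, 2), (0, 1)]      # which neurons feed each neuron
funcs = [lambda a, b: a and b,         # Boolean function per neuron
         lambda a, b: a or b,
         lambda a, b: a ^ b]
NOISE = 0.05                           # chance of flipping the "correct" output

for step in range(5):
    i = random.randrange(3)            # asynchronous: update one neuron at a time
    a, b = (state[j] for j in inputs[i])
    out = int(funcs[i](a, b))
    if random.random() < NOISE:
        out ^= 1                       # noisy: sometimes do the "wrong" thing
    state[i] = out
    print(step, state)
```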

This is not far from one dominant view. Will it work for the human mind? I think not. We should already be deeply suspicious given Wittgenstein's language games...

Another view of the non-algorithmic character of the human mind comes from trying to do it. For example, computer scientists have invented the idea of "affordances" for object-oriented programming. Here a computer object representing a real carburetor is characterized by a finite, definite set of affordances: "Is a", "Has a", "Does a", "Needs a". This move is wonderful and much has been done with it.
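For illustration, a minimal Python sketch of such formal affordances - the class and attribute names are invented, not from any actual OO library:

```python
# A "carburetor" object characterized by a finite, definite set of formal
# affordances. Class and attribute names are invented for illustration.
class Carburetor:
    is_a = "engine component"                  # "Is a"
    has_a = ["float chamber", "venturi"]       # "Has a"
    needs_a = ["fuel", "air"]                  # "Needs a"

    def does_a(self):                          # "Does a"
        return "mixes fuel and air"

c = Carburetor()
print(c.is_a, c.has_a, c.needs_a, c.does_a())
```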

But do formal affordances suffice? I am convinced that the answer is "No".
 
Forget the human mind being a standard Turing machine; modern computers aren't, either. They are designed to run continuously and concurrently, have finite storage, have access to true randomness, and interact with the outside world. You have to be very careful to define computable, algorithm, and Turing machine before embarking on this path.

~~ Paul
 
Forget the human mind being a standard Turing machine; modern computers aren't, either. They are designed to run continuously and concurrently, have finite storage, have access to true randomness, and interact with the outside world. You have to be very careful to define computable, algorithm, and Turing machine before embarking on this path.

~~ Paul

True, but Kauffman does go out of his way to define a few things.

Seems to me that either there's an algorithm - whether it's deterministic or probabilistic - or there isn't.

Though I think you might have a point about finite storage, strangely enough, if we accept the possibility that conscious awareness actually comes from information neglect rather than information retrieval. But AFAIK there's only one guy who isn't a philosophy or STEM PhD who proposed that.
 
You have to be very careful to define computable, algorithm, and Turing machine before embarking on this path.

Fortunately a Turing machine and algorithms are already well-defined in computer science and mathematics, so we can just use those.

Seems to me that either there's an algorithm - whether it's deterministic or probabilistic - or there isn't.

I don't think questioning whether there is an algorithm or not is fruitful. Some of the most amazing constructions in computer engineering have amazingly simple algorithms; subtractive sound synthesis comes to mind, where some kind of random noise is first generated and then cut away (filter theory) until it has a character. I think it's more fruitful to determine how much leniency exists in the algorithm for different outcomes.
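To make that concrete, a minimal Python sketch of subtractive synthesis - the filter and parameters are arbitrary choices, just to show the noise-then-cut-away shape of the algorithm:

```python
import numpy as np

# Subtractive synthesis in miniature: generate white noise, then "cut away"
# high frequencies with a one-pole low-pass filter until it has a character.
sample_rate = 44100
noise = np.random.uniform(-1.0, 1.0, sample_rate)  # 1 second of white noise

def one_pole_lowpass(signal, alpha=0.05):
    """Recursive low-pass: y[n] = y[n-1] + alpha * (x[n] - y[n-1])."""
    out = np.empty_like(signal)
    y = 0.0
    for n, x in enumerate(signal):
        y += alpha * (x - y)
        out[n] = y
    return out

shaped = one_pole_lowpass(noise)  # darker, "characterful" noise
print(shaped[:5])
```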

Keep in mind that if consciousness is provably irreducible, a smartass can always write it in math notation as C(...), which then technically counts as an algorithm.
 
Fortunately a Turing machine and algorithms are already well-defined in computer science and mathematics, so we can just use those.
Yes, they are well-defined, but those definitions aren't quite right for real computers or for brains. Turing's thesis was about functions and did not include, for example, interaction. There is no I/O to/from the tape.

http://link.springer.com/chapter/10.1007/11494645_20

I don't think questioning whether there is an algorithm or not is fruitful. Some of the most amazing constructions in computer engineering have amazingly simple algorithms; subtractive sound synthesis comes to mind, where some kind of random noise is first generated and then cut away (filter theory) until it has a character. I think it's more fruitful to determine how much leniency exists in the algorithm for different outcomes.
I'm not sure I understand.

Keep in mind that if consciousness is provably irreducible, a smartass can always write it in math notation as C(...), which then technically counts as an algorithm.
But who would he be kidding? If we find that humans have access to an oracle, we'll be all over it trying to figure out how it works. After all, Turing machines are explicitly oracle-free.

~~ Paul
 
The threads on this subject - particularly those referencing Searle & Lanier - were gobbled up.

So I figure this thread can centralize these discussions.

Jaron Lanier, computer scientist & artist, author of You Are Not a Gadget, argues against consciousness being reducible to computation.

In this article he describes the same T(hought)E(xperiment). Probably a link you provided earlier, but it got lost in the big gobble-up.
The rainstorm TE resembles the one David proposed in the "stuck on stupid" thread.
It is an incarnation of a classic TE, the new parts being the rainstorm and the "infinite computer store".
The question is whether these bring anything new to the TE; let us see.

He bases his TE on this:
So now your consciousness exists as a series of numbers in a computer; that is all a computer program is, after all.
Well, no: your consciousness does not exist as a series of numbers; the numbers can only recreate a conscious entity if they are implemented by a computer in the right context.
He proposes ever more exotic carriers of these numbers, then tries to create an infinite regress of raindrop number stores, but in the end he always needs something to work with these numbers in the correct way.

He tries to sell this TE as a reductio ad absurdum of the concept that conscious behaviour can be replicated on a computer.
The only thing that gets more absurd is the way he transports the information from the original entity to the replacement.

He forgets that it is not the rainstorm, nor the dimples in the gummi bears, that acts conscious; it is the simulant that behaves in a conscious way.
The simulant does not know, or care, about the medium that stores the numbers that describe the mini-universe he lives in. If used correctly, these numbers only describe the natural laws of the simulation.

I believe this type of TE is useful. If objections in principle to a simulated brain are to be found, that would help us a lot in thinking about consciousness, but I do not think Lanier succeeds.

I do think, however, that a few things narrow the scope of thinking if we assume a material mind.

That is for another post though; the Ronde van Vlaanderen classic cycling race is entering its final stage and needs my urgent attention.
 
Forget the human mind being a standard Turing machine; modern computers aren't, either. They are designed to run continuously and concurrently, have finite storage, have access to true randomness, and interact with the outside world. You have to be very careful to define computable, algorithm, and Turing machine before embarking on this path.

~~ Paul
Yes - I would forget Turing machines and think about real computers with finite limits - though often these limits are very large.

Concurrency is something of a red herring (is that a US expression?) in that a concurrent process can always be replaced with a serial process on a faster machine, possibly equipped with a random number generator.
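To make that concrete, a minimal Python sketch - two hypothetical "concurrent" tasks replaced by round-robin interleaving on a single serial machine:

```python
# Two "concurrent" tasks run serially: the same work, just interleaved
# on one machine. Task names and step counts are invented for illustration.
def task(name, steps):
    for i in range(steps):
        yield f"{name} step {i}"

def run_serially(tasks):
    """Round-robin: step each task in turn until all are finished."""
    queue = list(tasks)
    while queue:
        t = queue.pop(0)
        try:
            print(next(t))
            queue.append(t)   # not finished: back of the queue
        except StopIteration:
            pass              # finished: drop it

run_serially([task("A", 3), task("B", 3)])
```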

David
 
Bart,

Rather than discuss and criticise someone else's TE, why don't we design our own? That way if a TE has shortcomings, we can correct them right here.

I'd like to start with something like we discussed before - someone is scanned at the appropriate level, and either their brain, or their entire body is replaced by a numeric equivalent inside a computer.

As I explained before, supplying input creates something of a distraction IMHO. There are plenty of situations in which we simply think - we might do that in the dark in bed - contemplating something in a way which creates a sequence of emotional states.

I want us to consider carefully what it means to run such a program, and whether it is reasonable to expect anything to be conscious as a result.

I suggest we agree the TE first, then think about what it implies!

David
 
Concurrency is something of a red herring (is that a US expression?) in that a concurrent process can always be replaced with a serial process on a faster machine, possibly equipped with a random number generator.

I think there are limits to this (the square-cube law for infrastructure, Moore's law for the transistors needed to build the CPU), though. Because of IBM we currently think of computing as having this "central chip" that has to take on all of the load, but this has been shown to be an amazingly bad solution (see all the repeated work on getting GPUs to do the math work).

On an Amiga, you have a very low-power serial processor ("consciousness"?) and a host of chips designed to be the best at one specific purpose ("lobes"?). A GPU does graphics, a math chip does the math, a sound chip handles sample mixing, etc. All of this happens in parallel, each process using dedicated hardware, which allowed for extreme efficiency. "Modern computers" stupidly do most of the work on a CPU, occasionally trying to offload both math and graphics to a GPU, while the sound card most often doesn't exist or does almost nothing more than transduction.
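A minimal sketch of that division of labour in Python - the "chips" here are invented stand-ins for the dedicated hardware:

```python
# A toy "Amiga-style" machine: a small serial core that only decides and
# delegates, plus dedicated units that each do one thing well (in the real
# hardware these run in parallel). All names are invented for illustration.
coprocessors = {
    "graphics": lambda job: f"blitter draws {job}",
    "audio":    lambda job: f"sound chip mixes {job}",
    "math":     lambda job: f"math chip computes {job}",
}

def cpu(jobs):
    """The low-power serial core: route each job to its dedicated unit."""
    return [coprocessors[kind](payload) for kind, payload in jobs]

print(cpu([("graphics", "a sprite"),
           ("audio", "4 voices"),
           ("math", "a sine table")]))
```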

I think one of the closest ways to model a brain as a computer would be to have a tiny central CPU which is the "consciousness" - the binding factor, whatever you want to call the piece that actually makes the decisions. The rest of that computer would be a mixture of a quantum computer and FPGA units. You need the FPGA part because neuroplasticity dictates that the brain can essentially will changes to itself, and FPGAs are the closest computational equivalent of that. Quantum computing is maybe not strictly necessary, but a qubit sounds like a more desirable way to store weight values for neurons than classical bits.

Even given this incredibly powerful mixture of hardware for a synthetic brain, though, it would sit like a lemon. Without an external agent offering some kind of initial programming it would be completely incapable of doing anything, which brings us back to the bootstrapping/abiogenesis problem...
 
I'm 99.999 percent convinced that brains (and minds, brain generated or otherwise) can remote view. Can a digital computer?

Cheers,
Bill
 
Concurrency is something of a red herring (is that a US expression?) in that a concurrent process can always be replaced with a serial process on a faster machine, possibly equipped with a random number generator.
True, but it is still outside the standard definition of a Turing machine, which computes functions. That's why we have to be careful to specify exactly what we are talking about. And what happens when there are trillions of processors? Does that change in magnitude cause a change in kind?

It may be that all the differences I listed are red herring-ish. For example, perhaps interaction with the external world can be simulated by multiple Turing machines, some representing the external world, plus a new mechanism for I/O between tapes. But that new mechanism, again, is outside the standard definition.
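For illustration, a minimal Python sketch of that idea - two hypothetical machines, one standing in for the "mind" and one for the "external world", coupled by an explicit message channel (the new mechanism a standard, function-computing Turing machine lacks):

```python
from collections import deque

# Two machines coupled by explicit message channels. The channels are
# exactly the I/O mechanism outside the standard Turing-machine definition.
world_to_mind = deque([3, 1, 4])   # invented input from the "world"
mind_to_world = deque()

def mind_step():
    if world_to_mind:
        x = world_to_mind.popleft()   # read from the shared channel
        mind_to_world.append(x * 2)   # some internal computation, then write

def world_step():
    if mind_to_world:
        print("world observes:", mind_to_world.popleft())

for _ in range(3):   # interleave the two machines
    mind_step()
    world_step()
```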

~~ Paul
 
Bart,

Rather than discuss and criticise someone else's TE, why don't we design our own? That way if a TE has shortcomings, we can correct them right here.
Why not? Seems a good idea; in a separate thread or here does not matter to me.

I'd like to start with something like we discussed before - someone is scanned at the appropriate level, and either their brain, or their entire body is replaced by a numeric equivalent inside a computer.
Sure, seems the way to go.
As I explained before, supplying input creates something of a distraction IMHO. There are plenty of situations in which we simply think - we might do that in the dark in bed - contemplating something in a way which creates a sequence of emotional states.
That seems to be a point of slight disagreement. I see two problems, one minor and one essential.

Minor: the brain is never completely without sensory input. What's even more important, the way consciousness might function may mean that our conscious thoughts count on some level as input.

The more essential point is that we do not have a way to communicate with the simulated consciousness. How do you suggest we know whether the simulated entity is conscious if we cannot ask it questions?

Also something to consider: where do we cut off the simulation? Do we include the brainstem? The spine? The optic nerve bundles?

Since we give ourselves the almost limitless calculating power of a TE, why not simulate a small environment? Why leave out half of the function of the brain?

Maybe a simple scenario could solve the problem. What if we assume a subject who is terminally ill?
She knows that her brain states are going to be uploaded to a computer.
We give her a virtual body, a small virtual-reality environment to operate in, and an interface to enable two-way communication between her and the real world.

I want us to consider carefully what it means to run such a program, and whether it is reasonable to expect anything to be conscious as a result.

I suggest we agree the TE first, then think about what it implies!

David

I agree that we have to be careful; that is why I think it is important not to leave out things that are essential to how our real minds work.
 
We can't answer this question until we know how remote viewing works, if it does.

~~ Paul
There is good evidence for remote viewing (hey, Wiseman even used the P word), but you're right, we don't know how it functions. I guess I find this analogy not very useful because we know 100 percent how computers work, but very little about the brain as it relates to the interesting areas of consciousness. I'm guessing that when we do figure out how remote viewing works, it will be one of those things brains can do that computers can't, like generating new cells.

Cheers,
Bill
 
Why not? Seems a good idea; in a separate thread or here does not matter to me.


Sure, seems the way to go.

That seems to be a point of slight disagreement. I see two problems, one minor and one essential.

Minor: the brain is never completely without sensory input. What's even more important, the way consciousness might function may mean that our conscious thoughts count on some level as input.

The more essential point is that we do not have a way to communicate with the simulated consciousness. How do you suggest we know whether the simulated entity is conscious if we cannot ask it questions?

Also something to consider: where do we cut off the simulation? Do we include the brainstem? The spine? The optic nerve bundles?
Let's take the whole body - so she had better not be terminally ill - otherwise her simulation might just utter "Ugh!"

Since we give ourselves the almost limitless calculating power of a TE, why not simulate a small environment? Why leave out half of the function of the brain?

Maybe a simple scenario could solve the problem. What if we assume a subject who is terminally ill?
She knows that her brain states are going to be uploaded to a computer.
We give her a virtual body, a small virtual-reality environment to operate in, and an interface to enable two-way communication between her and the real world.

OK - let's provisionally go with that. So there will be input, I, but it can perfectly well be null.

I agree that we have to be careful, that is why i think it is important to not leave out things that are essential to how our real minds work.

So at this point the 'theorem' can be

P + null -> O

Where O is some output - which could be audio, since we have her whole body digitised - that describes her mental state.

At this point I claim there is a paradox, because you want the 'simulant' to experience the emotions each time the program P is run. The real problem is that the theorem that O will be the output of P already existed (in an abstract sense) before P was ever run. The fact that O will be the output is analogous to Pythagoras' theorem - so if running P generates O, how do we associate the emotions with the execution rather than with the theorem? If a super-clever person understands the theorem, would that count as an execution of the program, and so generate these emotions?

Put a slightly different way: if executing P actually generates emotions (not just output), there has to be a way to determine exactly how many times it executes. But considerations like the theorem's prior existence make this very unclear.

Note that the theorem actually existed before the person was scanned, or even born!
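To illustrate, a minimal Python sketch with a stand-in deterministic program P and null input - executing it merely reveals an output that was already fixed as a mathematical fact:

```python
# A stand-in, deterministic "program P" with null input. The theorem
# "P() returns 52" held before this file was written, let alone run;
# the update rule below is arbitrary, invented for illustration.
def P():
    state = 1
    for _ in range(10):
        state = (state * 31 + 7) % 97   # arbitrary deterministic update
    return state

O = P()   # execution merely reveals the pre-existing value of the "theorem"
print(O)  # 52
```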

David
 
I think there are limits to this (the square-cube law for infrastructure, Moore's law for the transistors needed to build the CPU), though. Because of IBM we currently think of computing as having this "central chip" that has to take on all of the load, but this has been shown to be an amazingly bad solution (see all the repeated work on getting GPUs to do the math work).

On an Amiga, you have a very low-power serial processor ("consciousness"?) and a host of chips designed to be the best at one specific purpose ("lobes"?). A GPU does graphics, a math chip does the math, a sound chip handles sample mixing, etc. All of this happens in parallel, each process using dedicated hardware, which allowed for extreme efficiency. "Modern computers" stupidly do most of the work on a CPU, occasionally trying to offload both math and graphics to a GPU, while the sound card most often doesn't exist or does almost nothing more than transduction...

Goodness - this is a thought experiment (TE), not a proposal to do it for real. Besides, we would need ethical approval to do it for real :)

Remember that if P generates emotions in the 'simulant', it will presumably still do so in a slowed-down version - so the calculation could take 1 billion years if you want, or be implemented in some fancy hardware based on quark-gluon interactions!

David
 
I'm 99.999 percent convinced that brains (and minds, brain generated or otherwise) can remote view. Can a digital computer?

Cheers,
Bill
I think this enterprise is designed to show that:

a) A computer simulation of a brain can't be equivalent to the real brain in operation (shown via reductio ad absurdum).
b) Therefore argue that the mind can't be purely physical.

Once we establish that the mind can't be purely physical, all ψ phenomena - including remote viewing - become possibilities, but not in the simulation!

David
 
I think this enterprise is designed to show that:

a) A computer simulation of a brain can't be equivalent to the real brain in operation (shown via reductio ad absurdum).
b) Therefore argue that the mind can't be purely physical.

Once we establish that the mind can't be purely physical, all ψ phenomena - including remote viewing - become possibilities, but not in the simulation!

David
David, thanks for explaining it in that way - that helps. But I think I'm still not grasping the usefulness, because the first two things that come to my mind are:

a) a simulation of any non-trivial, complex construct will never be totally equivalent to the real thing in at least some important ways.
b) I'm a little fuzzy on the definition of "physical", especially when we consider quantum physics. I mean, people like Penrose and Ed May have ideas and theories that they believe may explain remote viewing, but those would still be considered "physical".

Cheers,
Bill
 
There is good evidence for remote viewing (hey, Wiseman even used the P word), but you're right, we don't know how it functions. I guess I find this analogy not very useful because we know 100 percent how computers work, but very little about the brain as it relates to the interesting areas of consciousness. I'm guessing that when we do figure out how remote viewing works, it will be one of those things brains can do that computers can't, like generating new cells.
There is at least one thing computers generate that we don't understand: trained neural networks.
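For illustration, a minimal Python sketch (the architecture, learning rate, and seed are arbitrary choices): a tiny network trained on XOR ends up with weights that solve the task, yet nothing in the numbers explains how:

```python
import numpy as np

# Train a tiny 2-4-1 sigmoid network on XOR by plain backpropagation.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):
    h = sigmoid(X @ W1 + b1)                  # hidden activations
    out = sigmoid(h @ W2 + b2)                # network output
    d_out = (out - y) * out * (1 - out)       # gradient at output
    d_h = (d_out @ W2.T) * h * (1 - h)        # gradient at hidden layer
    W2 -= 0.5 * (h.T @ d_out)
    b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * (X.T @ d_h)
    b1 -= 0.5 * d_h.sum(axis=0)

# With luck the output is close to [[0], [1], [1], [0]] (convergence depends
# on the seed) - but inspecting W1 and W2 tells you almost nothing about how.
print(out.round(2))
```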

~~ Paul
 