Is the Brain analogous to a Digital Computer? [Discussion]

  • Thread starter: Sciborg_S_Patel
I'm 99.999 percent convinced that brains (and minds, brain generated or otherwise) can remote view. Can a digital computer?

Cheers,
Bill

I think this enterprise is designed to show that:

a) Show, by reductio ad absurdum, that a computer simulation of a brain can't be equivalent to the real brain in operation.
b) Argue that the mind therefore can't be purely physical.

Once we establish that the mind can't be purely physical, all ψ phenomena - including remote viewing - become possibilities, but not in the simulation!

David

Can't computers "remote view" via Wifi? Or do we discount that because we know the mechanism? Wifi would look "supernatural" to someone from 200 years ago...
 
Can't computers "remote view" via Wifi? Or do we discount that because we know the mechanism? Wifi would look "supernatural" to someone from 200 years ago...
I'm not saying remote viewing is supernatural (in fact, I think it is "natural"). I think we're on the same page. I'm just saying that we designed computers, networks, and the communication protocols, so we won't learn anything about remote viewing and its unknown mechanism by making analogies to computers.

Cheers,
Bill
 
The machine generated it, running a program written by a human. But then you cannot look at the neural network and understand how it works.

~~ Paul
Right, but we knew enough about the way central nervous systems work to write artificial neural network programs that simulate them.

Cheers,
Bill
 
Right, but we knew enough about the way central nervous systems work to write artificial neural network programs that simulate them.
Yes, but nonetheless we don't know 100% how computers work. Or, I should say, we don't know 100% how software works.

Also, if true random numbers are involved, then computers can produce results whose history cannot be traced.

~~ Paul
 
Yes, but nonetheless we don't know 100% how computers work. Or, I should say, we don't know 100% how software works.

Also, if true random numbers are involved, then computers can produce results whose history cannot be traced.

~~ Paul
You're just saying the output is not deterministic under those conditions, and we know when that is the case. I'd not categorize that as "not knowing how they work," but I understand your point.

Cheers,
Bill
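As an aside, the distinction Bill and Paul are circling - replayable pseudo-randomness versus untraceable entropy - can be sketched in a few lines of Python. This is illustrative only (the variable names are arbitrary), using just the standard library:

```python
import random
import secrets

# Two generators with the same seed produce identical histories:
rng1, rng2 = random.Random(42), random.Random(42)
run1 = [rng1.random() for _ in range(5)]
run2 = [rng2.random() for _ in range(5)]
assert run1 == run2  # deterministic: the output's history is traceable

# secrets draws from the OS entropy pool; there is no seed to replay,
# so a result like this cannot be re-derived after the fact:
token = secrets.token_hex(8)
print(token)  # 16 hex characters, not reproducible from any seed
```

In Paul's terms, only the second kind of output has an untraceable history; in Bill's terms, we still know exactly *when* and *why* that is the case.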
 
You're just saying the output is not deterministic under those conditions, and we know when that is the case. I'd not categorize that as "not knowing how they work," but I understand your point.

Cheers,
Bill

Yeah, I'd always assumed the black box of neural network results was more about complexity than a genuine mystery.

I don't claim expertise though, and am happy to be corrected on this.
 
Yeah, I'd always assumed the black box of neural network results was more about complexity than a genuine mystery.

I don't claim expertise though, and am happy to be corrected on this.

It's a complexity issue: how do you label the artificial neurons, and how do you show them? The game "Democracy 3" is effectively based around artificial neurons with some window dressing, but a toy AI is going to have 20-100 of the things, and an AI trying to perform complex tasks using artificial neurons alone will have a lot more. If you poke around with the dev kits for the "Creatures" series of games, you can see that even a simple model of a neural net-based brain and genetic-based lifeform is a nightmare to navigate.
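The labelling problem is easy to see even at toy scale. Here is a hypothetical four-neuron network (nothing to do with the games mentioned above): the learned parameters are just unlabeled numbers, and nothing in them says what any neuron "means":

```python
import random

random.seed(0)
# 3 inputs -> 4 hidden neurons -> 1 output, with arbitrary random weights
w_hidden = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(4)]
w_out = [random.uniform(-1, 1) for _ in range(4)]

def forward(x):
    # ReLU activation on each hidden neuron, then a weighted sum
    hidden = [max(0.0, sum(w * xi for w, xi in zip(row, x))) for row in w_hidden]
    return sum(w * h for w, h in zip(w_out, hidden))

print(forward([1.0, 0.5, -0.2]))
# Inspecting w_hidden and w_out tells you nothing about what the network
# "does"; with thousands of neurons instead of four, tracing a result by
# hand becomes impractical - the "black box" is complexity, not magic.
```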
 
It's a complexity issue: how do you label the artificial neurons, and how do you show them? The game "Democracy 3" is effectively based around artificial neurons with some window dressing, but a toy AI is going to have 20-100 of the things, and an AI trying to perform complex tasks using artificial neurons alone will have a lot more. If you poke around with the dev kits for the "Creatures" series of games, you can see that even a simple model of a neural net-based brain and genetic-based lifeform is a nightmare to navigate.
The point is, Gedanken (thought) experiments (TEs) have their place in science - think of the diagrams in books on Special Relativity of people flashing messages to each other from trains or rocket ships.

Of course, a Gedanken experiment may assume something essential that makes it invalid, but if you want to argue that way you absolutely need to be more specific. I certainly think it is pointless to discuss such things as GPUs. Imagine the same discussion 20-odd years ago. It might have raised the issue of the Weitek coprocessor, which was all the rage for a short while but is not often mentioned nowadays :)

There seems to be quite a spread of opinion on the question of brain simulations:

1) Bart seems to think that the simulation would create emotion, though he makes the distinction that it is the simulant that experiences the emotion. I think you are probably roughly of the same opinion as Bart.

2) I remember from past encounters that Paul is less sure. Of course, in that case the question is what exactly is missing in a brain simulation on a computer.

3) The theoretical physicist Roger Penrose takes the view that consciousness must depend on some non-computable physics, which by definition could not be simulated on a computer!

4) My view is that if the operation of the brain is analogous to a transceiver, then there is absolutely no reason to expect that a mere simulation of the brain would communicate with the mind - just as a simulation of a radio would only communicate with simulated radio signals, if explicitly supplied.

I'm not sure people are getting my central point. A computer program that generates output without input (or with some fixed input) simply reveals a pre-existing truth (or theorem if you prefer). For that reason, it seems to me to be particularly strange to associate emotions with this process, because the whole thing has always existed - like Pythagoras' theorem.

Keeping the input fixed or non-existent is, of course, simply a way to reveal the problem with assuming that a computer simulation of a brain (or an AI program) can be conscious - rather as Roger Penrose tries to show that human mathematicians can't be thinking algorithmically, but then extends that idea to all human thought, because there is nothing fundamentally special about mathematicians.

David
 
Let's take the whole body - so she had better not be terminally ill - otherwise her simulation might just utter "Ugh!"



OK - let's provisionally go with that. So there will be input, I, but it can perfectly well be null.



So at this point the 'theorem' can be

P + null => O

where O is some output - which could be audio, since we have her whole body digitised - which describes her mental state.

The output O would be a set of numbers; they represent a static state of the simulated person, and that state does no work at all.
The audio spoken by the simulant cannot be derived from the output O.
To know the words the simulant spoke, we need to record them through the interface while the program runs.

So, to know whether our simulant behaves consciously, we need to go through every step from state P to state O.
Do you agree with that, and if not, why?

I certainly will go into the rest of your post, but I think this is an essential point. It would be easier to know what you think about this before we continue.
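The point about O being a static snapshot can be made concrete with a toy. In this entirely hypothetical "simulation" (the step function is an arbitrary stand-in for one tick of the program), the final state O is reproducible from P, but the intermediate states - the analogue of the spoken words - exist only in the trace recorded while it runs:

```python
def step(state):
    # arbitrary stand-in for one tick of the brain/body simulation
    return (state * 31 + 7) % 1000

P = 123            # initial digitised state; input held at null
trace = [P]        # the "interface recording" made as the program runs
s = P
for _ in range(50):
    s = step(s)
    trace.append(s)
O = s

# O is fully reproducible from P, since the run is deterministic...
s2 = P
for _ in range(50):
    s2 = step(s2)
assert s2 == O

# ...but O by itself is just one number; the 50 intermediate states
# cannot be read off from it. To "hear" the simulant, record the trace.
print(O, len(trace))
```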
 
I think Penrose's arguments on non-computable consciousness constructed around Gödel's theorem still stand up to scrutiny. And whether or not you buy into Orch-OR as a model of consciousness is another matter entirely.

Searle's Chinese Room is not an infallible argument, but like Penrose's it is a strong one. Lanier's arguments, especially considering his expertise as a computer scientist, carry a lot of weight. Even Stuart Kauffman thinks the classical, computational model is inappropriate and, like Penrose, thinks that consciousness (as well as understanding, morality, aesthetics, etc.) is a "poised state" phenomenon that A) operates at the quantum/classical border, and B) requires new physics.

Modern, industrialized humans have demonstrated the curious propensity to constantly model their physical bodies and their inner selves after whatever technology is most in vogue during a particular era. I see this as one such historical episode. The classical (and/or quantum) computational model is used only because A) it feels comfortable and easier to model, and B) science is a generally conservative enterprise that usually proceeds through baby steps and won't postulate anything particularly bold until it is either forced by a profound demonstration or has explored every nook and cranny of the old paradigm first.

And arguing that consciousness is purely a matter of "the right connections" doesn't stand up to scrutiny in my view either. I think network connections are certainly part of the consciousness puzzle, but it's a borderline faith-based argument, backed up by very limited evidence, to state that complex neural connections = consciousness.
 
I think Penrose's arguments on non-computable consciousness constructed around Gödel's theorem still stand up to scrutiny. And whether or not you buy into Orch-OR as a model of consciousness is another matter entirely.

Lucas - who also worked on the Gödelian argument - felt the same way, and thought the majority of objections attacked a strawman.

The critic may still complain that doing arithmetic in non-non-standard models is still a pretty boring activity; but that is not the point. Gödel's theorem shows that there is conceptual room for creativity, by allowing that to be reasonable is not necessarily to be in accordance with a rule. We can see how it works out with the style of great creative artists. Titian, or Bach, or Shakespeare, develop a style which is peculiarly their own, and which we can learn to recognise. But it is not static. Up to a point they can produce works that are variations on a theme, but beyond that point we begin to criticize, and say that they are painting, composing, or writing, according to a formula; it is the mark of second-rate artists to be content to go on doing just that, but the genius is not content to rest on his laurels, but seeks to go further, and innovate, breaking out of the mould that his previous style was coming to constitute for him. Instead of there just being a formula to which all his work conformed, he produces work which differs in some significant respect from what he had been doing, and this difference is not just a random one, but one which ex post facto we recognise as essentially right, even though it was not required by the previous specification of his style.

Thus, though the Gödelian formula is not a very interesting formula to enunciate, the Gödelian argument argues strongly for creativity, first in ruling out any reductionist account of the mind that would show us to be, au fond, necessarily unoriginal automata, and secondly by proving that the conceptual space exists in which it is intelligible to speak of someone's being creative, without having to hold that he must be either acting at random or else in accordance with an antecedently specifiable rule.
 
I think Penrose's arguments on non-computable consciousness constructed around Gödel's theorem still stand up to scrutiny. And whether or not you buy into Orch-OR as a model of consciousness is another matter entirely.
I don't think it stands up to scrutiny. Gödel's theorem is about certain kinds of arithmetic systems, and there is no reason to believe that humans are consistent or complete.

~~ Paul
 
I think Penrose's arguments on non-computable consciousness constructed around Gödel's theorem still stand up to scrutiny.
Can someone explain, in a concise way, how that works? Preferably with not too much math.
It is one of those arguments that keeps on coming back, and yet nobody seems to have a good enough grasp on it to explain it in simple terms. Admittedly this could be because I am a bit dense, but nobody seems to even try.

It almost looks as if this is a case of "the emperor's new clothes" - wouldn't that be ironic?

So, anyone up for the challenge? Explain Penrose's Gödelian argument on non-computable consciousness in your own words for a non-mathematical audience, and win the prize of eternal gratitude.
 
I think the challenge is most people only have their prior prejudice to go on, and so cannot evaluate the argument or criticisms of it.

I've gone over the basic argument as presented by Lucas a few times, but haven't read Shadows of the Mind yet, so I don't want to present a weaker version of the argument.

As such it's best to go through treatments of it, starting with Lucas's varied papers (scroll down).

eta: Actually here's the list ->

I Gödelian Papers
  1. ``Minds, Machines and Gödel''
  2. ``Satan Stultified: A Rejoinder to Paul Benacerraf''
  3. ``Human and Machine Logic: a Rejoinder'', British Journal for the Philosophy of Science, 19, 1968, pp.155-156.
  4. ``Lucas Against Mechanism: A Rejoinder'', Philosophy, pp.149-151.
  5. ``This Gödel is Killing Me: a Rejoinder'', Philosophia, 6, no.1, March 1976, pp.145-148.
  6. Review of Judson Webb, Mechanism, Mentalism and Metamathematics, in The British Journal for the Philosophy of Science, 33, 1982, pp.441-444.
  7. Criticisms and discussions of the Gödelian argument, based on a list which I distributed at the Turing Conference in Brighton some years ago, with some further additions. In the Proceedings, Machines and Thought, ed. Peter Millican and Andy Clark, Oxford, 1996, Robin Gandy gives a much earlier reference: Emil L. Post, ``Absolutely Unsolvable Problems and Relatively Undecidable Propositions---Account of an Anticipation'', in Martin Davis (ed.), The Undecidable (New York: Raven Press, 1965), pp.340-435, esp. pp.417-24. Chalmers gives a more up-to-date list in his bibliography---which used to be http://www.artsci.wu...s.biblio.4.html but has now moved to Arizona. I am grateful to various correspondents who have helped me to update the list given here, and welcome further items.
  8. ``Minds, Machines and Gödel: A Retrospect'', in P.J.R. Millican and A. Clark, eds., Machines and Thought: The Legacy of Alan Turing, Oxford, 1996, pp.103-124.
  9. The text of Turn Over the Page, a talk I gave on 25/5/96 at a BSPS conference in Oxford.
  10. The text of A(n Over)Simplified Exposition of Gödel's Theorem, a talk I gave on 14/10/97 in King's College, London.
  11. The Implications of Gödel's Theorem, the text of a talk I gave in Manchester in November 1996.
  12. The Implications of Gödel's Theorem, the text of a talk I gave to the Sigma Club in London on February 26, 1998.
  13. A handout for the talk on the Implications of Gödel's Theorem that I gave to the Sigma Club in London on February 26, 1998.
  14. ``Commentary on Turing's `Computing Machinery and Intelligence' '', forthcoming in The Turing Test Sourcebook, to be published by Kluwer in 2005.
  15. A response to a paper by Professor Feferman, forthcoming in a volume edited by Richard Swinburne.
  16. An e-mail from Dr Jeffrey Ketland
  17. An e-mail from Mr Michael Harris
Note: Most critics concentrate their fire on ``Minds, Machines and Gödel'', without looking at the fuller statement in The Freedom of the Will, which includes the rebuttals first published in ``Satan Stultified''. In recent years it has been out of print, but under a new initiative by OUP it is now available again; single copies are printed on a one-off basis. I commend it to those who think there are holes in my original ``Minds, Machines and Gödel''.

A full discussion of the issues raised is now available in Etica e Politica, 2003.

A helpful discussion by P. Madden, aimed at an undergraduate readership at Warwick University, with recommendations for further reading, is now available: The Lucas Debate and Related Issues.
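For readers who want the bare bones of the argument those papers defend, here is a compressed sketch; every step of it is contested somewhere in the literature listed above:

```latex
% Compressed sketch of the Lucas--Penrose line of argument.
% Let $F$ be any consistent formal system strong enough for arithmetic.
\begin{itemize}
  \item G\"odel: there is a sentence $G_F$ (in effect, ``$G_F$ is not
        provable in $F$'') such that $F \nvdash G_F$; yet if $F$ is
        consistent, $G_F$ is true.
  \item Lucas/Penrose: a human mathematician, presented with $F$, can
        ``see'' that $G_F$ is true; so no such $F$ captures human
        mathematical insight.
  \item Penrose's extra step: if no algorithm captures that insight,
        the physics underlying it must be non-computable.
\end{itemize}
% A standard objection (cf.\ Paul's point above): the human only sees
% $G_F$ is true \emph{if} $F$ is consistent, and G\"odel's second
% theorem blocks $F$ from establishing its own consistency.
```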
 
The output O would be a set of numbers; they represent a static state of the simulated person, and that state does no work at all.
The audio spoken by the simulant cannot be derived from the output O.
To know the words the simulant spoke, we need to record them through the interface while the program runs.

So, to know whether our simulant behaves consciously, we need to go through every step from state P to state O.
Do you agree with that, and if not, why?
My argument is not to try to prove the simulant is conscious by experimentation - because after all, this is a TE, so it isn't possible to get the results! I am arguing that if the simulant is indeed conscious as the program runs, some very strange things follow. So, by reductio ad absurdum, I think this is a strong argument that the simulant is not consciously aware.

I certainly will go into the rest of your post, but I think this is an essential point. It would be easier to know what you think about this before we continue.

OK - well if you bear the above in mind, the rest makes sense. You are thinking of a Turing test, but this is not like that.

David
 