Did the computer generate it, or did the program written by the human generate it? Or does it not matter here?

> There is at least one thing that computers generate that we don't understand: neural networks.
> ~~ Paul

Cheers,
Bill
I'm 99.999 percent convinced that brains (and minds, brain-generated or otherwise) can remote view. Can a digital computer?
Cheers,
Bill
I think this enterprise is designed to:

a) Show, by reductio ad absurdum, that a computer simulation of a brain can't be equivalent to the real brain in operation.
b) Argue that the mind therefore can't be purely physical.

Once we establish that the mind can't be purely physical, all ψ phenomena - including remote viewing - become possibilities, but not in the simulation!
David
I'm not saying remote viewing is supernatural (in fact, I think it is "natural"). I think we're on the same page. I'm just saying that we designed computers, networks, and the communication protocols, so we won't learn anything about remote viewing and its unknown mechanism by making analogies to computers.

> Can't computers "remote view" via Wifi? Or do we discount that because we know the mechanism? Wifi would look "supernatural" to someone from 200 years ago...
The machine generated it, running a program written by a human. But then you cannot look at the neural network and understand how it works.

> Did the computer generate it, or did the program written by the human generate it? Or does it not matter here?
Right, but we knew enough about the way central nervous systems work to write artificial neural network programs that simulate them.

> The machine generated it, running a program written by a human. But then you cannot look at the neural network and understand how it works.

~~ Paul
Yes, but nonetheless we don't know 100% how computers work. Or, I should say, we don't know 100% how software works.

> Right, but we knew enough about the way central nervous systems work to write artificial neural network programs that simulate them.
You're just saying the output is not deterministic under those conditions, and we know when that is the case. I'd not categorize that as "not knowing how they work," but I understand your point.

> Yes, but nonetheless we don't know 100% how computers work. Or, I should say, we don't know 100% how software works.
Also, if true random numbers are involved, then computers can produce results whose history cannot be traced.
~~ Paul
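A quick sketch of the distinction being drawn here, in Python (the generator choices and byte count are illustrative): a seeded pseudo-random generator is fully traceable, while an OS entropy source is not.

```python
import os
import random

# A seeded pseudo-random generator is fully traceable: rerunning it
# from the same seed reproduces the exact same history of outputs.
rng_a = random.Random(42)
rng_b = random.Random(42)
print(rng_a.random() == rng_b.random())  # True: same seed, same history

# An OS entropy source has no seed we can replay, so a result computed
# from it has a history that cannot be reconstructed after the fact.
untraceable = os.urandom(8)
print(len(untraceable))  # 8 bytes of non-reproducible randomness
```

If a program's output depends on `os.urandom` (or a hardware RNG), no amount of inspecting the program lets you retrace how a particular result arose.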
Goodness - this is a thought experiment (TE), not a proposal to do it for real. Besides, we would need ethical approval to do it for real :)

> You're just saying the output is not deterministic under those conditions, and we know when that is the case. I'd not categorize that as "not knowing how they work," but I understand your point.
Cheers,
Bill
Yeah, I'd always assumed the black box of neural network results was more about complexity than a genuine mystery.
I don't claim expertise though, and am happy to be corrected on this.
> Yes, but nonetheless we don't know 100% how computers work. Or, I should say, we don't know 100% how software works.
The point is, Gedanken experiments (TE) have their place in science - think of the diagrams in books on Special Relativity of people flashing messages to each other from trains or rocket ships.

> It's a complexity issue; how do you label the artificial neurons and how do you show them? The game "Democracy 3" is effectively based around artificial neurons with some window dressing, but a toy AI is going to have 20-100 of the things, and an AI trying to perform complex tasks using artificial neurons alone will have a lot more. If you poke around with the dev kits for the "Creatures" series of games, you can see that even a simple model of a neural net-based brain and genetic-based lifeform is a nightmare to navigate.
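For concreteness, here is a single artificial neuron of the kind discussed above (all weights and inputs are illustrative): a weighted sum of inputs squashed through a sigmoid. The weights are bare numbers with no labels attached, which is part of why even small networks of these are hard to inspect.

```python
import math

def neuron(inputs, weights, bias):
    # weighted sum of inputs, plus a bias term
    activation = sum(i * w for i, w in zip(inputs, weights)) + bias
    # sigmoid squashing: output always lies in (0, 1)
    return 1.0 / (1.0 + math.exp(-activation))

out = neuron([0.5, 0.2], weights=[1.5, -2.0], bias=0.1)
print(out)
```

A game AI wires dozens or hundreds of these together; nothing in the resulting grid of weights says what any particular number means.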
Let's take the whole body - so she had better not be terminally ill - otherwise her simulation might just utter "Ugh!"
OK - let's provisionally go with that. So there will be input, I, but it can perfectly well be null.
So at this point the 'theorem' can be written:

P + null -> O

where O is some output - which could be audio, since we have her whole body digitised - which describes her mental state.
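One minimal way to read P + null -> O computationally is as a deterministic transition function stepped from the digitised initial state P with no input; the transition rule, state encoding, and step count below are all hypothetical stand-ins.

```python
def step(state, inp=None):
    # stand-in transition rule; a real brain simulation would be
    # unimaginably larger, but the shape is the same
    return [(x * 31 + 7) % 1000 for x in state]

def run(P, steps, I=None):
    state = list(P)
    for _ in range(steps):
        state = step(state, I)
    return state  # O: a static set of numbers, the final state

P = [1, 2, 3]                  # digitised initial state (illustrative)
O = run(P, steps=10, I=None)   # null input throughout
print(O)
```

The run is fully deterministic: the same P and null input always yield the same O.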
I think Penrose's arguments on non-computable consciousness constructed around Gödel's theorem still stand up to scrutiny. And whether or not you buy into Orch-OR as a model of consciousness is another matter entirely.
The critic may still complain that doing arithmetic in non-standard models is still a pretty boring activity; but that is not the point. Gödel's theorem shows that there is conceptual room for creativity, by allowing that to be reasonable is not necessarily to be in accordance with a rule. We can see how it works out with the style of great creative artists. Titian, or Bach, or Shakespeare, develop a style which is peculiarly their own, and which we can learn to recognise. But it is not static. Up to a point they can produce works that are variations on a theme, but beyond that point we begin to criticize, and say that they are painting, composing, or writing, according to a formula; it is the mark of second-rate artists to be content to go on doing just that, but the genius is not content to rest on his laurels, but seeks to go further, and innovate, breaking out of the mould that his previous style was coming to constitute for him. Instead of there just being a formula to which all his work conformed, he produces work which differs in some significant respect from what he had been doing, and this difference is not just a random one, but one which ex post facto we recognise as essentially right, even though it was not required by the previous specification of his style.
Thus, though the Gödelian formula is not a very interesting formula to enunciate, the Gödelian argument argues strongly for creativity, first in ruling out any reductionist account of the mind that would show us to be, au fond, necessarily unoriginal automata, and secondly by proving that the conceptual space exists in which it is intelligible to speak of someone's being creative, without having to hold that he must be either acting at random or else in accordance with an antecedently specifiable rule.
I meant that we don't know how 100% of software works, not that we don't know 100% of how any software works. I think that was clear.

> So 1 + 1 = 7?
I don't think it stands up to scrutiny. Gödel is about certain kinds of arithmetic. And there is no reason to believe that humans are consistent or complete.

> I think Penrose's arguments on non-computable consciousness constructed around Gödel's theorem still stand up to scrutiny. And whether or not you buy into Orch-OR as a model of consciousness is another matter entirely.
Can someone explain, in a concise way, how that works? Preferably with not too much math.

> I think Penrose's arguments on non-computable consciousness constructed around Gödel's theorem still stand up to scrutiny.
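For readers wanting the concise version asked for here, this is the standard textbook sketch of the Gödelian move being debated (a summary, not an endorsement of either side):

```latex
% For any consistent formal system $F$ strong enough for arithmetic,
% one can construct a sentence $G_F$ that in effect says of itself
% that it is unprovable in $F$:
\[
  G_F \;\leftrightarrow\; \neg\,\mathrm{Prov}_F(\ulcorner G_F \urcorner)
\]
% If $F$ is consistent, then $F \nvdash G_F$, yet $G_F$ is true.
% Lucas/Penrose: a human can "see" that $G_F$ is true, so the mind
% cannot be (equivalent to) any such $F$. Critics reply that this
% presupposes we know $F$ is consistent, which, by G\"odel's second
% theorem, $F$ itself cannot establish:
\[
  F \nvdash \mathrm{Con}(F)
\]
```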
I. Gödelian Papers

Note: Most critics concentrate their fire on ``Minds, Machines and Gödel'', without looking at the fuller statement in The Freedom of the Will, which includes the rebuttals first published in ``Satan Stultified''. In recent years it has been out of print, but under a new initiative by OUP it is now available again; single copies are printed on a one-off basis. I commend it to those who think there are holes in my original ``Minds, Machines and Gödel''.
- ``Minds, Machines and Gödel''
- ``Satan Stultified: A Rejoinder to Paul Benacerraf''
- ``Human and Machine Logic: a Rejoinder'', British Journal for the Philosophy of Science, 19, 1968, pp.155-156.
- ``Lucas Against Mechanism: A Rejoinder'', Philosophy, pp.149-151.
- ``This Gödel is Killing Me: a Rejoinder'', Philosophia, 6, no.1, March 1976, pp.145-148.
- Review of Judson Webb, Mechanism, Mentalism and Metamathematics, in The British Journal for the Philosophy of Science, 33, 1982, pp.441-444.
- Criticisms and discussions of the Gödelian argument, based on a list which I distributed at the Turing Conference in Brighton some years ago, with some further additions. In the Proceedings, Machines and Thought, ed. Peter Millican and Andy Clark, Oxford, 1996, Robin Gandy gives a much earlier reference: Emil L. Post, ``Absolutely Unsolvable Problems and Relatively Undecidable Propositions---Account of an Anticipation'', in Martin Davis, ed., The Undecidable (New York: Raven Press, 1965), pp.340-435, esp. pp.417-424. Chalmers gives a more up-to-date list in his bibliography, which used to be at http://www.artsci.wu...s.biblio.4.html but has now moved to Arizona. I am grateful to various correspondents who have helped me to update the list given here, and welcome further items.
- ``Minds, Machines and Gödel: A Retrospect'', in P.J.R. Millican and A. Clark, eds., Machines and Thought: The Legacy of Alan Turing, Oxford, 1996, pp.103-124.
- The text of Turn Over the Page, a talk I gave on 25/5/96 at a BSPS conference in Oxford.
- The text of A(n Over)Simplified Exposition of Gödel's Theorem, a talk I gave on 14/10/97 in King's College, London.
- The Implications of Gödel's Theorem, the text of a talk I gave in Manchester in November 1996.
- The Implications of Gödel's Theorem, the text of a talk I gave to the Sigma Club in London on February 26, 1998.
- A handout for the talk on the Implications of Gödel's Theorem that I gave to the Sigma Club in London on February 26, 1998.
- ``Commentary on Turing's `Computing Machinery and Intelligence' '', forthcoming in The Turing Test Sourcebook, to be published by Kluwer in 2005.
- A response to a paper by Professor Feferman, forthcoming in a volume edited by Richard Swinburne.
- An E-mail from Dr Jeffrey Ketland
- An E-mail from Mr Michael Harris
A full discussion of the issues raised is now available in Etica e Politica, 2003.
A helpful discussion by P. Madden, aimed at an undergraduate readership at Warwick University, with recommendations for further reading, is now available: The Lucas Debate and Related Issues.
> My argument is not to try to prove the simulant is conscious by experimentation - because after all, this is a TE - so it isn't possible to get the results! I am arguing that if the simulant is indeed conscious as the program runs, some very strange things follow. So by reductio ad absurdum, I think this is a strong argument that the simulant is not consciously aware.

The output O would be a set of numbers; they represent a static state of the simulated person, and that state does no work at all.
The audio that was spoken by the simulant cannot be derived from the output O.
To know the words the simulant spoke, we need to record them through the interface while the program runs.
So, to know whether our simulant behaves consciously, we need to go through every step from state P to state O.
Do you agree with that, and if not why?
I certainly will go into the rest of your post, but I think this is an essential point. It would be easier to know what you think about this before we continue.
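The point in this last post can be sketched in code: a toy run in which the simulant's utterances exist only transiently during execution, so they must be captured through the interface as the program runs; every name and value here is illustrative.

```python
def step(state):
    # stand-in transition rule
    new_state = (state * 3 + 1) % 97
    utterance = f"word-{new_state % 7}"  # stand-in for simulated audio
    return new_state, utterance

def run(P, steps, recorder=None):
    state = P
    for _ in range(steps):
        state, utterance = step(state)
        if recorder is not None:
            recorder.append(utterance)   # captured via the interface
    return state                         # O: static final state only

transcript = []
O = run(P=5, steps=6, recorder=transcript)
O_again = run(P=5, steps=6)              # same O, but the words are gone
print(O, O == O_again, transcript)
```

The final state O is identical whether or not anyone recorded; nothing in O encodes what was "said" along the way, which is the claim about having to follow every step from P to O.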