Is the Brain analogous to a Digital Computer? [Discussion]

  • Thread starter Sciborg_S_Patel
Sciborg_S_Patel,

I think some of your quotes are getting to the heart of this problem - if consciousness is just some sort of complexity, then wouldn't we expect all sorts of systems to 'wake up'? For example:

Global financial networks.

Advanced maths software.

All kinds of biological processes.

Immune systems.

Plants.

Gut flora.

etc.


Bart would, I think, argue that only certain kinds of complex systems - those evolved to stay alive and reproduce (or copies inside a computer) - would be conscious, but that would certainly suggest that individual cells (which seem to become more complex by the day as research is done), might be conscious, and that evolution itself would be supremely conscious!

Given that, why would a materialist baulk at the concept of ID - evolution becoming conscious to perform its task more efficiently!

The problem is that as you follow that line of thinking, consciousness becomes just a name for particular types of system - rather like the word 'complex'. We can argue as to whether Pluto is a planet, or a minor planet, or an asteroid, but the discussion is really just semantic. Trying to define/explain consciousness in such a semantic way only makes real sense to people who think it 'doesn't exist' or 'isn't a big deal' or 'is a user illusion'.

To be fair to materialists, I think they find themselves in a bind here. They start off defending what seems like plain old common sense, and end up almost denying that we have any inner life at all, or attributing potential consciousness to all sorts of objects - like oak trees - that seem far removed from the common sense ideas that they started out with!

David
 
Until we figure out how consciousness actually works there's going to be a lot of brainstorming. I don't think we need to get too attached to any particular storm. These are just avenues people are exploring while trying to figure it out. Once consciousness of one type is licked - if ever - it should be easier to figure out what particularities exist in that type which may give a hint of what to look for in others.
 
No more so than preserving the number which was to be factorised! I dealt with this point already!
You dealt with a point you chose, not the one I tried to make.
The difference between your factorization program and P is clear.
You do not have to preserve the number to be factorized. We know that every number larger than one that is not itself prime can be factorized.
So we know that N can be factorized, no matter what number N is, so we do not have to preserve it. That is what I understand 'eternal' to mean.
It is based on relations between numbers that are universal and eternal. These relations are not going to change and can be discovered and rediscovered for eternity.
Is the Fibonacci series not 'discovered' by the sunflower? Did the cicada not 'discover' prime numbers? Crystals? Waves? Nature discovered these relations a long time ago, and will remember or rediscover them when we are long gone.
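The factorization claim is easy to make concrete. A minimal trial-division sketch (illustrative, not efficient) - the point being that the relation holds for any N you feed it, so nothing about N itself needs preserving:

```python
def factorize(n):
    """Return the prime factorization of n (n > 1) by trial division."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:   # divide out each prime as many times as it occurs
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)   # whatever remains is itself prime
    return factors

print(factorize(360))  # → [2, 2, 2, 3, 3, 5]
```

Forget N, and the procedure still waits to be rediscovered; that is the sense in which the relation is eternal while any particular N is not.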

This is in contrast with P, of which we cannot even say exactly what it does if we look at it mathematically. Therefore we cannot even tell when it ends; it keeps on going as long as we want to keep it running. We arbitrarily chose to run it for half an hour because you think that proves a point.

Because we cannot derive from its code what it actually does, we cannot prove it does the same unknowable thing for any slightly different form of P; in fact, we must assume that P is unique in doing what it does.

P will, of course, consist of a gazillion elements that are themselves performing eternal math, but the unique way they start out acting on each other is not a (re)discoverable set of relations.
Of course the initial state of P can be recovered - it can be copied before the simulation is started!
Is that not preservation? How is this different from preserving the whole computer?
In the same sense that you have to preserve Pythagoras' theorem if you want to check it 5 billion years hence.
No, because Pythagoras' theorem is based on the relation between the sides of right-angled triangles. We, or the cockroaches in a few million years, will always be able to (re)discover it. P, on the other hand, is lost forever.
Yes P is bigger and messier, but is this really relevant - remember we are talking about a TE!
It is fundamentally different on several levels, so yes it matters a great deal.
Yes - and this is the whole point! You are not only saying that the simulation will mimic the behaviour of the brain, you are claiming that the relevant emotions will actually be felt! It is the idea that running the simulation on the computer will actually generate experiences that we disagree on!
But this TE assumes that emotions and experiences are behaviour of the brain. You not agreeing with that is well noted and respected, but does not follow from this TE.
Note that if the simulation was of a mini-universe, there would be no expectation that it would do anything but compute the evolution of the structure!
And it is the evolution from a brain state to the next, and the next ..., that results in conscious behaviour.
We aren't really discussing memories, but actual experiences - live.
Not so sure about the difference anymore, thanks to this TE and its various previous incarnations.
I don't see what is elusive about the notion of an eternal truth - isn't 2+2=4 eternal? Isn't any true arithmetic statement eternal? I was trying to show you how a computer program can be turned into an arithmetical statement (albeit rather large :) but you don't want to pursue that - we probably have to differ.

David
The problem is not that I won't pursue this issue; it is all I have been doing so far. The problem is that we do not agree on what it means.
 
You dealt with a point you chose, not the one I tried to make.
The difference between your factorization program and P is clear.
You do not have to preserve the number to be factorized. We know that every number larger than one that is not itself prime can be factorized.
So we know that N can be factorized, no matter what number N is, so we do not have to preserve it. That is what I understand 'eternal' to mean.
Well I can't see much difference - but please remember that there are well-established ways of reducing a computer program to an ARITHMETICAL statement of extreme complexity. One way is to convert the program to a Turing machine program and then follow Turing's procedure. The end result will be a humongous arithmetical expression = O.

See Appendix A of Roger Penrose's "Shadows of the Mind".

Furthermore, you have to ask yourself if it is reasonable to claim that the consciousness of the symbiont depends in any way on the fact that it might be hard to remember the simulation program over time!

It is based on relations between numbers that are universal and eternal. These relations are not going to change and can be discovered and rediscovered for eternity.

Is the Fibonacci series not 'discovered' by the sunflower? Did the cicada not 'discover' prime numbers? Crystals? Waves? Nature discovered these relations a long time ago, and will remember or rediscover them when we are long gone.
Do you think that any ARITHMETICAL statement cannot be rediscovered?

This is in contrast with P, of which we cannot even say exactly what it does if we look at it mathematically. Therefore we cannot even tell when it ends; it keeps on going as long as we want to keep it running. We arbitrarily chose to run it for half an hour because you think that proves a point.
Well, to the extent that it can be reduced to an arithmetic statement, we can tell what it does, but not exactly what it represents. Again, though, I have to ask you if such a distinction could possibly explain why one thing is conscious and another not!

Because we cannot derive from its code what it actually does, we cannot prove it does the same unknowable thing for any slightly different form of P; in fact, we must assume that P is unique in doing what it does.

P will, of course, consist of a gazillion elements that are themselves performing eternal math, but the unique way they start out acting on each other is not a (re)discoverable set of relations.
Is that not preservation? How is this different from preserving the whole computer?
Well the whole computer is made of things that might corrode over time, but the entire contents of a computer can be concatenated to give a single (rather large!) integer N. Does your theory of consciousness really depend on the fact that N is rather large?
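The concatenation David refers to is trivial to demonstrate. A minimal sketch, assuming the machine's state is available as a byte string (the snapshot below is a placeholder for illustration, not a real memory dump):

```python
# Any finite byte string - in principle, a snapshot of a computer's
# entire memory and disk - maps reversibly to one integer by reading
# the bytes as digits in base 256.
state = b"...snapshot of RAM and disk..."   # placeholder bytes for illustration
N = int.from_bytes(state, "big")            # the single (rather large!) integer
restored = N.to_bytes((N.bit_length() + 7) // 8, "big")
assert restored == state                    # nothing is lost in the round trip
```

(One caveat for the pedantic: leading zero bytes would be dropped by this naive encoding, which is why practical schemes prefix the length; the placeholder above avoids that case.)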

And it is the evolution from a brain state to the next, and the next ..., that results in conscious behaviour.

Not so sure about the difference anymore, thanks to this TE and its various previous incarnations.
The problem is not that I won't pursue this issue; it is all I have been doing so far. The problem is that we do not agree on what it means.

Clearly we don't agree, but the question is whether we can make any further progress. I think whenever one thinks of a TE, it is vital to decide which aspects are important. I mean, does the sheer size of N (as defined above) make the difference between an inert process and one that is conscious?

It might be well worth reading the link put up by Sciborg_s_Patel. Clearly even materialists have trouble with consciousness that is independent of its substrate:

http://rationallyspeaking.blogspot.com/2012/11/consciousness-and-internet.html

David
 
You Can't Argue with a Zombie

I started with the usual sort of brain-replacement yarn. Your neurons are replaced one-by-one with silicon devices. That sort of thing. Young zombies-in-training assume that nothing fundamental will have changed if they are turned to silicon.

We then transferred our brains into software. Each neuron was now replaced by a software expression and they all connected together functionally in the same way as they did when they were mushy.

The zombies still felt at one with this proposed zombie-on-a-disk. It is worth pausing for a moment and noting that accepting one's ontological equivalence to some data on a disk does not necessarily banish the demons of vitalism. Zombies might still imagine their data interacting with biological humans (as we see in the Star Trek character "Data"). They might still turn to the natural world for confirmation, relying on that old ritual of vitalism, the Turing Test.

Harder core zombies are ready to leave all that behind and imagine living on a disk in which they only interact with other minds and environmental elements that also exist solely as software. It is here that we must ask a question that seems obvious to me, but seems to shock zombies: What makes this software exist? What makes the computer that it runs on exist?

There can be only one proper basis to judge the existence of computers and software. We should be able to confirm their existence empirically, using the same scientific method we use to study the rest of the natural world. As it turns out, we cannot do that, for reasons that I will make clear later in this paper. We are the only measure of the existence of computers. So the assertion that computers and software exist is a stealthy conveyor of rampant vitalism and mystical dualism.
 
Well I can't see much difference - but please remember that there are well established ways of reducing a computer program to an ARITHMETICAL statement of extreme complexity. One way is to convert the program to a Turing machine program and then follow Turing's procedure. The end result will be a humongous arithmetical expresion = O.
How is that relevant?
Furthermore, you have to ask yourself if it is reasonable to claim that the consciousness of the symbiont depends in any way on the fact that it might be hard to remember the simulation program over time!
No, but you are misrepresenting what I said.
We were talking about how something is an eternal theorem. Having to conserve P means it is not that; do you agree?
Do you think that any ARITHMETICAL statement cannot be rediscovered?
Well, there is a difference. No matter what, Pythagoras' theorem is going to be rediscovered because it describes the relations between the sides of right-angled triangles; it is these relations that give it that eternal quality.

P, on the other hand, describes relations that accumulated through random processes; these cannot be rediscovered in the same way.
We might say they are rediscoverable through chance, but in a finite universe that chance will be almost zero.

This is in contrast with P, of which we cannot even say exactly what it does if we look at it mathematically. Therefore we cannot even tell when it ends; it keeps on going as long as we want to keep it running. We arbitrarily chose to run it for half an hour because you think that proves a point.
Well to the extent that it can be reduced to an arithmetic statement, we can tell what it does, but not exactly what it represents. Again though, I have to ask you if such a distinction could possibly explain why one thing is conscious and another not!
It plays a part IMO, but that is not the point.
If we cannot say anything more about P than that it is an arithmetic statement of arbitrary length, then it can certainly have no meaning.
Therefore it cannot be an abstract fact that always was and always will be.

Well the whole computer is made of things that might corrode over time, but the entire contents of a computer can be concatenated to give a single (rather large!) integer N. Does your theory of consciousness really depend on the fact that N is rather large?
No, what gives you that idea? We never got to any theory of consciousness; that is not even the scope of this TE.
Clearly we don't agree, but the question is whether we can make any further progress. I think whenever one thinks of a TE, it is vital to decide which aspects are important. I mean, does the sheer size of N (as defined above) make the difference between an inert process and one that is conscious?
We are not defining the size of P. We are defining what sort of calculation P actually is.
That is important since your whole argument is based on it.

I certainly think that not all calculation is created equal; P is definitely not some timeless theorem.

We could make further progress. But in deciding what is essential or not, it is important to do this with an open mind, and not with the goal of arriving at a preconceived conclusion.
 
One Half A Manifesto, by Jaron Lanier

"Cybernetic eschatology shares with some of history's worst ideologies a doctrine of historical predestination. There is nothing more gray, stultifying, or dreary than a life lived inside the confines of a theory. Let us hope that the cybernetic totalists learn humility before their day in the sun arrives."


It's interesting how many different facets of materialist evangelism get called out as new versions of religion. Sheldrake makes the same criticism - of turning metaphor into catechism - in his debate with Blakemore, but one expects a student of Life to make that distinction. As one of the digital pioneers, specifically one involved with virtual reality, Lanier brings criticisms that carry a certain edge all their own:

"I hope no one will think I'm equating Cybernetics and what I'm calling Cybernetic Totalism. The distance between recognizing a great metaphor and treating it as the only metaphor is the same as the distance between humble science and dogmatic religion.

Here is a partial roster of the component beliefs of cybernetic totalism:

1) That cybernetic patterns of information provide the ultimate and best way to understand reality.

2) That people are no more than cybernetic patterns.

3) That subjective experience either doesn't exist, or is unimportant because it is some sort of ambient or peripheral effect.

4) That what Darwin described in biology, or something like it, is in fact also the singular, superior description of all creativity and culture.

5) That qualitative as well as quantitative aspects of information systems will be accelerated by Moore's Law.

And finally, the most dramatic:

6) That biology and physics will merge with computer science (becoming biotechnology and nanotechnology), resulting in life and the physical universe becoming mercurial; achieving the supposed nature of computer software. Furthermore, all of this will happen very soon! Since computers are improving so quickly, they will overwhelm all the other cybernetic processes, like people, and will fundamentally change the nature of what's going on in the familiar neighborhood of Earth at some moment when a new "criticality" is achieved - maybe in about the year 2020. To be a human after that moment will be either impossible or something very different than we now can know."
 
And finally, the most dramatic:

6) That biology and physics will merge with computer science (becoming biotechnology and nanotechnology), resulting in life and the physical universe becoming mercurial; achieving the supposed nature of computer software. Furthermore, all of this will happen very soon! Since computers are improving so quickly, they will overwhelm all the other cybernetic processes, like people, and will fundamentally change the nature of what's going on in the familiar neighborhood of Earth at some moment when a new "criticality" is achieved - maybe in about the year 2020. To be a human after that moment will be either impossible or something very different than we now can know."

This is what makes me smile, because while computer power has become much more plentiful, and memories much larger, than when I was a student, nobody has come up with a decent idea of how to create an entity that is conscious. Back in Turing's time, people thought Artificial Intelligence (AI) would arrive as soon as computers became a bit more powerful. Machines back then ran at clock rates of much less than 1 MHz - thousands of times slower than a PC. We don't just have PCs, we have supercomputers with thousands of PC chips operating in parallel, but we still don't have AI, or AC (artificial consciousness). The crazy thing is that large numbers of people - like Bart - seem to believe that computers must be able to be conscious, given the right program, despite the fact that:

1) After enormous research, nobody claims to have produced such a system.

2) There is no way to estimate just how much more computer power might be required (except perhaps to just base the value on numbers of neurons or synapses in the brain).

3) Computers simply don't do anything other than compute (and generate a little heat). Anything else that they do, is achieved by sending signals to other hardware - such as the sound card, or a spacecraft's engine. Despite this, there is the belief that computers running certain programs would be conscious in addition to performing their computations!

How many more decades of exponential increase in computer power have to go by before this myth can be buried?

David
 
Well I can't see much difference - but please remember that there are well-established ways of reducing a computer program to an ARITHMETICAL statement of extreme complexity. One way is to convert the program to a Turing machine program and then follow Turing's procedure. The end result will be a humongous arithmetical expression = O.

How is that relevant?

Well sorry, if you need to ask this question, you haven't been following my argument at all! The relevance is that it means that you are ultimately attributing consciousness to a very large arithmetical truth!

David
 
Well sorry, if you need to ask this question, you haven't been following my argument at all! The relevance is that it means that you are ultimately attributing consciousness to a very large arithmetical truth!

David
Well, if you do not know why I think this is irrelevant, you have completely ignored all the arguments I brought up against the eternal quality of P.
 
David Chalmers: "Simulation and the Singularity"

Chalmers makes a good point that simulated evolution is probably the best possible path to conscious AIs*, assuming you believe in such things. (I don't, for reasons given by Lanier above.)

Nevertheless the idea Chalmers posits - evolution in a VR space - relates to the discussion between Bart V and David Bailey. If nothing else, anyone working on their own sci-fi story might find the presentation to be filled with inspirational ideas. :)

*Note that this is at least partially reliant on Chalmers' division of the Hard and Easy problems of consciousness. As such, one might want to consider Lowe's There Is No Easy Problem of Consciousness.
 
Amusing retrospective:

Artificial Intelligence Meets Natural Stupidity (1981)

...By now, ‘GPS’ is a colorless term denoting a particularly stupid program to solve puzzles. But it originally meant ‘General Problem Solver’, which caused everybody a lot of needless excitement and distraction...

...As AI progresses (at least in terms of money spent), this malady gets worse. We have lived so long with the conviction that robots are possible, even just around the corner, that we can’t help hastening their arrival with magic incantations. Winograd...explored some of the complexity of language in sophisticated detail; and now everyone takes ‘natural-language interfaces’ for granted, though none has been written. Charniak...pointed out some approaches to understanding stories, and now the OWL interpreter includes a ‘story-understanding module’. (And, God help us, a top-level ‘ego loop.’)...

Most AI workers are responsible people who are aware of the pitfalls of a difficult field and produce good work in spite of them. However, to say anything good about anyone is beyond the scope of this paper.
 
I've always wondered what accounts for functional people with severe hydrocephalus in the purely mechanistic/functionalist/complexity paradigm. I've searched and found only vague innuendo that, "Wow! The brain is really good at reassigning responsibility!" Well yes, of course, to a degree. Pribram's research into the holonomic brain (amongst other neurological research) made that rather obvious a long time ago, but it hints at a whole number of other mysteries. It basically points to the brain being A) inherently non-local, B) able to function absent "complexity" or a "critical number of connections and/or parts".

One could argue that the brain is a "quantum computer", but that doesn't do away with the issue because the deeper implications of quantum theory are far from understood, and many people still maintain that quantum effects play no significant role in consciousness. Certain interpretations of quantum theory open up all kinds of provocative avenues regarding consciousness that go well beyond any and all notions of classical/traditional machine/computational reality. If we wish to redefine machines and computers to fit with whatever new paradigm emerges, if/when it arrives, well then the computational analogy no longer holds because one is no longer talking about remotely the same thing as before. I found it funny that Sean Carroll at the NDE debate basically brushed the mystery of quantum theory foundations under the rug, saying "We know so much more than we did 70-80 years ago. We therefore understand QM at a foundational level. No more mystery here". That was such a load. Yeah, the equations have been hammered out and we know how to apply it to technology, but we have little idea how to yet apply it to the human being, or on a cosmological scale for that matter. That statement of his was highly misleading and I found it highly annoying, as I see it as an obligation of any scientist regardless of persuasion to be honest with a lay audience.

Yes, the brain has machine-like aspects, but trying to cram the mind into a computational (i.e. formal) framework is like insinuating all of reality is Newtonian, when in fact that classical paradigm breaks down in extreme situations (e.g. at the speed of light, at the quantum level, at cosmological scales). The history of science reveals an onion-like quality to almost every system in the Universe, yet reductionists/mechanists (I prefer these terms to "materialist", because matter is as mysterious as an immaterial thing when you really think about it in depth) keep insisting "No, the mind is the exception." Well, I doubt that quite a bit. We thought we had it all figured out at the beginning of the 20th century as well. You hear a number of scientists and orthodox fundamentalists make those grandiose statements quite often, and history demonstrates again and again the silliness of such a stance. History also shows that we habitually compare ourselves to whatever technology is most in vogue at any given time. Clocks, steam engines, now computers.

http://www.rifters.com/real/articles/Science_No-Brain.pdf
 
The End of Human Specialness

Decay in the belief in self is driven not by technology, but by the culture of technologists, especially the recent designs of antihuman software like Facebook, which almost everyone is suddenly living their lives through. Such designs suggest that information is a free-standing substance, independent of human experience or perspective. As a result, the role of each human shifts from being a "special" entity to being a component of an emerging global computer.


This shift has palpable consequences. For one thing, power accrues to the proprietors of the central nodes on the global computer. There are various types of central nodes, including the servers of Silicon Valley companies devoted to searching or social-networking, computers that empower impenetrable high finance (like hedge funds and high-frequency trading), and state-security computers. Those who are not themselves close to a central node find their own cognition gradually turning into a commodity. Someone who used to be able to sell commercial illustrations now must give them away, for instance, so that a third party can make money from advertising. Students turn to Wikipedia, and often don't notice that the acceptance of a single, collective version of reality has the effect of eroding their personhood.
 
That is a superb critique of AI (although perhaps the author didn't think of it that way!)

Even though it is a little dated, I would recommend anyone who thinks AI is feasible, or already exists in some sense, to read that paper. As some of you will know, it was the failure of AI that first set me wondering about materialism.

David
 
I think I forgot to post this paper by Searle:

Is the Brain a Digital Computer?

1. On the standard textbook definition, computation is defined syntactically in terms of symbol manipulation.

2. But syntax and symbols are not defined in terms of physics. Though symbol tokens are always physical tokens, "symbol" and "same symbol" are not defined in terms of physical features. Syntax, in short, is not intrinsic to physics.

3. This has the consequence that computation is not discovered in the physics, it is assigned to it. Certain physical phenomena are assigned or used or programmed or interpreted syntactically. Syntax and symbols are observer relative.

4. It follows that you could not discover that the brain or anything else was intrinsically a digital computer, although you could assign a computational interpretation to it as you could to anything else. The point is not that the claim "The brain is a digital computer" is false. Rather it does not get up to the level of falsehood. It does not have a clear sense. You will have misunderstood my account if you think that I am arguing that it is simply false that the brain is a digital computer. The question "Is the brain a digital computer?" is as ill defined as the questions "Is it an abacus?", "Is it a book?", or "Is it a set of symbols?", "Is it a set of mathematical formulae?"

5. Some physical systems facilitate the computational use much better than others. That is why we build, program, and use them. In such cases we are the homunculus in the system interpreting the physics in both syntactical and semantic terms.

6. But the causal explanations we then give do not cite causal properties different from the physics of the implementation and the intentionality of the homunculus.

7. The standard, though tacit, way out of this is to commit the homunculus fallacy. The homunculus fallacy is endemic to computational models of cognition and cannot be removed by the standard recursive decomposition arguments. They are addressed to a different question.

8. We cannot avoid the foregoing results by supposing that the brain is doing "information processing". The brain, as far as its intrinsic operations are concerned, does no information processing. It is a specific biological organ and its specific neurobiological processes cause specific forms of intentionality. In the brain, intrinsically, there are neurobiological processes and sometimes they cause consciousness. But that is the end of the story.
 
Point 8 suddenly jumps to "information processing." What is his definition of information processing that excludes such things as adding two numbers?

~~ Paul

It's been awhile since I read the paper. Will have to get back to you on this.

=-=-=

Can't recall if this was posted here:

The Mystery of Go, the Ancient Game That Computers Still Can’t Win

The trouble is that identifying Go moves that deserve attention is often a mysterious process. “You’ll be looking at the board and just know,” Redmond told me, as we stood in front of the projector screen watching Crazy Stone take back Nomitan’s initial lead. “It’s something subconscious, that you train through years and years of playing. I’ll see a move and be sure it’s the right one, but won’t be able to tell you exactly how I know. I just see it.”

Similarly inscrutable is the process of evaluating a particular board configuration. In chess, there are some obvious rules. If, ten moves down the line, one side is missing a knight and the other isn’t, generally it’s clear who’s ahead. Not so in Go, where there’s no easy way to prove why Black’s moyo is large but vulnerable, and White has bad aji. Such things may be obvious to an expert player, but without a good way to quantify them, they will be invisible to computers. And if there’s no good way to evaluate intermediate game positions, an alpha-beta algorithm that engages in global board searches has no way of deciding which move leads to the best outcome.

According to University of Sydney cognitive scientist and complex systems theorist Michael Harré, professional Go players behave in ways that are incredibly hard to predict. In a recent study, Harré analyzed Go players of various strengths, focusing on the predictability of their moves given a specific local configuration of stones. “The result was totally unexpected,” he says. “Moves became steadily more predictable until players reached near-professional level. But at that point, moves started getting less predictable, and we don’t know why. Our best guess is that information from the rest of the board started influencing decision-making in a unique way.”
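The alpha-beta search the article mentions is worth seeing concretely. A minimal sketch over a hand-built game tree - the leaf numbers stand in for the board-evaluation function the article says Go lacks (toy values, purely illustrative):

```python
def alphabeta(node, alpha, beta, maximizing):
    """Minimax with alpha-beta pruning over a nested-tuple game tree.

    A leaf is a number (the static evaluation); an internal node is a
    tuple of child positions. Without a trustworthy leaf evaluation -
    the hard part in Go - the search has nothing meaningful to propagate.
    """
    if not isinstance(node, tuple):          # leaf: return its evaluation
        return node
    if maximizing:
        best = float("-inf")
        for child in node:
            best = max(best, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, best)
            if alpha >= beta:                # opponent will avoid this line
                break
        return best
    else:
        best = float("inf")
        for child in node:
            best = min(best, alphabeta(child, alpha, beta, True))
            beta = min(beta, best)
            if alpha >= beta:                # we will avoid this line
                break
        return best

# Classic textbook tree: three replies, each with two leaf evaluations.
tree = ((3, 5), (6, 9), (1, 2))
print(alphabeta(tree, float("-inf"), float("inf"), True))  # → 6
```

Note how the third branch is pruned after its first leaf: once a reply worth only 1 is found, no further search can make that branch attractive. That logic is sound in any game; the problem in Go is that nobody knows how to compute the leaf numbers.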
 
1) Yale Compsci Prof Gelernter offers a refutation to the idea of mind as computer program in "The Closing of the Scientific Mind".

There is a still deeper problem with computationalism. Mainstream computationalists treat the mind as if its purpose were merely to act and not to be. But the mind is for doing and being. Computers are machines, and idle machines are wasted. That is not true of your mind. Your mind might be wholly quiet, doing (“computing”) nothing; yet you might be feeling miserable or exalted, or awestruck by the beauty of the object in front of you, or inspired or resolute—and such moments might be the center of your mental life. Or you might merely be conscious. “I cannot see what flowers are at my feet,/Nor what soft incense hangs upon the boughs….Darkling I listen….” That was drafted by the computer known as John Keats.

Emotions in particular are not actions but states of being. And emotions are central to your mental life and can shape your behavior by allowing you to compare alternatives to determine which feels best. Jane Austen, Persuasion: “He walked to the window to recollect himself, and feel how he ought to behave.” Henry James, The Ambassadors: The heroine tells the hero, “no one feels so much as you. No—not any one.” She means that no one is so precise, penetrating, and sympathetic an observer.

Computationalists cannot account for emotion. It fits as badly as consciousness into the mind-as-software scheme.

2) The End Is A.I.: The Singularity Is Sci-Fi's Faith-Based Initiative

In 1993, Vernor Vinge wrote a paper about the end of the world.

“Within thirty years, we will have the technological means to create superhuman intelligence,” writes Vinge. “Shortly after, the human era will be ended.”

At the time, Vinge was something of a double threat—a computer scientist at San Diego State University, as well as an acclaimed science fiction writer (though his Hugo awards would come later). That last part is important. Because the paper, written for a NASA symposium, reads like a brilliant mix of riveting science fiction, and secular prophecy.

“The Coming Technological Singularity” outlines a reckoning to come, when ever-faster gains in processing power will blow right past artificial intelligence—systems with human-like cognition and sentience—and give rise to hyper-intelligent machines.

“From the human point of view this change will be a throwing away of all the previous rules, perhaps in the blink of an eye, an exponential runaway beyond any hope of control. Developments that before were thought might only happen in ‘a million years’ (if ever) will likely happen in the next century.”

This is what Vinge dubbed the Singularity, a point in our collective future that will be utterly, and unknowably transformed by technology’s rapid pace. The Singularity—which Vinge explores in depth, but humbly sources back to the pioneering mathematician John von Neumann—is the futurist’s equivalent of a black hole, describing the way in which progress itself will continue to speed up, moving more quickly the closer it gets to the dawn of machine super intelligence. Once artificial intelligence (AI) is accomplished, the global transformation could take years, or mere hours. Notably, Vinge cites a SF short story by Greg Bear as an example of the latter outcome, like a prophet bolstering his argument for the coming end-times with passages from scripture.
 
Is the Brain analogous to a Digital Computer?

I think it would be more to the point to ask "Is the brain a conscious computer?"

The answer is no. A computer just executes a program; meaning is irrelevant at the level of the program. If the brain is a conscious computer, then there is no reason to believe one thought justifies another, and so no justification for believing anything - including that the brain is a conscious computer. Also, Alan Turing accepted the statistical evidence that humans have ESP, and believed a computer would not have ESP.

http://sites.google.com/site/chs4o8pt/skeptical_fallacies#skeptical_fallacies_computer
Feser shows that materialism cannot explain our ability to reason:
  • Materialism says that thinking is ultimately a mechanical process. Like a computer running a program, thought is a transition from one physical state to another caused by known laws of physics.
  • Such a transition occurs due to physical laws not due to any inherent meaning in the physical states.
  • But a "thought can serve as a rational justification for another only by virtue of" its "meaning"....
  • "If materialism is true, ... there is nothing about our thought processes that can make one thought a rational justification for another".
  • "If materialism is true none of our thoughts is ever rationally justified."
  • "This includes the thoughts of materialists themselves."
  • "If materialism is true it cannot be rationally justified", materialism "undermines itself".
If you believe the brain is a conscious computer, it is irrational of you to believe anything. So if you believe anything, you must believe that materialism is false, that the brain is not a conscious computer, and that the mind is not produced by the brain.
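Feser's reductio, as summarized in the bullets above, has a simple logical form. The schematic below is my paraphrase of the argument's structure, not an addition to it:

```latex
\begin{aligned}
&\text{(1) } M \rightarrow \neg J &&\text{(if materialism is true, no thought rationally justifies another)}\\
&\text{(2) } J &&\text{(some thoughts are rationally justified, e.g.\ the materialist's own)}\\
&\therefore\ \neg M &&\text{(modus tollens)}
\end{aligned}
```

The disputed premise is (1); a critic would deny that physical state transitions exclude meaning, rather than deny (2).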
...
Turing believed in the evidence for ESP and felt a computer couldn't reproduce it.
...
I assume that the reader is familiar with the idea of extrasensory perception, and the meaning of the four items of it, viz., telepathy, clairvoyance, precognition and psychokinesis. These disturbing phenomena seem to deny all our usual scientific ideas. How we should like to discredit them! Unfortunately the statistical evidence, at least for telepathy, is overwhelming.

Also, all of the empirical evidence for the afterlife and all the arguments that consciousness is not produced by the brain contradict the belief that the brain is a conscious computer.
http://www.skeptiko-forum.com/threads/a-review-of-is-there-an-afterlife.953/#post-24135
 