217. DR. GARY MARCUS SANDBAGGED BY NEAR-DEATH EXPERIENCE SCIENCE QUESTIONS

You're not just asserting it, you are arguing it.

So rather than argue... why not give us a perspective on what you would consider various possible scenarios of 'real' meaning for you (as opposed to 'real meaning', if you like)? What would qualify? In this kind of question there is always genuine interest and no right or wrong answer.
Again, I'm asking Alex what he means by "'real' meaning". I've already told you that benefiting my kin gives my life meaning, for example. If you don't like the answer, 'cause "biological robot", I don't know what else to tell you.

You may have the last word here.
 
Again, I'm asking Alex what he means by "'real' meaning". I've already told you that benefiting my kin gives my life meaning, for example. If you don't like the answer, 'cause "biological robot", I don't know what else to tell you.

I never said anything about 'biological robots'. This is a straw man coming from some deep past argument with someone else. Please do not play that personal conflict out on me. I just gave you my description of meaning, because you asked... it was not meant as a challenge.

As a minor note, family gives (perhaps fulfilling) purpose, not meaning... those are two different things. But that is not meant as criticism of it as a guiding rationale - I love that point. That is called having a discussion. :)
 
So, you don't agree with the atheistic/Richard Dawkins biological-robot bullshit? Great... I'm with you.

Where does the meaning come from, if not from purely biological processes?

Are you saying you believe in god/God?
"Biological robot" is a sweeping generalization. I am certainly a biological creature, and robots are artifacts patterned after biological creatures; otherwise, I don't know what the words are supposed to mean.

I also don't understand what you mean by "meaning". Again, I love my children and other kin, and tribal/ideological relationships also motivate me. The former emotions are clearly enough genetic, and I call the latter "memetic" after the fashion of Dawkins, but I don't accept anyone's bullshit. Labeling something "bullshit" doesn't mean anything to me.

Theologically, I call myself a pantheist, and in Dawkins's lexicon I'm a "sexed-up atheist", but I don't know what these labels have to do with near-death experiences, or with any meaning derived from immaterial/immortal souls, or with how a disembodied soul can experience signals transmitted by photons without something like an eye, or with why human beings have eyes if their immaterial souls can see without them. Consciousness is a mystery to me, but the mystery doesn't address these questions.

I can imagine unconscious robots behaving much as human beings behave, but I can only imagine them, because I haven't experienced them outside of science fiction. If John Searle is right, we may never create robots mimicking much human behavior without somehow incorporating a stuff of consciousness not reducible functionally to classical information-processing machinery. I neither believe nor disbelieve in the possibility of these robots, because consciousness is a mystery to me. I believe that intelligence can be reduced to information processing, but I distinguish intelligence from consciousness.

Whether or not I'm a biological robot, I don't understand how my consciousness, including memories of sense impressions of a material world, could survive the disintegration of my material body. I have some idea of how my body collects and records these impressions and generates a symbolic stream somehow constituting my consciousness, but the idea involves the energetic organization of matter. Why do I need neurons if my stream of consciousness doesn't require them, and why do I have them if I don't need them?
 
"Biological robot" is a sweeping generalization. I am certainly a biological creature, and robots are artifacts patterned after biological creatures; otherwise, I don't know what the words are supposed to mean.
Well, is it really? The entire conventional conception of an organism is of a massively complex biochemical machine that runs without anyone in control. It is physics and chemistry all the way down - until you ultimately arrive at the mechanism that encodes the structure of proteins in DNA. Some of these proteins are enzyme catalysts that facilitate the chemistry of the cell and power the copying process that makes more DNA.

The copying process isn't perfect, so imperfect organisms get created. Most of these die off because they can't survive and reproduce, but a few lucky mutations improve the organism and get passed on - no need for any conscious entity in control - and indeed the conventional expectation is that even the brain's consciousness would be explained by mechanisms of this sort.

It doesn't sound inappropriate to call that a biological robot, does it - except that it is probably wrong! See, for example, this Skeptiko discussion of evolution:
http://www.skeptiko-forum.com/threads/behes-argument-in-darwin-devolved.4317/
Preferably read Behe's book - which incidentally hardly mentions religion.
I can imagine unconscious robots behaving much as human beings behave, but I can only imagine them, because I haven't experienced them outside of science fiction. If John Searle is right, we may never create robots mimicking much human behaviour without somehow incorporating a stuff of consciousness not reducible functionally to classical information-processing machinery. I neither believe nor disbelieve in the possibility of these robots, because consciousness is a mystery to me. I believe that intelligence can be reduced to information processing, but I distinguish intelligence from consciousness.
I really doubt that intelligence and consciousness can be separated. As I write this reply to you, I am consciously thinking about what to write.

However, it is that reducibility to classical information that is crucial, because in principle (Gedanken experiment only!) you could imagine copying the information from a brain in sufficient detail to simulate how it would behave in the next half hour. Even if the simulation has to descend to the level of QM, QM can be simulated given ludicrous amounts of computer power, though of course this would make the outcome probabilistic.

So let's imagine that the brain belongs to a man called Martin, who is going to spend the next half hour in deep contemplation of some emotional subject (so no real-world inputs need be considered) - perhaps he contemplates a breakup with his lover.

OK, so the computer program, fed with all that data, simulates Martin's brain and thus also contemplates this breakup (presumably unaware that it is a computer copy).

Does the computer program feel the same searing emotions? If it doesn't, it isn't really the same as the original, so presumably you would conclude that the program does feel the same emotion.

Let's arrange that any random numbers the simulator might employ are pseudorandom, so that it runs identically from run to run. Now suppose that we run the same simulation over and over: does it feel the same emotion over and over - does that make any real sense?
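The repeatable-run arrangement is easy to sketch in code - a toy stand-in only, of course, not an actual brain simulator; `simulate` here is just a seeded pseudorandom walk standing in for the hypothetical program:

```python
import random

def simulate(seed, steps=5):
    """Toy stand-in for the brain simulator: a pseudorandom walk
    whose entire trajectory is fixed by the seed."""
    rng = random.Random(seed)  # private PRNG, reproducible by construction
    state = 0.0
    trajectory = []
    for _ in range(steps):
        state += rng.uniform(-1.0, 1.0)  # the "random" influence
        trajectory.append(round(state, 6))
    return trajectory

# Same seed, same trajectory, every single time - the run is as
# repeatable as checking an equation.
run1 = simulate(seed=42)
run2 = simulate(seed=42)
assert run1 == run2
```

With a fixed seed the program has no freedom at all: re-running it is just re-deriving the same result, which is exactly what makes the repeated-emotion question so strange.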

However, now consider that we have a simulation program P and some data D, where D is the state of Martin's brain just before his contemplation, and this produces some other data D1 representing the state of his brain after the period of computation. Symbolically:

P+D => D1

Now the computer operates in a totally algorithmic way, and really we are looking at something with the same properties as an equation - true for all eternity. The only reason we needed the computer at all is that stepping through its operation in our heads would be rather tedious.

This reduces Martin's angst to a mathematical relationship analogous to 5+7=12. This relationship is derived from the data in Martin's brain and the simulator, yet like any mathematical result, it was true 1,000,000 years ago and will still be true at the end of the universe.

How can we claim that an equation (however complicated), or the checking of that equation can cause emotions (or any other type of consciousness) of any sort?

David
 
It doesn't sound inappropriate to call that a biological robot, does it - except that it is probably wrong!
I get your point, but in common parlance a robot is not biological (certainly not natural) at all. Since physics is also an artifact, and is always evolving, "physics all the way down" doesn't rule much out. Standard physics ceased to be deterministic a century ago, and whether or not an account of consciousness requires post-classical physics (or quantum computation), it need not be "mechanical" (or deterministic).

I really doubt that intelligence and consciousness can be separated.
A computer playing chess is "intelligent" in my lexicon, and I don't suppose it plays the game consciously, but I have no way of knowing really.

As I write this reply to you, I am consciously thinking about what to write.
I suppose you write both consciously and intelligently. A spider hunts prey intelligently and may hunt consciously. A computer plays chess intelligently and unconsciously. I suppose so anyway. I can only suppose so. I can't observe another being's consciousness, as far as I know.

... you could imagine copying the information from a brain in sufficient detail to simulate how it would behave in the next half hour.
I doubt that you could. With or without QM, complex dynamic systems are chaotic. We carefully, laboriously, tediously construct artificial systems to be predictable. I do it for a living and spend most of my time debugging (making unpredictable machinery predictable).

Does the computer program feel the same searing emotions?
If you could really build the machine you imagine, maybe it would, but we're discussing science fiction here. In reality, you can't even predict the trajectory of three bodies interacting gravitationally (classical gravity) indefinitely, and I suppose the three body system is unconscious, so the question of consciousness has little to do with predictability.
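The unpredictability point doesn't even need three bodies. The logistic map - a standard toy chaotic system, with nothing biological about it - shows how a one-part-in-a-billion error in the initial data swamps any long-range prediction within a few dozen steps:

```python
def logistic(x, r=4.0):
    """One step of the logistic map at r=4, a standard toy chaotic
    system (it models nothing here; it just illustrates chaos)."""
    return r * x * (1.0 - x)

def trajectory(x0, steps=50):
    """Iterate the map from x0, recording every state."""
    xs = [x0]
    for _ in range(steps):
        xs.append(logistic(xs[-1]))
    return xs

# Two starting states differing by one part in a billion:
a = trajectory(0.400000000)
b = trajectory(0.400000001)

# Early on the two trajectories are indistinguishable; a few dozen
# steps later the initial error has been amplified by many orders
# of magnitude, so prediction would require impossibly precise
# knowledge of the starting state.
early_gap = abs(a[5] - b[5])
late_gap = max(abs(x - y) for x, y in zip(a, b))
```

If even this one-line classical system defeats prediction from imperfect data, copying a brain "in sufficient detail" to forecast its next half hour is asking for rather a lot.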

Now suppose that we run the same simulation over and over, does it feel the same emotion over and over - does that make any real sense?
How any system feels is not something I can know. I can't even know how you feel. I can read a symbolic description of your feelings and interpret it in terms of my feelings, but I seem limited to this experience of your feelings.

How can we claim that an equation (however complicated), or the checking of that equation can cause emotions (or any other type of consciousness) of any sort?
An equation (or algorithm) is an abstraction. An actual machine somehow simulating my brain's information processing is necessarily concrete. It is material and occupies space and time. It is "algorithmic" only in the sense that an isomorphism exists between the symbols of an abstract system and concrete, material objects interacting in space and time. How can consciousness arise from such a thing? I don't know. That's what I'm here to discuss.

If you're suggesting that consciousness cannot arise from matter occupying space and time, I'm curious to know your alternative, but simply naming an otherwise mysterious "stuff of souls" distinct from material stuff (as in Cartesian dualism) doesn't add much to my understanding. I already have the word "consciousness", and I don't object to "soul", but the word doesn't get me one step closer to a disembodied or immortal consciousness or an explanation of near death experiences.
 
When we are at our most plastic and malleable, from the earliest age as we develop a sense of self, we are conditioned and programmed (I am tempted to use the word ‘hypnotised’) to imbue our lives with meaning and purpose. The effect is irresistible; I fully understand that it ‘feels’ like we live in a meaningful universe, even though neither I nor anyone else here can define what that means.
 
A computer playing chess is "intelligent" in my lexicon, and I don't suppose it plays the game consciously, but I have no way of knowing really.

A computer playing chess is not intelligent - nor is this a simple matter of lexicon... rather, it is a matter of logical objects useful in assembling higher arguments. It is like walking into a bank with a bag of rocks to deposit and saying to the teller 'rocks are money in my lexicon' - and suddenly you are a millionaire. No, it does not work this way. One cannot assemble the higher argument of an economy if the logical object of money can be anything you deem it to be.

In the same way, one cannot assemble higher arguments regarding consciousness and meaning if you define intelligence in this manner. In the case of a computer, one has at one's disposal the entire repertoire of code and machine learning, from which to inspect and ensure that no non-deterministic actions occurred on the part of the computer. The computer only executes in the context of its machine-learning or rote-script domain.

We will face this argument in a decade or two, when political parties want to assign 'rights' to Artificial Intelligence cores. Thereafter, your vote can be outnumbered by the 'votes' of three or four computers, which are considered 'conscious' because we were permissively weak in our rigor on logical objects.
 
A computer playing chess is not intelligent - nor is this a simple matter of lexicon... rather, it is a matter of logical objects useful in assembling higher arguments. It is like walking into a bank with a bag of rocks to deposit and saying to the teller 'rocks are money in my lexicon' - and suddenly you are a millionaire. No, it does not work this way. One cannot assemble the higher argument of an economy if the logical object of money can be anything you deem it to be.

In the same way, one cannot assemble higher arguments regarding consciousness and meaning if you define intelligence in this manner. In the case of a computer, one has at one's disposal the entire repertoire of code and machine learning, from which to inspect and ensure that no non-deterministic actions occurred on the part of the computer. The computer only executes in the context of its machine-learning or rote-script domain.

We will face this argument in a decade or two, when political parties want to assign 'rights' to Artificial Intelligence cores. Thereafter, your vote can be outnumbered by the 'votes' of three or four computers, which are considered 'conscious' because we were permissively weak in our rigor on logical objects.

What makes you think you are not programmed in a similar way? How are your actions, thoughts and behaviours more than the sum of all your previous inputs? What would you be now if you had been 100% sensorily deprived since conception?
 
What makes you think you are not programmed in a similar way? How are your actions, thoughts and behaviours more than the sum of all your previous inputs? What would you be now if you had been 100% sensorily deprived since conception?
That is not the salient question. I do not have the tools to answer it, so it is a red herring. It is like asking 'if life did not originate on Earth, where did it originate then?' - it sounds intelligent, but it is neither pertinent, answerable, nor on the critical path at our current level of discussion development.

I do not have to know what a deuce-and-a-half is, in order to be able to tell that a bicycle is not a deuce-and-a-half. I do not have to know what precisely causes my cerebral dissents and desires, in order to say what is NOT intelligence.

The Null in this case is Monism... I do not have to define what Plurality indeed is - as I cannot possibly do that. Nor do I have to ...all I have to do is falsify Monism....

If that makes sense. I cannot approach the problem from the other direction - as that would constitute a procedural fallacy.
 
That is not the salient question. I do not have the tools to answer it, so it is a red herring. It is like asking 'if life did not originate on Earth, where did it originate then?' - it sounds intelligent, but it is neither pertinent, answerable, nor on the critical path at our level of discussion development.

I do not have to know what a deuce-and-a-half is, in order to be able to tell that a bicycle is not a deuce-and-a-half. I do not have to know what precisely causes my cerebral dissent and desires, in order to say what is NOT intelligence.

The Null in this case is Monism... I do not have to define what Plurality indeed is - as I cannot possibly do that. Nor do I have to ...all I have to do is falsify Monism....

If that makes sense. I cannot approach the problem from the other direction - as that would constitute a procedural fallacy.

Lol. I find those 3 simple questions much harder to hand-wave away than you do. Think on them. They are central to the argument you made, entirely pertinent, and they do not assume any preferred metaphysic.
 
Lol. I find those 3 questions much harder to hand-wave away than you do. Think on them. They are central to the argument you made, entirely pertinent, and they do not assume any preferred metaphysic.

They are pertinent to the discussion of the hard problem of consciousness.

They are not pertinent to the discussion of the logical-object framing called intelligence, which was our context.
 
They are pertinent to the discussion of the hard problem of consciousness.

They are not pertinent to the discussion of the logical-object framing called intelligence, which was our context.

I think they are pertinent to intelligence in the way that you framed it. In any event, you shifted the context to consciousness here:

In the same way, one cannot assemble higher arguments regarding consciousness and meaning if you define intelligence in this manner. In the case of a computer, one has at one's disposal the entire repertoire of code and machine learning, from which to inspect and ensure that no non-deterministic actions occurred on the part of the computer. The computer only executes in the context of its machine-learning or rote-script domain.

Up to you though. I think the questions are simple but lead to awkward places ultimately. Probably best swept under the rug.
 
I think they are pertinent to intelligence in the way that you framed it. In any event, you shifted the context to consciousness here:

Grasping at a straw man. I did not shift the context to consciousness. I simply framed WHY the discipline of a logical object is important for higher-order logic later on... Telling a child to eat their vegetables so they can grow up to be big and strong does not mean you are telling them they are big and strong right now. It is a parabolic reference - not a context change.

Up to you though. I think the questions are simple but lead to awkward places ultimately. Probably best swept under the rug.

Yes, assuming matter and energy to be the monist source of the hard problem of consciousness involves a very large miracle, just as big as the miracle of assuming consciousness comes from outside monism... I bristle at the latter miracle, sure - just as you do. :) But I am not equipped to say which miracle is the true one. Miracle claimers, however, should be forthright about their hidden miracles, and not sweep them under the rug. Agreed.
 