Ex-Stargate Head Ed May Unyielding Re Materialism, Slams Dean Radin |341|

Nate, I really appreciate your post. I was able to follow much of it.

The quote above gave me a good chuckle though. I apologize for invalidating the use of "everyone". ;)

This curly beast? You've really never in your lifetime seen a picture of it?
[Image: the Mandelbrot set]


Maybe I'm showing my age - in the 1980s you couldn't get away from the thing. Like Rubik's Cube and the Space Shuttle, it was everywhere. Rock bands, airport novels, sides of buses, children's lunch boxes... Well, it seemed like it at the time.

regards, Nate
 
You and I might not find that primes are anything to elicit an emotional response, but savants do love their numbers, and it seems plausible that primes could elicit a special kind of emotional response.

I'm no savant, but as a kid I *did* feel very fondly towards primes when I learned about them. The idea that numbers weren't just dead, interchangeable units but each one was special and unique, and in a deeply unpredictable way -- that warmed my heart in a way it's hard to describe. Even numbers have personalities! So even if we're all 'just numbers' we could still be individuals.

Nate
 
3) I rather think extensions of physics can't really explain ψ phenomena, or mind, or describe a mental realm anyway. My argument is that physics works in an almost mechanical way. You set up the equations, and (disregarding issues to do with the practicality of solving the equations) out comes the answer (OK sometimes as a set of probabilities). This means that you don't get an explanation of David Chalmers' "Hard Problem" - how can we actually experience anything, as opposed to compute stuff (like a computer!).

I agree. Chalmers' Chinese Room (*) summarises the problem neatly: that our experience of feeling doesn't seem to arise from any mechanical operations our matter-environment performs. We don't feel our extended matter-environment (eg, paper, cards) even though it stores information. So why would our internal matter-environment (brain cells) give us a sense of feeling? (**)

(*) although it makes a LOT of unwarranted assumptions - eg, a 'machine' capable of answering a real conversation and feeling like a real person must actually store the information that describes that personality, and there'll be a lot of it, maybe an infinite amount
(**) and it also assumes, again without a good reason, that what we think of as 'matter' is in fact 'dead'. Which almost certainly isn't the case. We know animals have feeling; do bacteria have feeling too? At what point does the subjective experience 'emerge'? The idea of 'emergence' has been tossed around a lot by materialists but it makes very little sense; in weather, eg, a cloud might 'emerge' from the actions of a million water drops but those drops remain drops, and all the behaviour of the cloud IS identical with the behaviour of the drops that make it up. A consistent materialist position on emergent mind then would have to be that the smallest particle/wave of matter must have some tiny amount of subjective feeling. But unless I'm mistaken, very few materialist philosophers seem to hold to this view.

Returning to my first point for a moment, I think this may illustrate a problem with physics itself - once you allow large numbers of extra (and unobserved) dimensions in a theory, there are simply too many possible false trails.

Yes, and if you agree with the String Theory critics like Peter Woit of Not Even Wrong ( http://www.math.columbia.edu/~woit/wordpress/ ), which I tend to do, though he's no friend of psi -- this 'too many dimensions' problem is a huge blind alley in physics. An interesting question, if we acknowledge it's a problem (many physicists still don't), is how we got to this point, and when.

Regards, Nate
 
Chalmers' Chinese Room (*) summarises the problem neatly: that our experience of feeling doesn't seem to arise from any mechanical operations our matter-environment performs. We don't feel our extended matter-environment (eg, paper, cards) even though it stores information. So why would our internal matter-environment (brain cells) give us a sense of feeling?

Do you have a source on Chalmers' Chinese Room?
 
Do you have a source on Chalmers' Chinese Room?

After the Turing Test, it's probably the second most well-discussed idea in computing and Artificial Intelligence, so, just off the top of Google:

https://en.wikipedia.org/wiki/Chinese_room
http://www.iep.utm.edu/chineser/
https://plato.stanford.edu/entries/chinese-room/

From that last, the Stanford Encyclopedia of Philosophy:

The argument and thought-experiment now generally known as the Chinese Room Argument was first published in a paper in 1980 by American philosopher John Searle (1932- ). It has become one of the best-known arguments in recent philosophy. Searle imagines himself alone in a room following a computer program for responding to Chinese characters slipped under the door. Searle understands nothing of Chinese, and yet, by following the program for manipulating symbols and numerals just as a computer does, he produces appropriate strings of Chinese characters that fool those outside into thinking there is a Chinese speaker in the room. The narrow conclusion of the argument is that programming a digital computer may make it appear to understand language but does not produce real understanding. Hence the “Turing Test” is inadequate. Searle argues that the thought experiment underscores the fact that computers merely use syntactic rules to manipulate symbol strings, but have no understanding of meaning or semantics. The broader conclusion of the argument is that the theory that human minds are computer-like computational or information processing systems is refuted. Instead minds must result from biological processes; computers can at best simulate these biological processes. Thus the argument has large implications for semantics, philosophy of language and mind, theories of consciousness, computer science and cognitive science generally. As a result, there have been many critical replies to the argument.



I find the Chinese Room particularly amusing because I'm married to a Chinese speaker, and so I'm in the very real position of Searle's imagined person myself. I see Chinese characters and hear words I don't recognise; I look them up on the Internet; and I map those sounds, letter-strings and characters to ideas. I have used Google Translate to try to translate Chinese poetry. And although I have been manipulating strings of symbols via machinery to produce output, unlike Searle's imaginary person I find myself (slowly) *recognising* the formerly opaque symbols, linking them to ideas, and attaching emotional value to them.

I've learned, eg, that 'wo ai ni' is not just a random string of symbols, but has a very specific meaning. I know that 'xia' and 'shang' are two words that sound frustratingly alike but their characters are a much better clue to their meaning, so I feel happy when I see the characters: 'hey, I know this one!'

The imaginary Chinese Room operator doesn't do this; *doesn't* learn the symbols, *doesn't* colour them with their emotions, doesn't add anything to the process other than dutifully following orders. Which is why I said as a thought-experiment it has some major problems in its assumptions: it assumes both that the 'machine' of cards and instructions is powerful enough to simulate a human, and that the actual human doesn't act like a human.

I guess I have literally become the Cartesian 'ghost in the machine' that scares materialist philosophers...

On the other hand, we all do spend most of our lives in Chinese Rooms, crude AIs built out of paper and documents: corporations and governments. And as machines which extend beyond humans and use humans as parts, we see that they do have their own kind of weird 'intelligence', but that it's not very bright and certainly not very human. As the Chinese Room was an idea first posed in 1980, it feels like a product of its time: a social moment when it was very popular to argue that human insight was more powerful than the 'social machines', like governments, in which we were embedded, and to dream that simply deconstructing all that social machinery would cause humans to instantly become enlightened, superintelligent libertarians.

But in the Internet age, I think we feel a lot more keenly that in a lot of ways we *are* our environments. Or that not just our behaviour but our own personality meshes with and is reinforced by the company we keep and the rules we choose to follow.

None of which really quite breaks the Chinese Room concept; I think it's a great argument (yet one which Chalmers himself doesn't follow to its natural conclusion!) that the human mind/soul/spirit is something distinct from our extended bodies, and that the emotion and feeling we have toward the machines we build around us is not a product of the physical realm but something we bring to it from outside. But the man/machine symbiosis we live in as a matter of course in the 21st century does make arguing about it a bit more subtle.

Eg 'here' 'we' 'are' merely exchanging letters on a screen. None of us are 'really' 'present' and this is not a 'room'. Yet we can exchange ideas and not just ideas, but something beyond them.

We *could* just be structures of information modelling more structures of information. But somewhere that information has to 'bottom out' into something like an emotion. We know, roughly, how to decompose information into its smallest parts (bits). But what's the smallest emotion? Pleasure? Pain? Well, what about something like wonder or awe? The sense we seem to get, when examining our own minds and especially our dreams, is that our emotions are *big*. Not just bits expressing 'yes' or 'no'. They seem to be vast, extended structures, containing lots of information yet they are somehow unified; an emotion, unlike a piece of information, seems to be a whole. I can't think of any data structure that works like that. Or anything really in the physical world, except possibly a waveform as the closest physical analogy.

This feeling of mine that 'emotions are large yet unified information-bearing structures, possibly infinite, yet able to be searched instantly' is why I think that a computer program capable of carrying on a dialogue with a human, and convincing the human that it feels these emotions, would have to model these emotional structures in quite some detail. And if they're truly infinite-sized structures (and can't be reduced to a formula, like, eg, the Mandelbrot Set can) then the poor computer is going to have to hold a model of that structure as data. And it's probably not going to fit. (Let alone where it's going to get a model of human emotions, when the most important thing about humans is that we're utterly clueless about how to describe or even feel our emotions.)
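To make the contrast concrete, here's a rough sketch (purely illustrative, plain Python, nothing beyond the textbook definition) of how little code it takes to 'reduce the Mandelbrot Set to a formula': the entire infinitely detailed structure falls out of iterating z -> z*z + c and checking whether the result stays bounded.

```python
# Purely illustrative sketch: the whole infinitely detailed Mandelbrot set
# is generated by iterating one tiny formula, z -> z*z + c.
# A point c is "in" the set if the iteration stays bounded.

def in_mandelbrot(c: complex, max_iter: int = 100) -> bool:
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:           # escaped: definitely not in the set
            return False
    return True                  # still bounded after max_iter steps

# Crude ASCII rendering of the familiar "curly beast"
for y in range(-12, 13):
    row = ""
    for x in range(-40, 21):
        row += "#" if in_mandelbrot(complex(x / 20, y / 12)) else " "
    print(row)
```

Whereas an emotion, if the hunch above is right, has no such compressed generator; the poor computer would have to store the whole structure.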

On the other hand... perhaps the simplest emotion might be 'desire'. And might a desire be something like a force? Might electrons *literally* want to be going wherever they're being pulled?

George Lakoff has pointed out that we think in physical metaphors. 'Attraction' is very often a metaphor for love. 'Iron loves the magnet'. 'Nature hates a vacuum'.

Might that perhaps be literally true?

Regards, Nate
 
After the Turing Test, it's probably the second most well-discussed idea in computing and Artificial Intelligence, so, just off the top of Google:

https://en.wikipedia.org/wiki/Chinese_room
http://www.iep.utm.edu/chineser/
https://plato.stanford.edu/entries/chinese-room/

From that last, the Stanford Encyclopedia of Philosophy:

The argument and thought-experiment now generally known as the Chinese Room Argument was first published in a paper in 1980 by American philosopher John Searle (1932- ). It has become one of the best-known arguments in recent philosophy. Searle imagines himself alone in a room following a computer program for responding to Chinese characters slipped under the door. Searle understands nothing of Chinese, and yet, by following the program for manipulating symbols and numerals just as a computer does, he produces appropriate strings of Chinese characters that fool those outside into thinking there is a Chinese speaker in the room. The narrow conclusion of the argument is that programming a digital computer may make it appear to understand language but does not produce real understanding. Hence the “Turing Test” is inadequate. Searle argues that the thought experiment underscores the fact that computers merely use syntactic rules to manipulate symbol strings, but have no understanding of meaning or semantics. The broader conclusion of the argument is that the theory that human minds are computer-like computational or information processing systems is refuted. Instead minds must result from biological processes; computers can at best simulate these biological processes. Thus the argument has large implications for semantics, philosophy of language and mind, theories of consciousness, computer science and cognitive science generally. As a result, there have been many critical replies to the argument.

Regards, Nate

Sorry to have wasted your time, I should have been more specific. I am aware of Searle's Chinese room, but am not aware of Chalmers offering an alternative of it.
 
it assumes both that the 'machine' of cards and instructions is powerful enough to simulate a human, and that the actual human doesn't act like a human.

I don't think the Chinese room experiment does this; doesn't it actually demonstrate the opposite? Searle here says the Chinese room is a refutation of the computational theory of mind -
This article can be viewed as an attempt to explore the consequences of two propositions. (1) Intentionality in human beings (and animals) is a product of causal features of the brain. I assume this is an empirical fact about the actual causal relations between mental processes and brains. It says simply that certain brain processes are sufficient for intentionality. (2) Instantiating a computer program is never by itself a sufficient condition of intentionality. The main argument of this paper is directed at establishing this claim. The form of the argument is to show how a human agent could instantiate the program and still not have the relevant intentionality. These two propositions have the following consequences (3) The explanation of how the brain produces intentionality cannot be that it does it by instantiating a computer program. This is a strict logical consequence of 1 and 2. (4) Any mechanism capable of producing intentionality must have causal powers equal to those of the brain. This is meant to be a trivial consequence of 1. (5) Any attempt literally to create intentionality artificially (strong AI) could not succeed just by designing programs but would have to duplicate the causal powers of the human brain. This follows from 2 and 4.

"Could a machine think?" On the argument advanced here only a machine could think, and only very special kinds of machines, namely brains and machines with internal causal powers equivalent to those of brains. And that is why strong AI has little to tell us about thinking, since it is not about machines but about programs, and no program by itself is sufficient for thinking.
(http://cogprints.org/7150/1/10.1.1.83.5248.pdf)



Also, there is no reference to feelings (qualia) at all in the original argument; it's entirely about syntax not being sufficient for semantics.

"Syntax by itself is neither constitutive of nor sufficient for semantics."
This is what the Chinese room thought experiment is intended to prove: the Chinese room has syntax (because there is a man in there moving symbols around). The Chinese room has no semantics (because, according to Searle, there is no one or nothing in the room that understands what the symbols mean). Therefore, having syntax is not enough to generate semantics. (https://en.wikipedia.org/wiki/Chinese_room#Complete_argument)
I don't see any resemblance between this argument and what you posted it as being, which is why I asked if you had a source on a Chalmers variant, because these things are not similar.

EDIT: slight edit to readability
 
Sorry to have wasted your time, I should have been more specific. I am aware of Searle's Chinese room, but am not aware of Chalmers offering an alternative of it.

Oh, right. D'oh. Chalmers not Searle. Yes, my brain glitched because I confused the two.

Refreshing my brain with Wikipedia and the Stanford Encyclopedia of Philosophy: Chalmers hasn't proposed an alternative to the Chinese Room but an interpretation of it. I think he's a strong fan, not critic, of Searle, and is the one who proposed 'panprotopsychism', the idea I was groping to describe (in fact I think I got it from reading a pop-sci book by/about his take on Searle years ago).

Huh, and he's the one who even coined the term 'hard problem' in the first place? Good grief, surely it wasn't as late as 1995? Really?

Chalmers' approach is very much connected with subjective feeling - perhaps you're arguing that Searle's original wasn't? But I thought that was his point?

https://en.wikipedia.org/wiki/David_Chalmers

Chalmers is best known for his formulation of the notion of a hard problem of consciousness in both his 1996 book and in the 1995 paper "Facing Up to the Problem of Consciousness". He makes a distinction between "easy" problems of consciousness, such as explaining object discrimination or verbal reports, and the single hard problem, which could be stated "why does the feeling which accompanies awareness of sensory information exist at all?"

[...]

He further speculates that all information-bearing systems may be conscious, leading him to entertain the possibility of conscious thermostats and a qualified panpsychism he calls panprotopsychism. Chalmers maintains a formal agnosticism on the issue, even conceding that the viability of panpsychism places him at odds with the majority of his contemporaries.​

Chalmers also picks up on what I just mentioned, that the state of the operator and the state of the room aren't the same (well, leaving out that symbiosis business, which I think is important):

Chalmers (1996) notes that the room operator is just a causal facilitator, a “demon”, so that his states of consciousness are irrelevant to the properties of the system as a whole. Like Maudlin, Chalmers raises issues of personal identity—we might regard the Chinese Room as “two mental systems realized within the same physical space. The organization that gives rise to the Chinese experiences is quite distinct from the organization that gives rise to the demon's [= room operator's] experiences”(326).​


The really cool thing (to me) is that, as in so many things in religion/psi, physics AND computing/information theory, Leibniz got there first! He counts as both one of the first computer designers - worked with logic and binary maths, thinks like a programmer - and yet his theory of 'pre-established harmony' is 180 degrees opposed to reductionism or even computation. I really, really want to know what he was thinking when he came up with that. And so of course he did the Chinese Room first: 'Leibniz' Mill'


Searle's argument has three important antecedents. The first of these is an argument set out by the philosopher and mathematician Gottfried Leibniz (1646–1716). This argument, often known as “Leibniz’ Mill”, appears as section 17 of Leibniz’ Monadology. Like Searle's argument, Leibniz’ argument takes the form of a thought experiment. Leibniz asks us to imagine a physical system, a machine, that behaves in such a way that it supposedly thinks and has experiences (“perception”).

17. Moreover, it must be confessed that perception and that which depends upon it are inexplicable on mechanical grounds, that is to say, by means of figures and motions. And supposing there were a machine, so constructed as to think, feel, and have perception, it might be conceived as increased in size, while keeping the same proportions, so that one might go into it as into a mill. That being so, we should, on examining its interior, find only parts which work one upon another, and never anything by which to explain a perception. Thus it is in a simple substance, and not in a compound or in a machine, that perception must be sought for. [Robert Latta translation]​

Regards, Nate
 
his theory of 'pre-established harmony' is 180 degrees opposed to reductionism or even computation.
This is an interesting theory. I'm reading more about it now.


(in fact I think I got it from reading a pop-sci book by/about his take on Searle years ago).

The Conscious Mind? I haven't read it myself yet, just curious.

I think he's a strong fan, not critic, of Searle

They critique each other, and maybe it's just Searle's characteristic surliness, but he seems to really go after Chalmers. (http://www.nybooks.com/articles/1997/05/15/consciousness-and-the-philosophers-an-exchange/)
 
The really cool thing (to me) is that, as in so many things in religion/psi, physics AND computing/information theory, Leibniz got there first! He counts as both one of the first computer designers - worked with logic and binary maths, thinks like a programmer - and yet his theory of 'pre-established harmony' is 180 degrees opposed to reductionism or even computation. I really, really want to know what he was thinking when he came up with that. And so of course he did the Chinese Room first: 'Leibniz' Mill'
In many ways Leibniz' Mill gets to the heart of the problem in a way which makes it easy to visualise. We can build computers from anything; the modern trend is to use tiny silicon chips crammed full of individual switches, which operate both very quickly and very reliably. We could do the same thing with a mechanical device built of cogs and levers; however, it would operate very slowly and require mountains of physical space. But it does allow us to understand that there is nothing in the cogs and levers that could ever feel or experience anything.
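Just to make the 'computers from anything' point concrete, here is a minimal sketch (purely illustrative, in Python rather than cogs) of a half-adder built from nothing but a NAND primitive. Whether that primitive is realised in silicon, cogs and levers, or a man shuffling cards makes no difference to the logic; and, as Leibniz says, nothing in the parts explains a perception.

```python
# Purely illustrative: everything a computer does can be composed from one
# primitive operation -- here NAND -- regardless of the physical substrate.

def nand(a: int, b: int) -> int:
    return 1 - (a & b)

def xor(a, b):
    n = nand(a, b)
    return nand(nand(a, n), nand(b, n))

def and_(a, b):
    return nand(nand(a, b), nand(a, b))

def half_adder(a, b):
    return xor(a, b), and_(a, b)          # (sum bit, carry bit)

for a in (0, 1):
    for b in (0, 1):
        print(a, "+", b, "=", half_adder(a, b))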
 
Hi LetsEat. Great comments. I'm trying to parse out your reply here a bit, because either your or Searle's words are confusing me:

I don't think the Chinese room experiment does this, does it not actually display the opposite? Searle here says the Chinese room is a refutation of computational theory of mind

I agree that Searle's original Chinese Room was intended as, and felt to be, a refutation of the computational theory of mind, yes. Inasmuch as I understand what 'computational theory of mind' means, which I may not; I'm a programmer, not a philosopher, and we're a rather blunt-force species who don't really grasp the sort of subtleties that modern philosophy is made of.

When I said

'it assumes.. that the 'machine' of cards and instructions is powerful enough to simulate a human'
and you said:

I don't think the Chinese room experiment does this, does it not actually display the opposite?​

I meant literally that: it is my understanding that the Chinese Room was a reply to the Turing Test, and therefore that it's a requirement of the machine in the Chinese Room that it must be able to pass the Turing Test. It's not just, eg, a crossword puzzle solver. Nor is it a Twitter chatbot that's trying to sidle into your mentions and trick you into clicking on one weird link. And it's not even just a Siri or a Jeopardy solver like Watson. It must be able to carry on a conversation with a hostile, inquisitive, emotionally intelligent human who is attempting to determine if the machine is sufficiently humanlike to pass for a human. At least that's my reading of it.

The Turing Test is not about how easily you can trick a human for a brief moment of time; if you wanted that, a dressmaker's dummy would already have passed. You have to give the human the ability to ask real, searching questions. In a real live-fire Turing Test situation, a well-trained human can utterly destroy your basic chatbot in a couple of moves. Just ask it questions about its love life, ask it to summarise Romeo and Juliet and write a poem... and ask the same question and keyword several times to make sure all canned answers are flushed out of the system.

This is why I say that passing the Turing Test is really, really, really, REALLY hard. To pass the Turing Test you need to implement a full simulation of a human, and that includes an emotional core. An entire simulated life. An entire simulated social network of family and friends. An entire simulated knowledge of pop culture, politics, music, history.... And the presupposition of Searle's Chinese Room (and he perhaps doesn't grasp HOW difficult this is, or how much data would be involved) is that the AI built out of paper can do this. It would be the equivalent of several skyscrapers' worth of paper, I think. Maybe a city. Or a continent.

Of course whether your thought experiment can actually be built is probably not the core business of philosophers. Searle is telling a story to get an emotional reaction, not building a machine. But building machines that work IS the core business of programmers, and that's one reason why the Chinese Room always tends to raise our eyebrows. 'Do you realise what this thing would actually BE?'

It's not a five-line Eliza script, is what I'm trying to say.
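For the record, here's roughly what a 'five-line Eliza script' amounts to (a purely illustrative sketch; the rules and canned replies are made up): keyword matching plus canned responses, which is exactly the kind of thing the searching questions above flush out in two or three moves.

```python
# Purely illustrative toy "Eliza-style" chatbot: canned keyword -> response
# rules. Ask it the same thing twice, or anything off-script, and it collapses.
import re

RULES = [
    (r"\bmother|father|family\b", "Tell me more about your family."),
    (r"\bI feel (.*)",            "Why do you feel {0}?"),
    (r"\blove\b",                 "What does love mean to you?"),
]

def reply(text: str) -> str:
    for pattern, response in RULES:
        m = re.search(pattern, text, re.IGNORECASE)
        if m:
            return response.format(*m.groups())
    return "Please go on."                      # catch-all gives the game away

print(reply("I feel lonely"))                       # "Why do you feel lonely?"
print(reply("Summarise Romeo and Juliet for me"))   # "Please go on." -- busted
```

The machine Searle imagines in the room has to be unimaginably far beyond this.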

(I'm a weird programmer because I also believe in the soul. But I appreciate rigor and correctness, because things in my world simply break if you don't have those. And though I agree with his conclusion, I've always felt that Searle, like many philosophers, was trying to pull a few fast ones.)

Next, when Searle says:

The form of the argument is to show how a human agent could instantiate the program and still not have the relevant intentionality.

On that, I sorta agree and sorta disagree. My feeling has always been about that human living in the Chinese Room and 'running the program' (and since it's a story that invites one to use one's imagination, again my imagination goes to the SHEER SIZE of the program: it would be this huge Gormenghast-like cavern-cathedral-city of vast filing towers, with a whole support staff living and breathing the maintenance of the machine; it couldn't just be this one guy). Being 1) a programmer who's run simulations of a program on pencil and paper by hand, and 2) an English speaker who's learning Chinese, I can tell you that in both cases you can't simulate or live within a system without somehow coming to *feel* that system: what those codes mean, what they do in the outside world beyond your office.

I mean, this is literally what I do as a day job. I program computers. I learn symbol systems that start out as alien languages. I figure out what they mean and do. I assign emotional states to numbers. ('404' is a bad number because it means 'the web server is down and lots of people are unhappy with me').

But yeah, the slow feeling for, eg, a foreign language, or a mathematical algorithm, or a piece of music written as mathematical notation on paper, that as a human you pick up as part of 'executing a program' that initially you don't understand, is definitely a very *different* kind of intentionality than the intentionality of the machine, which although it has some kind of 'purpose' or inevitability of its rule-following, and can 'process information', can't really be said to have 'feeling'.

We and Searle and Chalmers are all probably in agreement here, I think?


And finally, I think this is you and Wikipedia saying:

Also, there is no reference to feelings (qualia) at all in the original argument; it's entirely about syntax not being sufficient for semantics.

"Syntax by itself is neither constitutive of nor sufficient for semantics."
This is what the Chinese room thought experiment is intended to prove: the Chinese room has syntax (because there is a man in there moving symbols around). The Chinese room has no semantics (because, according to Searle, there is no one or nothing in the room that understands what the symbols mean). Therefore, having syntax is not enough to generate semantics. (https://en.wikipedia.org/wiki/Chinese_room#Complete_argument)

I don't see any resemblance between this argument and what you posted it as being, which is why I asked if you had a source on a Chalmers variant, because these things are not similar.
On Chalmers being about subjective awareness, I think I posted already; and I confess I'm pretty shocked if Searle somewhere argues that his take on the Chinese Room is NOT about subjective awareness. That's literally what everyone else who's been arguing about it for 30-odd years thinks it's about... isn't it?

Further though, I guess I don't quite grasp the syntax/semantics distinction.

This is me as a programmer again: I live in a world where everything is syntax, and that includes 'semantics'. In programming, 'semantics' is a term of art that means just what happens when a program runs: the 'meaning' of a symbol is what causes it to appear, and what appears somewhere else when it appears. But all those 'things' are generally chunks of 'syntax', ie, data. At some point down the line that data becomes 'a physical rod moves and pokes something, or a camera takes an image', but if you look at that with a programmer's eye you see: a row of atoms changed their state. That row of atoms in my office making up my table looks very much like the row of atoms in the RAM chip in my computer. So, isn't everything in the world just rows of atom-shaped bits? It's a very simplifying idea that feels very powerful.
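Here's the programmer's sense of 'semantics' in miniature (a purely illustrative sketch, not anyone's official definition): the same string of symbols is inert syntax until an evaluator gives it behaviour, and that behaviour is what we call its meaning.

```python
# Purely illustrative: "semantics" as what happens when syntax is run.

# Syntax: a flat string of symbols, no meaning attached yet.
program = "(+ 1 (* 2 3))"

def parse(tokens):
    """Turn the symbol string into nested lists (still just syntax)."""
    tok = tokens.pop(0)
    if tok == "(":
        expr = []
        while tokens[0] != ")":
            expr.append(parse(tokens))
        tokens.pop(0)            # drop ")"
        return expr
    return int(tok) if tok.lstrip("-").isdigit() else tok

def evaluate(expr):
    """Semantics: what actually happens when the syntax is 'run'."""
    if isinstance(expr, int):
        return expr
    op, *args = expr
    values = [evaluate(a) for a in args]
    return {"+": sum, "*": lambda v: v[0] * v[1]}[op](values)

tokens = program.replace("(", " ( ").replace(")", " ) ").split()
print(evaluate(parse(tokens)))   # 7 -- the 'meaning' of the string, on this evaluator
```

The 'meaning' of the string is whatever the evaluator does with it; swap the evaluator and the same syntax means something else.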

I think Korzybski had a similar idea of 'semantics' when he developed 'General Semantics': the idea (as I loosely understand it) that the 'meaning' of a word or idea is simply 'what happens in your body, including your entire nervous system and your gut and your muscles, when you experience that word or idea'.

Everything in the physical world, that's rows of atoms, in other words, can be thought of as rows of symbols. So all semantics is just someone else's syntax, and vice versa. That's the way us programmers think. We have a hammer called computing and we see everything as a nail.

Having said that, I get that emotion, feeling, subjective awareness is a special kind of semantics - the semantics of the human soul, not the human body - which quite possibly can't be reduced to 'rows of atoms'. And I think perhaps that's what Searle meant when he wrote 'semantics'? So perhaps this is a clash of definitions between how programmers think of 'semantics' and how philosophers think of it.

Regards, Nate
 
Searle in 1997 (http://www.nybooks.com/articles/1997/05/15/consciousness-and-the-philosophers-an-exchange/) really does hate on Chalmers' panpsychism ('conscious thermostats') and Chalmers himself really walks it back right to the brink of saying 'I actually don't believe this so stop calling me out on it'.

But if we were to take consciousness arising from matter seriously (as Searle does), it seems obvious to me that there MUST be some kind of 'bit of consciousness' (*) - so something like a 'conscious thermostat' MUST exist in exactly the same way that, eg, a transistor exists as an information-processing element. Obviously such an elementary consciousness wouldn't be as 'big' as a human or animal mind, so it wouldn't have the same properties! Searle's grumpy argument here comes across to me like someone saying 'look, a transistor CAN'T be a computing element! It's not an IBM mainframe! It won't even run OS/360! How can you say it's a computer?'

That's the world imagined from physicalism though. If we believe psi and the universe glimpsed in channelled communications, we get communicators saying both 1) the entire universe IS in fact conscious, down to the atoms, in a vast sea of consciousness, in exactly the same way that Searle thinks is 'absurd' and Chalmers wants to 'explore' but doesn't want to commit to, but also 2) the human mind/soul is not the same as the human body, and lives beyond it, though both perhaps are aspects or projections of a much deeper, shared universal consciousness that maybe eons ago split off a tiny part of itself that wanted to experience the pain and suffering of isolation, for some odd reason which was quite possibly a bit of a silly mistake and something we're all going to laugh about millions of years from now. Or maybe it wasn't even a mistake but something deeper, a necessary growing/learning stage that we still need to evolve past.

Watching materialist philosophers like Chalmers and Searle argue past each other just frustrates me. They seem so close to the truth and yet can't take that last tiny step. Mind as primary substance, rather than matter, just seems to explain so much and is so much simpler.

(*) I'm not actually sure there can be a 'bit of consciousness'. Maybe there can. That's how a programmer would view it, because that's how information works: it's separated, organised into individual spaces. But channelled documents suggest that 'separation' is an illusion and that one of the odd, non-physical features of mind is that it can't entirely be separated; if you get one piece, you get all of it. That somehow all minds are aspects of the One Mind that is the only thing that really Is. I don't really know how to process that; it just feels somehow 'right'.

Regards, Nate
 
I agree. Chalmers' Chinese Room (*) summarises the problem neatly: that our experience of feeling doesn't seem to arise from any mechanical operations our matter-environment performs. We don't feel our extended matter-environment (eg, paper, cards) even though it stores information. So why would our internal matter-environment (brain cells) give us a sense of feeling? (**)

(*) although it makes a LOT of unwarranted assumptions - eg, a 'machine' capable of answering a real conversation and feeling like a real person must actually store the information that describes that personality, and there'll be a lot of it, maybe an infinite amount
(**) and it also assumes, again without a good reason, that what we think of as 'matter' is in fact 'dead'. Which almost certainly isn't the case. We know animals have feeling; do bacteria have feeling too? At what point does the subjective experience 'emerge'? The idea of 'emergence' has been tossed around a lot by materialists but it makes very little sense; in weather, eg, a cloud might 'emerge' from the actions of a million water drops but those drops remain drops, and all the behaviour of the cloud IS identical with the behaviour of the drops that make it up. A consistent materialist position on emergent mind then would have to be that the smallest particle/wave of matter must have some tiny amount of subjective feeling. But unless I'm mistaken, very few materialist philosophers seem to hold to this view.



Yes, and if you agree with the String Theory critics like Peter Woit of Not Even Wrong ( http://www.math.columbia.edu/~woit/wordpress/ ), which I tend to do, though he's no friend of psi -- this 'too many dimensions' problem is a huge blind alley in physics. An interesting question, if we acknowledge it's a problem (many physicists still don't), is how we got to this point, and when.

Regards, Nate
Yes, I think Searle got to the heart of the problem with his Chinese Room argument against AI, but somehow he didn't follow through with the significance of his argument - it didn't just damn AI, but also materialistic explanations of consciousness in general. He used to talk about the brain as a 'meat machine', which he seemed to think made it different!

David Chalmers' contribution was related - he came up with the idea of the 'Hard Problem': how do we explain any actual experience in terms of interacting particles and fields? I think that has as much force as ever - people just try to dodge it.

Yes, Peter Woit points out that string theory lacks evidence, but what I don't think he realises is that this problem may go further back. My feeling is that physics theory has exploded far ahead of the evidence, and as a result, it has had to use ever more tenuous arguments to justify itself at all. For example, there was a recent article in Scientific American casting extreme doubt on the very idea of the Big Bang, and pointing out that the theory was sufficiently flexible that it could fit a whole range of data, including that from the microwave background. That data has to be adjusted for contributions from our galaxy, and from even more local contributions from the solar system. After all that, people start to look for small fluctuations in the signal that supposedly represent events a fraction of a second after the Big Bang!

A part of the Skeptiko argument (so to speak) is that science isn't doing so well even on its home territory, never mind when it pronounces on the reality or otherwise of ψ or NDE's!

Hence I am rather wary about 'explaining' other realities in terms of extra dimensions - theory can make us all accustomed to such concepts long before there is any evidence that they are real!

David
 
"Syntax by itself is neither constitutive of nor sufficient for semantics."
This is what the Chinese room thought experiment is intended to prove: the Chinese room has syntax (because there is a man in there moving symbols around). The Chinese room has no semantics (because, according to Searle, there is no one or nothing in the room that understands what the symbols mean). Therefore, having syntax is not enough to generate semantics.

I strongly agree. This problem has had me grinding on a commonsense solution for many years. There was a paper back in the day which is more technical and addresses the argument formally.
Ken Sayre, "Cognitive Science and the Problem of Semantic Content", Synthese 70(2): 247-269 (1987)
The problem of semantic content is the problem of explicating those features of brain processes by virtue of which they may properly be thought to possess meaning or reference. This paper criticizes the account of semantic content associated with Fodor's version of cognitive science, and offers an alternative account based on mathematical communication theory. Its key concept is that of a neuronal representation maintaining a high level of mutual information with a designated external state of affairs under changing conditions of perceptual presentation.
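For anyone who hasn't met the measure Sayre's abstract leans on, here is a minimal sketch (purely illustrative, not from the paper) of mutual information between two discrete variables: a representation that reliably tracks an external state of affairs carries a high value, and one that ignores it carries none.

```python
# Purely illustrative: mutual information I(X;Y) between two discrete variables,
# I(X;Y) = sum over (x,y) of p(x,y) * log2( p(x,y) / (p(x) * p(y)) )
from collections import Counter
from math import log2

def mutual_information(pairs):
    n = len(pairs)
    pxy = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    return sum(
        (c / n) * log2((c / n) / ((px[x] / n) * (py[y] / n)))
        for (x, y), c in pxy.items()
    )

# A 'neuronal state' that tracks an external state of affairs perfectly
# carries maximal mutual information about it...
print(mutual_information([("rain", "wet"), ("sun", "dry")] * 50))   # 1.0 bit
# ...while one that ignores it carries none.
print(mutual_information([("rain", "wet"), ("rain", "dry"),
                          ("sun", "wet"), ("sun", "dry")] * 25))    # 0.0 bits
```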
 
Yes, Peter Woit points out that string theory lacks evidence, but what I don't think he realises is that this problem may go further back. My feeling is that physics theory has exploded far ahead of the evidence, and as a result, it has had to use ever more tenuous arguments to justify itself at all. For example, there was a recent article in Scientific American casting extreme doubt on the very idea of the Big Bang, and pointing out that the theory was sufficiently flexible that it could fit a whole range of data, including that from the microwave background. That data has to be adjusted for contributions from our galaxy, and from even more local contributions from the solar system. After all that, people start to look for small fluctuations in the signal that supposedly represent events a fraction of a second after the Big Bang!
was not aware... thx for the post.

I wonder if this is a revenge of the machines kinda moment... i.e. when our measurement tools become so advanced/refined that what they produce begs to be misinterpreted. kind of like the .001 degree increase in global temperatures.... or the uproar over the use of bayesian analysis in the social sciences.
 
was not aware... thx for the post.

I wonder if this is a revenge of the machines kinda moment... i.e. when our measurement tools become so advanced/refined that what they produce begs to be misinterpreted. kind of like the .001 degree increase in global temperatures.... or the uproar over the use of bayesian analysis in the social sciences.
Yes, I feel some of these people have been given enough rope to hang themselves, and they have done just that in a metaphorical sort of way. The total lack of concern about statistical significance in these "hottest year ever" claims is just way over the top. BTW, I think you meant 0.01 - you added an extra zero, but perhaps you were anticipating future developments in the subject :)

The Big Bang article is behind a paywall, however there is a copy elsewhere on the internet:

https://www.cfa.harvard.edu/~loeb/sciam3.pdf

David
 
The universe is information and symbols?

If you look at it like a programmer, sure! Physical space is a giant RAM buffer where each bit is 'a primordial sub-subatomic particle is here, or isn't'.

I mean, no, that's a Newtonian view, pre-20th century even, and we don't yet have a physical theory that gets us from GR's spacetime and QM's fuzzy probability infinity spaces to something as clear-cut and simple as 'here is a one bit, here is a zero bit'. But if you talked to someone like Stephen Wolfram, that's EXACTLY how he views the universe - as a 'cellular automaton' that happens to be realised on the lowest-possible 'hardware layer' - and he's arguing that as something like the patron saint / spirit animal of the programming viewpoint. This way of thinking ('EVERYTHING is software, EVERYTHING') is so innate to programmers that we don't really even grasp why so many other physicists and philosophers have difficulties with it.
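A minimal sketch of the kind of thing Wolfram has in mind (purely illustrative, using the well-known rule 110): the 'universe' is just a row of bits, and at each step every bit is rewritten from its local neighbourhood, yet surprisingly rich structure unfolds.

```python
# Purely illustrative one-dimensional cellular automaton (rule 110).
# The "universe" is a row of bits; each step, every cell is rewritten
# from the 3-cell neighbourhood around it.

RULE = 110
WIDTH, STEPS = 64, 30

cells = [0] * WIDTH
cells[WIDTH // 2] = 1            # a single 'on' bit as the initial condition

for _ in range(STEPS):
    print("".join("#" if c else "." for c in cells))
    cells = [
        (RULE >> (cells[(i - 1) % WIDTH] * 4 + cells[i] * 2 + cells[(i + 1) % WIDTH])) & 1
        for i in range(WIDTH)
    ]
```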

But this idea ('everything is software; the real world is a special case') underlies why programmers, particularly, have a near-religious fascination with the idea of 'nanotechnology' (and why there's so much hype around '3D printers', seen as the first small step towards nanotech). Our life is spent in writing code that writes other code to RAM buffers. We naturally see physical space as a RAM buffer that - for some gosh-darned annoying reason we don't understand - we CAN'T write to, at least not directly. Eric Drexler's Nanotech idea promises a 'fix', or at least a hacky workaround, for either this bug or this security feature (same thing, to a hacker): an array of self-replicating tiny robots which will take software commands from us and then directly edit matter at an atomic level. Nanotech becomes a giant cable into the universe's master RAM bank. We'll 'fix the world' by rewriting it atom by atom until it's exactly what we want! We'll rewrite our own bodies! We'll recode our own DNA! We'll upgrade our brains to diamond chips!

Yes, this is an insane, power-hungry dream, and it completely ignores that by all reasonable standards nanotech ALREADY exists: it's biological, carbon-based life. And biological life is vastly complicated and has a whole lot of limitations that our fantasy Drexler nanotech doesn't (because it's a fantasy); it needs power sources, it can't arbitrarily shuffle any atoms, just certain ones (mostly carbon), it... eats and craps and breeds, etc. Nanotech, if we built it, would have all these features. We'd do better to just fire up gene sequences in a CAD program and splice them into bacteria (which we're already doing, and God help us).

But programmers, while loving the idea that 'physical space is a RAM buffer, we should be able to write to it, and if we do we'll achieve $win_condition' - we also generally have a kind of terror of actual physical biology. Not sure why. It's just something about the culture. Why hackers tend to sneer at the body, even the brain, and call it 'meat'. The assumption being that silicon chips keep getting faster year over year so therefore computers are more powerful than human brains. A dream that we should all rewrite our bodies to be steel and diamond and silicon (while somehow preserving our sense of consciousness and self-awareness) and then we'll be IMMORTAL, MUHAHAHA!!!! Crush puny humans! TRANSHUMANISM!

This was kind of cute back in the 90s when hackers just read lots of sci-fi, but now Silicon Valley has hundreds of billions of dollars of capital and the ability to carry out Bond Villain level schemes and actually many of these billionaires have cult-like beliefs in Transhumanism and Disruption and can do serious damage. Like, replace millions to billions of people with robots and then deny them all food, water, housing, justice and healthcare (because they're 'inferior' - obviously they are, or they'd have a high paying tech job as one of the dozen remaining programmers at the three remaining companies writing the robots!), sort of damage.

So it's not quite such a funny belief anymore.

And of course it doesn't do anything to explain our sense of self-awareness.

Regards, Nate
 
Peter Woit points out that string theory lacks evidence, but what I don't think he realises is that this problem may go further back. My feeling is that physics theory has exploded far ahead of the evidence, and as a result, it has had to use ever more tenuous arguments to justify itself at all. For example, there was a recent article in Scientific American casting extreme doubt on the very idea of the Big Bang, and pointing out that the theory was sufficiently flexible that it could fit a whole range of data, including that from the microwave background. That data has to be adjusted for contributions from our galaxy, and from even more local contributions from the solar system. After all that, people start to look for small fluctuations in the signal that supposedly represent events a fraction of a second after the Big Bang!

Well said. As I guess you know, the electric universe physicist, Wal Thornhill, says the same:


At 6:05 mins., Wal Thornhill mentions Peter Woit's work. And going further: Relativity, Quantum Mechanics, String Theory, etc. are all far beyond the evidence. They are theories built on theories built on theories, then passed off as fact by TV talking-head pseudo-scientists such as Richard Dawkins.

Btw, in another presentation, Thornhill talks about the book by Halton Arp, which shows redshift does not mean that there was a Big Bang; see from 8:15 mins.:

Isn't it also interesting that so-called "skeptics" tackled Halton Arp's work via mathematical models? This was of course how Ed May slammed Dean Radin.

There's a pattern here. Those promoted by the system base what they do on assumptions and mathematical models, instead of just following the data.
 
PS: The only weakness I find in Wal Thornhill's work is that he seems too reductionist about time. After all, I think the evidence shows precognition is real. So I guess time isn't as simple as Thornhill states.
 