Hi LetsEat. Great comments. I'm trying to parse out your reply here a bit, because either your words or Searle's are confusing me:
I don't think the Chinese room experiment does this, does it not actually display the opposite? Searle here says the Chinese room is a refutation of computational theory of mind
I agree that Searle's original Chinese Room was intended as, and felt to be, a refutation of the computational theory of mind, yes. That is, inasmuch as I understand what 'computational theory of mind' means, which I may not; I'm a programmer, not a philosopher, and we're a rather blunt-force species that doesn't really grasp the sort of subtleties modern philosophy is made of.
When I said
'it assumes... that the 'machine' of cards and instructions is powerful enough to simulate a human'
and you said:
I don't think the Chinese room experiment does this, does it not actually display the opposite?
I meant literally that: it is my understanding that the Chinese Room was a reply to the Turing Test, and therefore that the machine in the Chinese Room is required to be able to pass the Turing Test. It's not just, eg, a crossword puzzle solver. Nor is it a Twitter chatbot that's trying to sidle into your mentions and trick you into clicking on one weird link. And it's not even just a Siri, or a Jeopardy solver like Watson. It must be able to carry on a conversation with a hostile, inquisitive, emotionally intelligent human who is attempting to determine if the machine is sufficiently humanlike to pass for a human. At least that's my reading of it.
The Turing Test is not about how easily you can trick a human for a brief moment of time; if you wanted that, a dressmaker's dummy would already have passed. You have to give the human the ability to ask real, searching questions. In a real live-fire Turing Test situation, a well-trained human can utterly destroy a basic chatbot in a couple of moves. Just ask it questions about its love life, ask it to summarise Romeo and Juliet and write a poem... and ask the same question and keywords several times, to make sure all the canned answers are flushed out of the system.
This is why I say that passing the Turing Test is really, really, really, REALLY hard. To pass the Turing Test you need to implement a full simulation of a human, and that includes an emotional core. An entire simulated life. An entire simulated social network of family and friends. An entire simulated knowledge of pop culture, politics, music, history... And the presupposition of Searle's Chinese Room is that the AI built out of paper can do all of this, although I'm not sure he grasps HOW difficult that is, or how much data would be involved. It would be the equivalent of several skyscrapers' worth of paper, I think. Maybe a city. Or a continent.
Of course whether your thought experiment can actually be built is probably not the core business of philosophers. Searle is telling a story to get an emotional reaction, not building a machine. But building machines that work IS the core business of programmers, and that's one reason why the Chinese Room always tends to raise our eyebrows. 'Do you realise what this thing would actually BE?'
It's not a five-line Eliza script, is what I'm trying to say.
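To make that concrete, here's a toy sketch of my own (in Python, purely illustrative; not anything Searle or anyone in this thread wrote) of roughly what a 'five-line Eliza script' amounts to, and why asking the same question twice flushes out the canned answers:

```python
# A toy, Eliza-style chatbot (my own sketch, purely illustrative).
# It keys on a few words and falls back to canned deflections, which is
# exactly why asking the same question twice exposes it.
import random

CANNED = {
    "love": "Why do you ask about love?",
    "romeo": "Shakespeare is fascinating, isn't he?",
}
FALLBACKS = ["Tell me more.", "Why do you say that?", "Interesting. Go on."]

def reply(utterance: str) -> str:
    for keyword, answer in CANNED.items():
        if keyword in utterance.lower():
            return answer            # same keyword -> same canned answer, every time
    return random.choice(FALLBACKS)  # no understanding, just deflection

print(reply("Tell me about your love life."))
print(reply("No, really, tell me about your love life."))  # identical answer: busted
```

Two near-identical questions get the identical canned answer, which is the kind of thing a hostile human interrogator notices in about thirty seconds. The Chinese Room's machine is supposed to be the opposite of this.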
(I'm a weird programmer because I also believe in the soul. But I appreciate rigor and correctness, because things in my world simply break if you don't have those. And though I agree with his conclusion, I've always felt that Searle, like many philosophers, was trying to pull a few fast ones.)
Next, when Searle says:
The form of the argument is to show how a human agent could instantiate the program and still not have the relevant intentionality.
On that, I sorta agree and sorta disagree. My feeling has always been that a human living in the Chinese Room and 'running the program' would not stay untouched by it. (And since it's a story that invites one to use one's imagination, again my imagination goes to the SHEER SIZE of the program: it would be this huge Gormenghast-like cavern-cathedral-city of vast filing towers, with a whole support staff living and breathing the maintenance of the machine; it couldn't just be this one guy.) But again, being 1) a programmer who's run simulations of a program by hand, on pencil and paper, and 2) an English speaker who's learning Chinese, I can tell you that in both cases you can't simulate or live within a system without somehow coming to *feel* that system. What those codes mean, what they do in the outside world beyond your office.
I mean, this is literally what I do as a day job. I program computers. I learn symbol systems that start out as alien languages. I figure out what they mean and do. I assign emotional states to numbers. ('404' is a bad number because it means 'something people wanted from the web server isn't there, and lots of them are unhappy with me'.)
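If it helps, here's that idea in code rather than prose; a toy of my own, with hypothetical labels, not anything from the discussion. The number on the wire is pure syntax; the 'meaning' is just the table I carry around in my head:

```python
# Toy illustration (mine, hypothetical): the status code itself is just syntax,
# a bare number coming off the wire. The 'meaning' lives in the table a
# programmer keeps in their head, written down here as a dict.
STATUS_MEANINGS = {
    200: "everything is fine, nobody will email me",
    404: "somebody asked for a page that isn't there",
    500: "the server is broken and lots of people are unhappy with me",
}

def interpret(status_code: int) -> str:
    """Map a bare number onto the 'semantics' a programmer assigns to it."""
    return STATUS_MEANINGS.get(status_code, "a number I haven't learned to feel anything about yet")

print(interpret(404))
```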
But yeah, the slow feeling for, eg, a foreign language, a mathematical algorithm, or a piece of music written as mathematical notation on paper, the feeling a human picks up as part of 'executing a program' they don't initially understand, is definitely a very *different* kind of intentionality than the intentionality of the machine, which, although it has some kind of 'purpose' or inevitability in its rule-following, and can 'process information', can't really be said to have 'feeling'.
We and Searle and Chalmers are all probably in agreement here, I think?
And finally, I think this is you and Wikipedia saying:
Also there is no reference to feelings(qualia) at all in the original argument, its entirely about syntax not being sufficient for semantics.
"Syntax by itself is neither constitutive of nor sufficient for semantics."
This is what the Chinese room thought experiment is intended to prove: the Chinese room has syntax (because there is a man in there moving symbols around). The Chinese room has no semantics (because, according to Searle, there is no one or nothing in the room that understands what the symbols mean). Therefore, having syntax is not enough to generate semantics.
(https://en.wikipedia.org/wiki/Chinese_room#Complete_argument)
I don't see any resemblance in this argument to what you posted as it being, which is why I asked if you had a source on a Chalmers variant because these things are not similar.
On Chalmers being about subjective awareness, I think I posted already; and I confess I'm pretty shocked if Searle somewhere argues that his take on the Chinese Room is NOT about subjective awareness. That's literally what everyone else who's been arguing about it for 30-odd years thinks it's about... isn't it?
Further though, I guess I don't quite grasp the syntax/semantics distinction.
This is me as a programmer again: I live in a world where everything is syntax, and that includes 'semantics'. In programming, 'semantics' is a term of art that means just what happens when a program runs: the 'meaning' of a symbol is what causes it to appear, and what appears somewhere else when it appears. But all those 'things' are generally chunks of 'syntax', ie, data. At some point down the line that data becomes 'a physical rod moves and pokes something' or 'a camera takes an image', but if you look at that with a programmer's eye you see: a row of atoms changed its state. The row of atoms making up the table in my office looks very much like the row of atoms in the RAM chip in my computer. So, isn't everything in the world just rows of atom-shaped bits? It's a very simplifying idea that feels very powerful.
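Here's roughly what I mean by the programmer's 'semantics', as a toy interpreter of my own (a sketch, not anyone's canonical definition): the 'meaning' of each symbol is nothing more than the change it causes in a row of memory cells when the program runs.

```python
# A toy interpreter (my own sketch). The 'semantics' of each symbol is defined
# entirely by what it does to the machine state when the program runs:
# rows of symbols in, a changed row of cells out.
def run(program: str, cells=None):
    cells, pointer = cells or [0] * 8, 0
    for symbol in program:
        if symbol == ">":   pointer += 1         # 'meaning' of '>': move right
        elif symbol == "<": pointer -= 1         # 'meaning' of '<': move left
        elif symbol == "+": cells[pointer] += 1  # 'meaning' of '+': bump this cell
        elif symbol == "-": cells[pointer] -= 1  # 'meaning' of '-': decrement it
    return cells

print(run("+++>++>+"))  # [3, 2, 1, 0, 0, 0, 0, 0]
```

Nothing in there 'understands' anything; the symbols just are what they do to a row of cells. That's the whole of 'semantics' in my world.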
I think Korzybski had a similar idea of 'semantics' when he developed 'General Semantics': the idea (as I loosely understand it) that the 'meaning' of a word or idea is simply 'what happens in your body, including your entire nervous system and your gut and your muscles, when you experience that word or idea'.
Everything in the physical world, being rows of atoms, can in other words be thought of as rows of symbols. So all semantics is just someone else's syntax, and vice versa. That's the way we programmers think. We have a hammer called computing and we see everything as a nail.
Having said that, I get that emotion, feeling, and subjective awareness are a special kind of semantics: the semantics of the human soul, not the human body, which quite possibly can't be reduced to 'rows of atoms'. And I think perhaps that's what Searle meant when he wrote 'semantics'? So perhaps this is a clash of definitions between how programmers think of 'semantics' and how philosophers think of it.
Regards, Nate