Is thinking merely the action of language mechanisms


Sciborg_S_Patel

#2
No.

I think the best argument via intentionality (the aboutness of thought) for immaterialism actually comes from a materialist trying to argue away thought - Alex Rosenberg in The Atheist's Guide to Reality:

Now, here is the question we’ll try to answer: What makes the Paris neurons a set of neurons that is about Paris; what makes them refer to Paris, to denote, name, point to, pick out Paris?...

The first clump of matter, the bit of wet stuff in my brain, the Paris neurons, is about the second chunk of matter, the much greater quantity of diverse kinds of stuff that make up Paris. How can the first clump—the Paris neurons in my brain—be about, denote, refer to, name, represent, or otherwise point to the second clump—the agglomeration of Paris?...

A more general version of this question is this: How can one clump of stuff anywhere in the universe be about some other clump of stuff anywhere else in the universe—right next to it or 100 million light-years away?

...Let’s suppose that the Paris neurons are about Paris the same way red octagons are about stopping. This is the first step down a slippery slope, a regress into total confusion. If the Paris neurons are about Paris the same way a red octagon is about stopping, then there has to be something in the brain that interprets the Paris neurons as being about Paris. After all, that’s how the stop sign is about stopping. It gets interpreted by us in a certain way. The difference is that in the case of the Paris neurons, the interpreter can only be another part of the brain...

What we need to get off the regress is some set of neurons that is about some stuff outside the brain without being interpreted—by anyone or anything else (including any other part of the brain)—as being about that stuff outside the brain. What we need is a clump of matter, in this case the Paris neurons, that by the very arrangement of its synapses points at, indicates, singles out, picks out, identifies (and here we just start piling up more and more synonyms for “being about”) another clump of matter outside the brain. But there is no such physical stuff.

Physics has ruled out the existence of clumps of matter of the required sort...
 
#3
No.

I think the best argument via intentionality (the aboutness of thought) for immaterialism actually comes from a materialist trying to argue away thought - Alex Rosenberg in The Atheist's Guide to Reality:
That quote doesn't make any sense to me... it's not clear what is meant by 'paris neurons'... neurons is plural, so I'm assuming more than one... and more than one must form a pattern in spacetime... such a pattern can fire together in time... and such synchronous firing has been shown to be repeatedly correlated with sensory patterns in the external world. Paris is a shape, a word, 5 letters which has multiple contextual associations... just what is meant by the author is very unclear.

I also spotted at least one mistake...

"...Let’s suppose that the Paris neurons are about Paris the same way red octagons are about stopping..."​

Clearly these two assumptions are not the same... to be the same they would be written...

"...Let’s suppose that the Paris neurons are about Paris the same way red octagons are about red octagons..."​

also, this is just wrong...

"...What we need is a clump of matter, in this case the Paris neurons, that by the very arrangement of its synapses points at, indicates, singles out, picks out, identifies (and here we just start piling up more and more synonyms for “being about”) another clump of matter outside the brain. But there is no such physical stuff. Physics has ruled out the existence of clumps of matter of the required sort..."​

Clumps of matter - which are patterns in space - are exactly what is required to store, share and manipulate access to information through time, within the same relative space.
 

Brian_the_bard

#7
There is certainly a connection between thinking and language, as I found out when I had to learn Swedish. There is a theory that babies don't experience any separation between themselves and their environment prior to learning to differentiate between themselves and their parents, for example. I think language affects our thinking more than we realise, but it is only a steering wheel, not an engine.
 

Sciborg_S_Patel

#8
That quote doesn't make any sense to me... it's not clear what is meant by 'paris neurons'... neurons is plural, so I'm assuming more than one... and more than one must form a pattern in spacetime... such a pattern can fire together in time... and such synchronous firing has been shown to be repeatedly correlated with sensory patterns in the external world. Paris is a shape, a word, 5 letters which has multiple contextual associations... just what is meant by the author is very unclear.

I also spotted at least one mistake...

"...Let’s suppose that the Paris neurons are about Paris the same way red octagons are about stopping..."​

Clearly these two assumptions are not the same... to be the same they would be written...

"...Let’s suppose that the Paris neurons are about Paris the same way red octagons are about red octagons..."​

also, this is just wrong...

"...What we need is a clump of matter, in this case the Paris neurons, that by the very arrangement of its synapses points at, indicates, singles out, picks out, identifies (and here we just start piling up more and more synonyms for “being about”) another clump of matter outside the brain. But there is no such physical stuff. Physics has ruled out the existence of clumps of matter of the required sort..."​

Clumps of matter - which are patterns in space - are exactly what is required to store, share and manipulate access to information through time, within the same relative space.
I don't think that's a mistake. He's saying for the brain to have thoughts one part of matter must represent a concept. "City" is a concept, and "Paris" is a concept given to a particular city in a particular space/time location.

So in the same way we accept the red octagon is about stopping we would need neurons to be about the concept of Paris. His argument is then that matter doesn't represent anything intrinsically and so - as a physicalist who denies consciousness as fundamental - he concludes there are no thoughts.

I also don't know how matter, on its own, is storing information? It seems to me minds project meaning on to the patterns. A diary stores information because minds accept there are letters but English isn't providing information-content to someone who only speaks Chinese. Basically without minds I don't think there are thoughts as any pattern in space can represent (almost?) anything minds desire.

But I can see the argument for needing minds + patterns to have any information/communication, perhaps even for introspection.
 

Sciborg_S_Patel

#9
Addendum to previous posts:

Does Mary know I experience plus rather than quus? A new hard problem

Contrast your experience with that of a monolingual Japanese person, call her Ayuko, as I say to both of you: ‘God is a friend to all’. Even though your respective perceptual experiences may be similar in many ways – they may be indiscernible in terms of their visual representational content – the fact that only one of you understands the sentence entails a phenomenological difference; what it is like to hear a language you understand is very different from what it is like to hear a language you don’t understand.
How is a mere conscious experience, something akin to ‘headaches, tickles and nausea’, able to represent so much? This question, and the worry it expresses, sound reasonable only because our philosophical tradition, in so far as it accepts the reality of conscious experience at all, has become used to dealing with a caricature of consciousness: what it’s like to see red, what it’s like to taste lemons, what it’s like to feel pain. It is customary for philosophers to talk as though such ‘raw feels’ exhausted the nature of consciousness.

But it is crazy to think that conscious experience is exhausted by raw feels. Even if we stick to perceptual experience, its representational powers vastly outstrip what the crude raw feels model allows for. We experience sad faces, angry rampages, gluttonous displays of eating, joyous dancing, not just in the sense that such things are the objects of our experience, but in the sense that our experience represents them as such. We see the lion as about to pounce, the huge man as capable of overpowering, the mother’s caress as expressing compassion. If I am having an experience as of a face filled with angst, then any phenomenal duplicate of mine – my brain in a vat twin or my Cartesian ego twin – is also having an experience as of a face filled with angst. Our perceptual experience is conceptually saturated, to borrow a phrase from P. F. Strawson (1979). Being a functional duplicate of a human being, but one whose conscious experience was limited to raw feels, would be little better than being a zombie. Once the depth and remarkable versatility of human consciousness is fully appreciated, there is little difficulty in allowing that human conscious experience is capable of representing the semantic properties of words, complex though they are.
The demon went home and wrote up the experiment. He read over it and reflected. ‘Well that’s it’, he thought, ‘I’ve finally shown conclusively that physicalism is false. In the second part of the experiment Mary learnt a new fact about Cuthbert at t: she learnt the meaning Cuthbert’s perceptual experience represents ‘plus’ as having. In the first part of the experiment she already knew all the physical facts about Cuthbert at t, so this new fact must be a non-physical fact about Cuthbert at t. Therefore there are non-physical facts; therefore physicalism is false’.

In the first experiment the demon had come to doubt the findings of his experiment, on the grounds that he couldn’t rule out that Mary had merely learnt new know how, or new ways of thinking about a fact she already knew. But in the case of this new experiment, these options seemed implausible. It was impossible to deny that, in learning that Cuthbert’s perceptual experience represents ‘plus’ to mean plus, rather than quus or zuus, Mary learnt new information, in the sense of ruling out genuine possibilities, i.e. the possibility that Cuthbert’s experience represents ‘plus’ to mean quus, and the possibility that Cuthbert’s perceptual experience represents ‘plus’ to mean zuus.
I'm not convinced that even the first argument from Mary's experience of "redness" is explicable by physicalism. See Feser's When Frank Jilted Mary for an argument about this.
 
#10
I don't think that's a mistake. He's saying for the brain to have thoughts one part of matter must represent a concept. "City" is a concept, and "Paris" is a concept given to a particular city in a particular space/time location.

So in the same way we accept the red octagon is about stopping we would need neurons to be about the concept of Paris. His argument is then that matter doesn't represent anything intrinsically and so - as a physicalist who denies consciousness as fundamental - he concludes there are no thoughts.

I also don't know how matter, on its own, is storing information? It seems to me minds project meaning on to the patterns. A diary stores information because minds accept there are letters but English isn't providing information-content to someone who only speaks Chinese. Basically without minds I don't think there are thoughts as any pattern in space can represent (almost?) anything minds desire.

But I can see the argument for needing minds + patterns to have any information/communication, perhaps even for introspection.
He clearly said Paris = Paris, and Red Octagon = Stopping... there is something wrong with that. What you are doing is making assumptions and filling in the blanks... like suggesting the association with the patterns on the screen should be associated with 'city'. But all he did in your quote is say Paris = Paris - which is correct and has no other association.

You have to be really really accurate and clear when you start talking about these issues... and he's not.

I accept some of what you say, it is a loop that needs observers, but he's still wrong in what he says.

There is little doubt that a pattern in the brain is correlated with a pattern in the everyday external world. It's also clear that both patterns in the brain, and patterns you've made say... in a diary are both in the same everyday external world - there is nothing special about either. There is however a difference in their capabilities.

From one spacetime perspective... When I accelerate matter with energy in space-time, to say... write in my diary... I'm adding to nature... and I'm doing those amazing things that people talk about on here every day.... I'm manipulating, storing and sharing information so that it can be accessed again in spacetime... by me, or third parties. It is a totally and utterly amazing ability...

Staring at those patterns in my diary again... allows me a way of more accurately recreating the original patterns in my brain when I wrote in the diary, and allows me to more accurately access that past information again from somewhere else in spacetime. And crucially for me... my experience comes because of a second mechanism, which makes all alike patterns add up.

But in my brain, patterns can interfere with one another, and the networks through which the patterns were originally created will change through experience (learning).

The reason we can share here in spacetime, is because we really are sharing. We can manipulate stuff in this place (in the brain or the everyday world - they are both the same), we can store in this place (in the brain or the everyday world - they are both the same), and we can share with other observers (systems) in this place (in the brain or the everyday world - they are both the same).
 
#11
I'm not convinced that even the first argument from Mary's experience of "redness" is explicable by physicalism. See Feser's When Frank Jilted Mary for an argument about this.
If you know anything about colour... you know that the argument's got problems from the start...

If you know anything about Heisenberg's principle of uncertainty and relativity... you know it's impossible in principle, to know everything.
 
#12
First, I must admit that I haven't listened to the video (at least yet) - the speaker started off in an utterly pedantic voice...

To me, thinking more or less implies understanding stuff. I am sure there is plenty to understand in Paris, but I don't think this example is very helpful. I prefer to think about what it means to understand a simple maths concept.

I remember walking with my Dad and he was testing me on my multiplication tables. I asked him why it was that when you multiplied two numbers, you got the same answer if you swapped them round (why multiplication is commutative, in maths parlance). After all, there was no obvious reason why the 7th number in the fifth table simply had to be equal to the 5th number in the seventh table! I don't think he gave a very useful answer, because I went on thinking about this issue as we continued on the walk, until I suddenly imagined a crate of milk bottles (modern kids don't realise that milk comes in crates - some of them even have the horribly unhygienic notion that it comes from cows!). To work out the number of bottles in a rectangular crate, you multiply the number down two sides - and the number of bottles in the crate (assuming it is full) must always be the same - voila - I understood!

To me, talk of neural nets just disguises the fact that we are talking about computer understanding. Some people would say that their computer understands that multiplication is commutative because it gives the same answer regardless of order. However, that is the type of reasoning I rejected at a rather young age with my Dad. But how about an algebra program? These can simplify expressions like A*B - B*A, so they have to understand multiplication, don't they? I'd argue that they don't, because multiplication is still a black box operation to them - it doesn't mean anything. If, for example, you applied such a program to matrices (where multiplication doesn't commute), such a program would effortlessly get the wrong answer.
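The matrix point above can be checked directly. Here is a minimal sketch (my own illustration using NumPy, not anything from the thread) showing that ordinary numbers commute under multiplication while matrices in general do not:

```python
# Minimal illustration: scalar multiplication commutes,
# matrix multiplication in general does not.
import numpy as np

# Scalars: the milk-crate argument in miniature
a, b = 5, 7
print(a * b == b * a)  # True

# Two example 2x2 matrices
A = np.array([[1, 2],
              [3, 4]])
B = np.array([[0, 1],
              [1, 0]])

# A*B - B*A is not zero here, so blindly "simplifying" it away
# would give the wrong answer for matrices.
print(np.array_equal(A @ B, B @ A))  # False
```

An algebra program that cancels A*B - B*A without checking what A and B are would fail on exactly this kind of input, which is the poster's point about black-box "understanding".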

Yet how do you program a computer to think in terms of invented analogies such as crates of milk bottles? Notice that that idea isn't linguistic in any way at all.

To me, thinking is more fundamental than language. Indeed, we repeatedly hear from people who have met others while in an NDE, that people out there don't communicate in words.

David
 

Sciborg_S_Patel

#13
He clearly said Paris = Paris, and Red Octagon = Stopping ...there is something wrong with that. What you are doing is making assumptions and filling in the blanks... like suggesting the association with the patterns on the screen should be associated with 'city'. But all he did in your quote is say Paris = Paris - which is correct and has no other association.
I was thinking it was like this:

Paris Neurons -> about -> Paris

Red Octagon -> about -> Stopping

The problem I can see is the latter would also, under his argument, depend on neurons correlating the red octagons and stopping. But I have to admit I'm not seeing the problem with the original statement?

I have been thinking about patterns some more, and how they - and the substance used to make them - may be necessary but not sufficient. So minds might need to always be embodied, which doesn't necessarily mean embodied in this world or in the same way. The notion of subtle bodies has intrigued me since they got some attention in Beyond Physicalism.
 
#14
I was thinking it was like this:

Paris Neurons -> about -> Paris

Red Octagon -> about -> Stopping

The problem I can see is the latter would also, under his argument, depend on neurons correlating the red octagons and stopping.
Yep, the statements are not alike... there ain't a lot of point in going any further for me, although I will say that this part is just gobbledygook...

"...What we need is a clump of matter, in this case the Paris neurons, that by the very arrangement of its synapses points at, indicates, singles out, picks out, identifies (and here we just start piling up more and more synonyms for “being about”) another clump of matter outside the brain. But there is no such physical stuff. Physics has ruled out the existence of clumps of matter of the required sort..."
You obviously think that statement is significant in some way, but I haven't the foggiest why? The claim is totally wrong, sensory patterns from outside the brain (i.e. the external objective world), do correlate with firing patterns inside the brain, that's just a fact.

As for terms like embodiment, subtle bodies, substance, minds, world... they ain't specific enough for me to understand the point you're making?
 