How the light gets out

I have an old critique from another forum I can post if people want to discuss it again?
 
Sure. As long as it isn't an extended argument from incredulity... ;)

You'll have to be the judge of that, since I'm not sure what that means, though I suspect I don't need an extended argument when I can just quote the man himself:

It seems crazy to insist that the puppet’s consciousness is real. And yet, I argue that it is. The puppet’s consciousness is a real informational model that is constructed inside the neural machinery of the audience members and the performer. It is assigned a spatial location inside the puppet. The impulse to dismiss the puppet’s consciousness derives, I think, from the implicit belief that real consciousness is an utterly different quantity, perhaps a ghostly substance, or an emergent state, or an oscillation, or an experience, present inside of a person’s head. Given the contrast between a real if ethereal phenomenon inside of a person’s head and a mere computed model that somebody has attributed to a puppet, then obviously the puppet isn’t really conscious. But in the present theory, all consciousness is a “mere” computed model attributed to an object. That is what consciousness is made out of. One’s brain can attribute it to oneself or to something else. Consciousness is an attribution…

“In some ways, to say, ‘this puppet is conscious’ is like saying, ‘This puppet is orange.’ We think of color as a property of an object, but technically, this is not so. Orange is not an intrinsic property of my orangutan puppet’s fabric. Some set of wavelengths reflects from the cloth, enters your eye, and is processed in your brain. Orange is a construct of the brain. The same set of wavelengths might be perceived as reddish, greenish, or bluish, depending on circumstances…To say the puppet is orange is shorthand for saying, ‘A brain attributed orange to it.’ Similarly, according to the present theory, to say that the puppet is conscious is to say, ‘A brain has constructed the informational model of awareness and attributed it to that tree.’ To say that I myself am conscious is to say, ‘My own brain has constructed an informational model of awareness and attributed it to my body.’ These are all similar acts. They all involve a brain attributing awareness to an object.”
~ Michael Graziano (2013), Consciousness and the Social Brain, p. 208

Anyway, my 2014 critique from another site with some amendments (I'm more sympathetic to IIT after discussion with Neil):

10/2014 ->

Works referenced:

What Consciousness Is Not by Raymond Tallis

There Are No Easy Problems of Consciousness by EJ Lowe

Is the Brain a Digital Computer? by John Searle



"Kevin is the perfect introduction. Intellectually, nobody is fooled: we all know that there’s nothing inside. But everyone in the audience experiences an illusion of sentience emanating from his hairy head. The effect is automatic: being social animals, we project awareness onto the puppet. Indeed, part of the fun of ventriloquism is experiencing the illusion while knowing, on an intellectual level, that it isn’t real."

Except there are times when you don't actually believe in the illusion, as he mentions later. And it's not clear going along with a puppet show is the same thing as "projecting awareness". This reflects an attempt to solve a different problem than the Hard Problem.

"Many thinkers have approached consciousness from a first-person vantage point, the kind of philosophical perspective according to which other people’s minds seem essentially unknowable. And yet, as Kevin shows, we spend a lot of mental energy attributing consciousness to other things. We can’t help it, and the fact that we can't help it ought to tell us something about what consciousness is and what it might be used for. If we evolved to recognise it in others – and to mistakenly attribute it to puppets, characters in stories, and cartoons on a screen — then, despite appearances, it really can’t be sealed up within the privacy of our own heads."

Thinkers approach consciousness from a first-person vantage point because that's part of what consciousness is. And I'd need more proof that we spend a lot of mental energy attributing consciousness to other things. I'm not thinking my desk or my lamp is conscious, or anything else in this room (barring pests). Again, accepting a narrative temporarily is not the same as a mistaken attribution. In any case, the leap from our mistakenly attributing consciousness to things that lack it to the conclusion that we don't have it either is unwarranted.

"...In the computer age, it is not hard to imagine how a computing machine might construct, store and spit out the information that ‘I am alive, I am a person, I have memories, the wind is cold, the grass is green,’ and so on. But how does a brain become aware of those propositions? The philosopher David Chalmers has claimed that the first question, how a brain computes information about itself and the surrounding world, is the ‘easy’ problem of consciousness. The second question, how a brain becomes aware of all that computed stuff, is the ‘hard’ problem."

IMO Chalmers actually gave up too much - for example, it's not clear the brain holds memories or is capable of holding meaning and representation (intentionality). As EJ Lowe states, if you go with Chalmers you're basically going with "functionalism + qualia".

There's also a problem with the idea that the brain does "information processing" in the way a computer does - as Searle notes, brains aren't digital computers.

"I believe that the easy and the hard problems have gotten switched around. The sheer scale and complexity of the brain’s vast computations makes the easy problem monumentally hard to figure out. How the brain attributes the property of awareness to itself is, by contrast, much easier. If nothing else, it would appear to be a more limited set of computations."

Actually, since the "easy" problems require phenomenal awareness & intentionality, to me this is already wrong. We also see the aforementioned confusion of issues repeated here -> that the brain attributes awareness to itself. But this is just saying the brain somehow draws consciousness out of the ether and associates itself with awareness. The old chestnut still applies: if consciousness is an illusion, who precisely is being fooled?

"Too much information comes in from the outside world to process it all equally, and it is useful to select the most salient data for deeper processing. Even insects and crustaceans have a basic version of this ability to focus on certain signals. Over time, though, it came under a more sophisticated kind of control — what is now called attention."

I'm not sure awareness is just information processing, since like Searle I'm not convinced brains "process information" in the computational sense. Additionally, it seems to me attention requires awareness as a necessary condition, and is tied up intimately with it to the point that we might only be able to speak of them as separate things in the abstract. So it's not clear why awareness would be a kind of "control" that becomes more "sophisticated".

"Attention requires control. In the modern study of robotics there is something called control theory, and it teaches us that, if a machine such as a brain is to control something, it helps to have an internal model of that thing. Think of a military general with his model armies arrayed on a map: they provide a simple but useful representation — not always perfectly accurate, but close enough to help formulate strategy."


Having an internal model sufficient for a computer to manipulate its environment seems to me different from mental control; this conflates different things. And having representations requires intentionality, something different from programs that manipulate syntax.

More than control, attention requires awareness as a necessary condition: awareness that then focuses on certain subjective experiences and/or thoughts about things (intentionality) to the exclusion of others.

It seems to me the reason Graziano can believe in his theory is that he's committing the exact mistake he warns us about: attributing awareness to things that don't have it.

"The brain will attribute a property to itself and that property will be a simplified proxy for attention. It won’t be precisely accurate, but it will convey useful information. What exactly is that property? When it is paying attention to thing X, we know that the brain usually attributes an experience of X to itself — the property of being conscious, or aware, of something. Why? Because that attribution helps to keep track of the ever-changing focus of attention."

So basically he assumed awareness, and then came up with a way to prove something he'd already tacitly assumed in the construction of his theory.

Beyond that I don't know what a "simplified proxy for attention" is. And consciousness, as awareness, provides us a lot of information that isn't useful for particular tasks.

'I call this the ‘attention schema theory’. It has a very simple idea at its heart: that consciousness is a schematic model of one’s state of attention. Early in evolution, perhaps hundreds of millions of years ago, brains evolved a specific set of computations to construct that model. At that point, ‘I am aware of X’ entered their repertoire of possible computations.'

Save that consciousness is awareness (or at least awareness is the preliminary form of consciousness), and this awareness is the very thing Graziano assumed when he started. A schema is a model, which means such a model would consist of thoughts about things. But it's intentionality that refers to thoughts about things, though in practice I'm not sure you can separate intentionality and phenomenal experience. As EJ Lowe notes in There Are No Easy Problems of Consciousness, when you see a red eraser on top of a brown book, you are experiencing color and having thoughts referring to two distinct objects. (More on this later)

But regardless of the degree to which intentionality and consciousness are intertwined, I don't think Graziano provides an explanation for either in attempting to explain the latter in terms of the former. And if I'm correct and you can't separate them then the theory is nonsensical.

"And then what? Just as fins evolved into limbs and then into wings, the capacity for awareness probably changed and took on new functions over time. For example, the attention schema might have allowed the brain to integrate information on a massive new scale....An internal model of attention therefore collates data from many separate domains. In so doing, it unlocks enormous potential for integrating information, for seeing larger patterns, and even for understanding the relationship between oneself and the outside world."

I think this whole idea of Integrated Information needs to be better examined (2016 note - thanks to Neil for elaborating on this in the other thread, though it's not clear to me whether he's referencing IIT here). Why so many people associate information with an explanation of consciousness isn't clear to me, and I suspect this also runs into confusions similar to the ones Graziano seems to have. But for now I think we can just note that there's no reason given as to why establishing larger patterns via computational modeling would allow us to experience the qualia of seeing red or smelling garlic, let alone the qualia of feeling like you're running late or a sense of linear time.

Additionally I don't think the brain is necessarily integrating information in the way he seems to believe.

"Such a model also helps to simulate the minds of other people. We humans are continually ascribing complex mental states — emotions, ideas, beliefs, action plans — to one another. But it is hard to credit John with a fear of something, or a belief in something, or an intention to do something, unless we can first ascribe an awareness of something to him. Awareness, especially an ability to attribute awareness to others, seems fundamental to any sort of social capability."

But simulating expectations of how others might act is not the Hard Problem. This is where I wonder if Graziano is increasingly aware that his concept of "awareness" has wandered away from the problem he claimed to have a solution for. Of course part of the problem is he's already assumed awareness, which means he's snuck in consciousness from the beginning, so now his non-solution only buries the original assumption.

"We paint the world with perceived consciousness. Family, friends, pets, spirits, gods and ventriloquist’s puppets — all appear before us suffused with sentience. But what about the inside view, that mysterious light of awareness accessible only to our innermost selves? A friend of mine, a psychiatrist, once told me about one of his patients. This patient was delusional: he thought that he had a squirrel in his head...

...We can ask two types of questions. The first is rather foolish but I will spell it out here. How does that man’s brain produce an actual squirrel? How can neurons secrete the claws and the tail? Why doesn’t the squirrel show up on an MRI scan? Does the squirrel belong to a different, non-physical world that can’t be measured with scientific equipment? This line of thought is, of course, nonsensical. It has no answer because it is incoherent.

The second type of question goes something like this. How does that man’s brain process information so as to attribute a squirrel to his head? What brain regions are involved in the computations? What history led to that strange informational model? Is it entirely pathological or does it in fact do something useful?"


We can further see how Graziano has missed the point with this wide digression that he thinks is going to end with a clever conclusion. Whether or not the man has a squirrel in his head, there is something that it is like to be the man who believes in such a thing. This is the question we want answered. Knowing which neurons lead to this "strange informational model" is not a solution to the question of consciousness.

Of course Graziano doesn't seem to get this, and in his ignorance hits us with what should be a confession:

"So far, most brain-based theories of consciousness have focused on the first type of question. How do neurons produce a magic internal experience? How does the magic emerge from the neurons? The theory that I am proposing dispenses with all of that. It concerns itself instead with the second type of question: how, and for what survival advantage, does a brain attribute subjective experience to itself? This question is scientifically approachable, and the attention schema theory supplies the outlines of an answer."

So now he's not claiming to solve the Hard Problem. He apparently just wants to know why the brain attributes subjective experience to itself. IMO this is confusing at best and at worst nonsensical. The person possessing the brain has subjective experiences, there is no attribution there. The seeming is being when it comes to consciousness, otherwise there wouldn't be a Hard Problem at all.

How does a brain without consciousness attribute anything to itself?

For a while after the above, Graziano talks about how the brain ends up with a conscious experience: he wants to figure out how neurons lead up to consciousness (Arrow A), and how consciousness influences neurons (Arrow B). I think this leading-up and flowing-back may ultimately have problems, but I don't think it's a bad way to look at things. Sadly, Graziano then uses this concept of two Arrows to pull a magician's trick:

"Consciousness isn’t a non-physical feeling that emerges. Instead, dedicated systems in the brain compute information. Cognitive machinery can access that information, formulate it as speech, and then report it. When a brain reports that it is conscious, it is reporting specific information computed within it. It can, after all, only report the information available to it. In short, Arrow A and Arrow B remain squarely in the domain of signal-processing. There is no need for anything to be transmuted into ghost material, thought about, and then transmuted back to the world of cause and effect."

This reminds me of an old math joke called Proof by Lunch Time. You work your way up to the part of a proof that you can't solve in the morning. Then everyone breaks for lunch. When they come back, you keep going as if the part you couldn't solve had been worked out. To me this is what Graziano has done: pretending Arrow A (neurons producing consciousness) and Arrow B (neurons allowing us to report on conscious states) can solve the Hard Problem just by being juxtaposed in a linear time framework. Yet he never actually solved the problem of consciousness at all, AFAICT.

Beyond that, I dislike this language of "signal processing". The brain produces conscious states; the suggestion that something computational is happening seems like mistaking an abstraction for whatever is really going on. Again, see Searle's critique of "information processing".

"Some people might feel disturbed by the attention schema theory. It says that awareness is not something magical that emerges from the functioning of the brain. When you look at the colour blue, for example, your brain doesn’t generate a subjective experience of blue. Instead, it acts as a computational device..."

I feel disturbed because Graziano didn't solve the problem. And the brain is not a computational device, since computation is an attribution we make to certain systems. As Searle notes, there is no computation in physics - things happen, or don't happen, and we use that fact to make computers by attributing a set of states to 1s and the complement of that set to 0s.

"I admit that the theory does not feel satisfying; but a theory does not need to be satisfying to be true. And indeed, the theory might be able to explain a few other common myths that brains tell themselves. What about out-of-body experiences? The belief that awareness can emanate from a person’s eyes and touch someone else? That you can push on objects with your mind? That the soul lives on after the death of the body? One of the more interesting aspects of the attention schema theory is that it does not need to turn its back on such persistent beliefs. It might even explain their origin."

He didn't really solve the problem of subjective experience, but since he claims to explain away parapsychology I guess it's supposed to impress people.

"The heart of the theory, remember, is that awareness is a model of attention, like the general’s model of his army laid out on a map. The real army isn’t made of plastic, of course. It isn’t quite so small, and has rather more moving parts. In these respects, the model is totally unrealistic. And yet, without such simplifications, it would be impractical to use.

If awareness is a model of attention, how is it simplified? How is it inaccurate? Well, one easy way to keep track of attention is to give it a spatial structure — to treat it like a substance that flows from a source to a target."

Awareness isn't a model of attention, or at the least it's not clear what that means. Why would having subjective experiences by default "model" one's attention?

It seems to me one of the important points about one's individual consciousness is that it isn't extended in space. Awareness isn't simply 0-dimensional; it's non-spatial, or at the least without consciousness we have no experience of space. Not only is it hard to see why making a spatial model would lead to subjective experiences of the real world, the very idea that spatial representation gives rise to something arguably unextended in space suggests (to me anyway) that this isn't the best way to solve the Hard Problem.

"Many of our superstitions — our beliefs in souls and spirits and mental magic — might emerge naturally from the simplifications and shortcuts the brain takes when representing itself and its world. This is not to say that humans are necessarily trapped in a set of false beliefs. We are not forced by the built-in wiring of the brain to be superstitious, because there remains a distinction between intuition and intellectual belief. In the case of ventriloquism, you might have an unavoidable gut feeling that consciousness is emanating from the puppet’s head, but you can still understand that the puppet is in fact inanimate. We have the ability to rise above our immediate intuitions and predispositions."

Again, to me it seems he's hoping people won't consider how the original problem was swept under the rug. What's interesting is that he says we apparently have an "unavoidable gut feeling that consciousness is emanating from the puppet's head". But this isn't at all clear to me. Did Graziano do any psychological studies before deciding this was true? Did he interview people coming out of ventriloquist shows? Is this just intuition on his part, or what he calls an "intellectual belief"?

"To attribute awareness to oneself, to have that computational ability, is the first step towards attributing it to others. That, in turn, leads to a remarkable evolutionary transition to social intelligence. We live embedded in a matrix of perceived consciousness. Most people experience a world crowded with other minds, constantly thinking and feeling and choosing. We intuit what might be going on inside those other minds."


The brain is not a digital computer. And to attribute awareness to oneself, one would have to have some concept of awareness...at which point one is already aware. Again it seems to me he snuck in the very consciousness he wanted to explain.

"And so, whether or not the attention schema theory turns out to be the correct scientific formulation, a successful account of consciousness will have to tell us more than how brains become aware. It will also have to show us how awareness changes us, shapes our behaviour, interconnects us, and makes us human."

Finally, at the end, he says something immaterialists and materialists can plausibly agree on.
 