Integrated Information Discussion Thread

  • Thread starter Sciborg_S_Patel
OK - having chewed into IIT a very little, I do get the feeling that a deception is being played on me!

I mean, the description of a high-dimensional qualia space containing shapes that specify experience sounds plausible at first - the geometry of high-dimensional spaces is obviously rich - but the problem is that to actually have an experience, you have to somehow appreciate that shape (which is a separate process). Otherwise the wonderful shape in qualia space is no different from some wonderful structure created by my computer during its internal processing, and completely unobserved!

The deception for me begins with the assertion that "consciousness is integrated information", because information, integrated or otherwise, is just a mathematical measure.

Saying that Chalmers's hard problem is counterproductive is all very well, but it doesn't get round his point. I mean, suppose you produce a computer system with a certain amount of integrated information - are we to believe that that system really and literally has experiences? In a Gedanken sense, a simulation of the entire human brain would certainly have integrated information, and you would have to say that such a simulation is conscious if you believe this theory.

So let's do my usual trick and imagine a computer undergoing some kind of contemplation that requires no input, but generates an output O at its end. By assumption, it also undergoes experiences in the process. To be specific, imagine that the program was a simulation of the brain of a man remembering a past love affair from its exciting beginning to its bitter end. The output is a few sentences summing up his experience. We can formalise this as

P+C=>O

But that formalisation doesn't include the actual experience, which the computer has in addition.

Now consider that the above relation is actually rather like a mathematical theorem - it is true whether the program is run just once, or 100,000 times, or indeed zero times. Indeed, the physical computer is only really necessary at all because it can crunch through more states than we could using pencil and paper (cf. Searle's Chinese Room argument). That isn't really a paradox if the program runs as most do - without generating internal experiences - but what does it mean for a theorem (true for all of space-time) to generate experiences? The concept of the experience ends up detached from a specific point in time altogether!
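The point that the relation P+C=>O holds independently of how many times the program runs can be sketched in a few lines of Python. This is a toy stand-in, not any real brain simulation; the function name and constants are invented for illustration:

```python
def contemplate():
    """Toy stand-in for the input-free program P with contemplation C:
    fully deterministic, so its output O is fixed by the code alone."""
    state = 0
    for step in range(1000):          # the internal "contemplation"
        state = (state * 31 + step) % 97
    return f"O = {state}"

# Whether the program runs once, 100000 times, or (conceptually) zero
# times, the mapping from (no input) to O is the same - a theorem about
# the code, not an event tied to any particular execution.
runs = {contemplate() for _ in range(100)}
assert len(runs) == 1
```

Nothing in this formal input-output relation distinguishes a run that generates experiences from one that does not, which is exactly the gap being pointed at.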

David
 
OK - having chewed into IIT a very little, I do get the feeling that a deception is being played on me!

I mean, the description of a high-dimensional qualia space containing shapes that specify experience sounds plausible at first - the geometry of high-dimensional spaces is obviously rich - but the problem is that to actually have an experience, you have to somehow appreciate that shape (which is a separate process). Otherwise the wonderful shape in qualia space is no different from some wonderful structure created by my computer during its internal processing, and completely unobserved!

The deception for me begins with the assertion that "consciousness is integrated information", because information, integrated or otherwise, is just a mathematical measure.

Yes, this is correct, and I addressed this issue in my posts in this thread. Tononi is silent on the metaphysics, but from his writings I honestly believe he realizes this and chooses not to comment publicly. The way his book was written, and some of the mentions of how IIT could explain a pure-consciousness mystical experience, make me think this. But if we admit that fundamental quantum fields have the capacity for experience, and that the maximally irreducible conceptual structure of integrated information is experienced, then there is no problem.


David Bailey said:
Saying that Chalmers's hard problem is counterproductive is all very well, but it doesn't get round his point. I mean, suppose you produce a computer system with a certain amount of integrated information - are we to believe that that system really and literally has experiences? In a Gedanken sense, a simulation of the entire human brain would certainly have integrated information, and you would have to say that such a simulation is conscious if you believe this theory.

Be careful, Integrated Information Theory does not say that a simulation of the brain would result in experience. In fact they say it would not. A computer is not structured to give rise to experience. If you build a robot designed to maximize integrated information, and it is based on a non-linear system that incorporates quantum uncertainties (my qualification), then I think, yes, it would have actual experience.

David Bailey said:
So let's do my usual trick and imagine a computer undergoing some kind of contemplation that requires no input, but generates an output O at its end. By assumption, it also undergoes experiences in the process. To be specific, imagine that the program was a simulation of the brain of a man remembering a past love affair from its exciting beginning to its bitter end. The output is a few sentences summing up his experience. We can formalise this as

P+C=>O

But that formalisation doesn't include the actual experience, which the computer has in addition.

Now consider that the above relation is actually rather like a mathematical theorem - it is true whether the program is run just once, or 100,000 times, or indeed zero times. Indeed, the physical computer is only really necessary at all because it can crunch through more states than we could using pencil and paper (cf. Searle's Chinese Room argument). That isn't really a paradox if the program runs as most do - without generating internal experiences - but what does it mean for a theorem (true for all of space-time) to generate experiences? The concept of the experience ends up detached from a specific point in time altogether!

David

Conscious experience is the experience of the information. I argue that this arises through wavefunction collapse.
 
Neil - Have you read Stapp's chapter in Beyond Physicalism?

After reading your post, it seems I must return to it...
 
Be careful, Integrated Information Theory does not say that a simulation of the brain would result in experience. In fact they say it would not. A computer is not structured to give rise to experience. If you build a robot designed to maximize integrated information, and it is based on a non-linear system that incorporates quantum uncertainties (my qualification), then I think, yes, it would have actual experience.

So you don't accept consciousness=IIT without adding this extra condition to it?

David
 
Correct. Within the current classical physics frame, there is no way IIT would account for consciousness.

Do Tononi & Koch agree?
Be careful, Integrated Information Theory does not say that a simulation of the brain would result in experience. In fact they say it would not. A computer is not structured to give rise to experience. If you build a robot designed to maximize integrated information, and it is based on a non-linear system that incorporates quantum uncertainties (my qualification), then I think, yes, it would have actual experience.

Where does the difference arise? I mean, the computer could simulate the IIT-rich brain at a very low level (in principle). If the simulation has no experience, this means that the conscious brain cannot be simulated in principle (which is something that Penrose claims, and which I agree is likely to be true). That is what makes Penrose go looking for exotic physics that can't be simulated in principle!

What indeed would the simulation do - generate the same O without the experience? I somehow think another paradox could be made out of that situation.

Regarding panpsychism, I agree that different experiences can't be attached to QM-identical particles - protons, say - but where would experience be stored in the matter fields? In the Hamiltonian? I mean, at that level there doesn't seem to be anywhere to actually store much information (but QFT is getting over my horizon somewhat). I mean, Schrödinger's equation can in principle be applied to the whole earth (say) with all its humans full of integrated information - I assume the same could be said for QFT (it is only for practical reasons that you factor it out into, say, an isolated hydrogen atom). So where do you store the integrated information - in the wavefunctions? Also, what exactly has to be non-linear - surely you don't mean one of the QM equations?

David
 
Do Tononi & Koch agree?


Where does the difference arise? I mean, the computer could simulate the IIT-rich brain at a very low level (in principle). If the simulation has no experience, this means that the conscious brain cannot be simulated in principle (which is something that Penrose claims, and which I agree is likely to be true). That is what makes Penrose go looking for exotic physics that can't be simulated in principle!

What indeed would the simulation do - generate the same O without the experience? I somehow think another paradox could be made out of that situation.

Regarding panpsychism, I agree that different experiences can't be attached to QM-identical particles - protons, say - but where would experience be stored in the matter fields? In the Hamiltonian? I mean, at that level there doesn't seem to be anywhere to actually store much information (but QFT is getting over my horizon somewhat). I mean, Schrödinger's equation can in principle be applied to the whole earth (say) with all its humans full of integrated information - I assume the same could be said for QFT (it is only for practical reasons that you factor it out into, say, an isolated hydrogen atom). So where do you store the integrated information - in the wavefunctions? Also, what exactly has to be non-linear - surely you don't mean one of the QM equations?

David

Koch and Tononi would not agree with what I said, or rather they do not think what I said is the case.

If you mean simulating a brain on a feed-forward computer, then IIT is quite explicit that there would be no experience of that simulation. A simulation doesn't mean anything--it is the structure of the processing that matters. If I simulate a world like in a video game, that doesn't mean that a world exists.

There is a further problem that I think simulating a brain on a classical computer will be impossible just based on computational power alone.

The information being processed in the brain is stored in memory, but more fundamentally all quantum information describing what occurs is "stored" nonlocally.

Non-linearity in the brain comes from indeterminacy at the synaptic level, with uncertainty in calcium-ion exchange. This uncertainty is amplified up to macro brain patterns.
 
I should clarify that when I talk of quantum uncertainties, I mean uncertainties that are intrinsic to the system, and therefore amenable to downward causation.

For example, one could say that a sensitive photodiode could be made to involve quantum uncertainties, since if it can detect a single photon, then there would be uncertainty about whether or not the photon hit the photodiode, and this would be amplified up to macro states of the photodiode complex. However, this uncertainty originates outside the system and cannot be controlled via downward causation by the system itself.

The difference with a brain is that the quantum uncertainties arise within the brain at the synaptic level, allowing for a profusion of potential patterns of electrical activity that can then be acted upon by an emergent consciousness, giving it intrinsic causal power.
 
Koch and Tononi would not agree with what I said, or rather they do not think what I said is the case.
So it is interesting that the authors of this idea were happy with consciousness=IIT in a purely classical way.
If you mean simulating a brain on a feed-forward computer, then IIT is quite explicit that there would be no experience of that simulation. A simulation doesn't mean anything--it is the structure of the processing that matters. If I simulate a world like in a video game, that doesn't mean that a world exists.
Exactly, the simulation is distinct from the physical reality. However in that case, it is easy to see what is missing in the simulation. The video game doesn't involve really killing people (or whatever), it simply renders scenes based on crude physics.

However, once you postulate that consciousness=information (of a particular kind) it is far less obvious what the simulation lacks.

I am still trying to get to grips with the concept of Integrated Information, but it seems to require the idea of breaking the system into its parts. Now clearly, if you think of a computer system, breaking it into parts will depend critically on what level you do it at. If you break it into parts at the level of chips (or sections of chips) the integrated information might be low, but if you break it apart at the level of software structures, the parts would be very different and the integrated information would be higher. For example, in a linked list, two elements might be considered adjacent at the level of the software, but be stored at widely different locations in the chips.
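The level-dependence of "breaking the system into parts" can be seen even with a crude integration proxy. To be clear, this is not Tononi's Φ (which uses cause-effect repertoires and the minimum-information partition); plain mutual information across a two-way cut of a toy two-unit system stands in for it here, with the distributions invented for illustration:

```python
from math import log2

# Joint distribution over the states of two units (A, B).
# Correlated units: a cut between them discards information.
joint = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}

def marginal(dist, idx):
    """Marginal distribution of one unit of the joint system."""
    m = {}
    for state, p in dist.items():
        m[state[idx]] = m.get(state[idx], 0.0) + p
    return m

def integration(dist):
    """Mutual information I(A;B): how far the joint distribution is
    from the product of its two parts under this particular cut."""
    pa, pb = marginal(dist, 0), marginal(dist, 1)
    return sum(p * log2(p / (pa[a] * pb[b]))
               for (a, b), p in dist.items() if p > 0)

independent = {(a, b): 0.25 for a in (0, 1) for b in (0, 1)}
print(integration(joint))        # positive: the cut loses information
print(integration(independent))  # 0: the "parts" already tell the whole story
```

Carving the same physical system up at a different level (different units, different cut) yields a different number, which is exactly the sensitivity to partitioning noted above.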

I disagree with the conclusion, but he raises an excellent point: one cannot derive the experiential from the non-experiential. It can never be an emergent property or epiphenomenal. It is a category error.
Exactly, and the statement consciousness=IIT is a category error.
So does that mean that everything is experiential? Rather than panpsychism, this is panexperientialism, which I think is at least more coherent than panpsychism (panpsychism attributes mental qualities to everything; it can also be described as "everything is conscious", but that starts to blur the line between the two terms, and I think it is more appropriately called panexperientialism).
How does panexperientialism fare better when it comes to the objection that electrons (and other fundamental particles) are QM-identical? I mean, you can't have a happy electron here and a sad electron over there, because the wave function of the joint system will only change sign (with no observable effects) if you swap them over!

David
 
I disagree with the conclusion, but he raises an excellent point: one cannot derive the experiential from the non-experiential. It can never be an emergent property or epiphenomenal. It is a category error.

Here's Tononi on the question:

For a macro-level to beat a micro-level, despite the much larger number of states that are available to the micro-level, some features are especially important: i) the presence of some degree of indeterminacy at the micro-level (due to intrinsic noise or to perturbations from the environment); ii) many-to-one mapping, such that many input states can produce the same output state, giving rise to irreversibility; iii) macro-mechanisms structured in such a way that they group noisy micro-states together in an advantageous manner; iv) the fact that, from the intrinsic perspective of the macro-system, all possible perturbations (i.e. counterfactuals) must be conceived as applied to macro-states. This means that the actual distribution of micro-states underlying the macro-level distribution will be different from their micro-level maximum entropy distribution, thus accounting for emergence without violating supervenience. In summary, the level at which ‘things’ really exist in and of themselves, i.e. from the intrinsic perspective, in both space and time, is the level at which maxΦMIP is maximized – that is, the level at which ‘causal power’ is maximal. In other words, what really exists (and excludes any other level) is what makes the most difference – and this level is not necessarily the microlevel as is often assumed in reductionist accounts.

(Integrated information theory of consciousness: an updated account)
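Tononi's points (i)-(iii) - micro-level indeterminacy plus a many-to-one grouping of noisy micro-states - can be illustrated with a toy dynamics in which the micro transitions are random but the coarse-grained macro transitions are fully deterministic. The states and grouping here are invented for illustration, in the flavor of the causal-emergence examples Tononi and Hoel discuss:

```python
import random

# Four micro-states grouped many-to-one into two macro-states.
MACRO = {0: "off", 1: "off", 2: "on", 3: "on"}

def micro_step(s):
    """Noisy micro dynamics (point i): the successor is drawn at random
    from within the successor group, so no micro transition is fixed."""
    if s in (0, 1):                      # "off" group maps into "on" group
        return random.choice((2, 3))
    return random.choice((0, 1))         # "on" group maps into "off" group

# The grouping (points ii-iii) absorbs the micro noise: at the macro
# level the dynamics are deterministic - "off" -> "on" -> "off" -> ...
for s in range(4):
    successors = {MACRO[micro_step(s)] for _ in range(200)}
    expected = "on" if MACRO[s] == "off" else "off"
    assert successors == {expected}
```

The macro description is perfectly predictable while the micro one is not, so the macro level can "make the most difference" without violating supervenience, as the quoted passage argues.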
 
IIT doesn't consider consciousness to be emergent. Rather it is a fundamental property.

I know he says that, but there are also many examples that say it is emergent within IIT:


Integrated Information Theory 3.0 said:
At the system level, the integration postulate says that only conceptual structures that are integrated can give rise to consciousness. (pg11)

The exclusion postulate at the level of systems of mechanisms says that only a conceptual structure that is maximally irreducible can give rise to consciousness (pg13)

An experience (i.e. consciousness) is thus an intrinsic property of a complex of elements in a state: how they constrain – in a compositional manner – its space of possibilities, in the past and in the future. (pg 14)

once a phenomenological analysis of the essential properties (axioms) of consciousness has been translated into a set of postulates that the physical mechanisms generating consciousness must satisfy (pg 15)

even if all the neurons in a main complex were inactive (or active at a low baseline rate), they would still generate consciousness (pg 17)

To me, it makes sense in how I interpret it, but I do not understand how it can make sense in a classical ontology.
 

Thanks for this! I have not read this paper so I am excited to read the latest.

With respect to the quoted portion, it just doesn't make sense in classical mechanics. I don't care if there is noise, because that noise is fixed, and that is what creates the macrostate. There is also no way to get intrinsic holism in classical mechanics. A computer, for example, can supervene on lower levels within itself, but this is not irreducible, since it is particular arrangements of signals found in codes that supervene on the lower levels. Or another common example is that of a wheel, where the wheel "causes" the particles within it to move in a certain way. This type of supervenience is not downward causation. It is not the same as a maximally irreducible conceptual structure that is causally effective on the state of the brain.

I have no problem with the concept of downward causation and a holistic maximally irreducible conceptual structure within a von Neumann ontology, though, since it does not adhere to causal closure of the physical. All other quantum interpretations adhere to causal closure of the physical, which makes any consciousness causally inert. Within classical mechanics, the brain is only able to be "causal" in the sense that a computer or a machine is causal, but that is not what IIT is suggesting, since it is saying a maximally irreducible conceptual structure is what is causal. Within classical mechanics, you may have epistemic irreducibility, but not ontological irreducibility, and downward causation is forbidden.
 
So it is interesting that the authors of this idea were happy with consciousness=IIT in a purely classical way.

Exactly, the simulation is distinct from the physical reality. However in that case, it is easy to see what is missing in the simulation. The video game doesn't involve really killing people (or whatever), it simply renders scenes based on crude physics.

However, once you postulate that consciousness=information (of a particular kind) it is far less obvious what the simulation lacks.

What is missing is structures that actually process information in a way that integrates information. A current computer cannot do this, and that is exactly why a simulation of it is a simulation. If you build a computer that is actually structured to process information properly, you are no longer simulating, you are recreating.

I am still trying to get to grips with the concept of Integrated Information, but it seems to require the idea of breaking the system into its parts. Now clearly, if you think of a computer system, breaking it into parts will depend critically on what level you do it at. If you break it into parts at the level of chips (or sections of chips) the integrated information might be low, but if you break it apart at the level of software structures, the parts would be very different and the integrated information would be higher. For example, in a linked list, two elements might be considered adjacent at the level of the software, but be stored at widely different locations in the chips.

There is no integrated information at higher levels in a computer.


Exactly, and the statement consciousness=IIT is a category error.

Good point. That is true for Integrated Information Theory within a classical ontology, and is why I think the von Neumann interpretation is needed along with admitting to experiential potential of the fundamental unified field.

How does panexperientialism fare better when it comes to the objection that electrons (and other fundamental particles) are QM-identical? I mean, you can't have a happy electron here and a sad electron over there, because the wave function of the joint system will only change sign (with no observable effects) if you swap them over!

David

Assigning attributes to electrons such as sad or happy is exactly some sort of panpsychism, where mental qualities are said to exist everywhere. A pure content-less experience is at least plausible for particles, whereas attributing mental qualities to particles is incoherent. But in the end, I don't think it makes sense considering some of the things I mentioned such as the problem of superposition and Special Relativity Theory.

However, I should point out the position of one of the Skeptiko guests, Alexander Wendt. He thinks that wavefunction collapse is experience, and this is what goes into his version of panpsychism. There are technical details required for this; for example, he supports an objective collapse model of quantum theory. I would say that this is at least somewhat a form of panexperientialism rather than panpsychism, though even the "pan" prefix I do not find exactly accurate, since experience would not occur literally everywhere, even in an objective collapse model. But within the model itself, experience occurs all over in non-living things - essentially with any significant interaction of particles - so the term panexperientialism at least makes sense. To attribute mental qualities (mental meaning of the mind) to these particle interactions is incoherent (this is not what Wendt is suggesting) unless you support Cartesian dualism. In my opinion, this is a specter still left over from Descartes, where western philosophy and science can't seem to get past the idea that anything related to consciousness or experience must be of the mind, with this mind seen as some sort of non-material entity.
 
Assigning attributes to electrons such as sad or happy is exactly some sort of panpsychism, where mental qualities are said to exist everywhere. A pure content-less experience is at least plausible for particles, whereas attributing mental qualities to particles is incoherent.
I am not sure I get that distinction.
But in the end, I don't think it makes sense considering some of the things I mentioned such as the problem of superposition and Special Relativity Theory.

However, I should point out the position of one of the Skeptiko guests, Alexander Wendt. He thinks that wavefunction collapse is experience, and this is what goes into his version of panpsychism. There are technical details required for this; for example, he supports an objective collapse model of quantum theory. I would say that this is at least somewhat a form of panexperientialism rather than panpsychism, though even the "pan" prefix I do not find exactly accurate, since experience would not occur literally everywhere, even in an objective collapse model. But within the model itself, experience occurs all over in non-living things - essentially with any significant interaction of particles - so the term panexperientialism at least makes sense. To attribute mental qualities (mental meaning of the mind) to these particle interactions is incoherent (this is not what Wendt is suggesting) unless you support Cartesian dualism. In my opinion, this is a specter still left over from Descartes, where western philosophy and science can't seem to get past the idea that anything related to consciousness or experience must be of the mind, with this mind seen as some sort of non-material entity.
Stapp has for many years, promoted the idea that consciousness imposes itself on matter via the frequency of wavefunction collapses - a high frequency of observation effectively locks a system in a given eigenstate. I notice that recently he has also embraced the simpler idea that consciousness might in certain circumstances bias the wavefunction collapse.

I think Dualism gets a bad press because it clearly can't be the ultimate theory. However, in other areas of physics people are not so picky - they happily use QM and GR, knowing they are incompatible. When you think about the sweep of physics over time, it is obvious that certain theories - such as Newtonian gravity - have been hugely valuable, even though technically they are wrong. If Newton had invented GR, it would probably have flopped because nobody would have understood it! My feeling is that describing reality using Dualism makes a hell of a lot more sense than trying to fit it into the standard materialist model. It may well be that Dualism ultimately resolves into Idealism, but Idealism is a much vaguer theory to cope with right now.
There is no integrated information at higher levels in a computer.
Why not? A software structure may contain a lot of information about an object, organised in a way that is useful in one way or another. Think of a program that reads the 10^6 photodiodes in the article, and uses them to find faces or specific astronomical objects, or whatever. It seems to me that you conclude that there is no integrated information in a computer because you look at the wrong level - the level where data is stored at numbered memory addresses on chips.

David
 
I am not sure I get that distinction.

I mean that mind or mental qualities are not necessary for experience.

David Bailey said:
Stapp has for many years, promoted the idea that consciousness imposes itself on matter via the frequency of wavefunction collapses - a high frequency of observation effectively locks a system in a given eigenstate. I notice that recently he has also embraced the simpler idea that consciousness might in certain circumstances bias the wavefunction collapse.

He has not given up the probing actions that act via the quantum Zeno effect. The biasing of the probability distribution is something different, and he has proposed that it may be involved in presentiment.

David Bailey said:
I think Dualism gets a bad press because it clearly can't be the ultimate theory. However, in other areas of physics people are not so picky - they happily use QM and GR, knowing they are incompatible. When you think about the sweep of physics over time, it is obvious that certain theories - such as Newtonian Gravity - have been hugely valuable, even though technically they are wrong. If Newton had invented GR, it would probably have flopped because nobody would have understood it! My feeling is that describing reality using Dualism makes a hell of a lot more sense than trying to fit it into the standard materialist model. It may well be that ultimately that Dualism resolves into Idealism, but Idealism is a much vaguer theory to cope with right now.

Stapp's dualism isn't a substance dualism like Descartes's. The dualism is between conscious experiences and indeterminate physical states.

David Bailey said:
Why not? A software structure may contain a lot of information about an object, organised in a way that is useful in one way or another. Think of a program that reads the 10^6 photodiodes in the article, and uses them to find faces or specific astronomical objects, or whatever. It seems to me that you conclude that there is no integrated information in a computer because you look at the wrong level - the level where data is stored at numbered memory addresses on chips.

David

Because the information isn't integrated. Computers operate in a feed-forward style. Integrated Information Theory explicitly says that this cannot integrate information into a conscious experience.
 
I mean that mind or mental qualities are not necessary for experience.



He has not given up the probing actions that act via the quantum Zeno effect. The biasing of the probability distribution is something different, and he has proposed that it may be involved in presentiment.
Well he seems to leave the option open. I also think that presentiment may be fundamental. It might explain how consciousness 'knows' which QM button to press - follow the consequences of various options for a short time, and use that to make a decision.
Stapp's dualism isn't a substance dualism like Descartes's. The dualism is between conscious experiences and indeterminate physical states.
Well, I guess I was talking of Cartesian Dualism - nothing to do with Stapp - which seems no more discredited than the combination of QM and GR. Theories can be useful for a certain period of time in science, and even after they become superseded, they may still be useful in practice - think of Newton's laws. My feeling is that if science embraced Cartesian Dualism for the time being, it might make enormous progress, because it would have a theoretical framework in which to understand much of what gets discussed on this forum.
Because the information isn't integrated. Computers operate in a feed-forward style. Integrated Information Theory explicitly says that this cannot integrate information into a conscious experience.
There can be plenty of feedback loops at the software level - it seems to me that it depends at what level you choose to look for sub-components.
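The software-versus-hardware distinction can be made concrete with a sketch (an invented example, not anything from the IIT papers). The first function is a pure feed-forward pipeline, the kind of structure IIT assigns zero Φ; the second has a software-level feedback loop, where past output constrains future state - though IIT's reply would presumably be that what matters is the causal structure of the physical substrate, not the program level:

```python
def feed_forward(x):
    """Pure pipeline: data flows one way and nothing feeds back."""
    hidden = 2 * x + 1
    return hidden * hidden

class RecurrentUnit:
    """Software-level feedback: the state at step t constrains step t+1."""
    def __init__(self):
        self.state = 0

    def step(self, x):
        self.state = (self.state + x) % 10   # previous output feeds back in
        return self.state

# feed_forward is history-free; the recurrent unit is history-dependent.
unit = RecurrentUnit()
outputs = [unit.step(3) for _ in range(4)]
print(outputs)  # [3, 6, 9, 2]
```

Whether such program-level loops count as "integration" or only the physical wiring does is precisely the level question at issue here.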

David
 
Well he seems to leave the option open. I also think that presentiment may be fundamental. It might explain how consciousness 'knows' which QM button to press - follow the consequences of various options for a short time, and use that to make a decision.

Well, I guess I was talking of Cartesian Dualism - nothing to do with Stapp - which seems no more discredited than the combination of QM and GR. Theories can be useful for a certain period of time in science, and even after they become superseded, they may still be useful in practice - think of Newton's laws. My feeling is that if science embraced Cartesian Dualism for the time being, it might make enormous progress, because it would have a theoretical framework in which to understand much of what gets discussed on this forum.

There can be plenty of feedback loops at the software level - it seems to me that it depends at what level you choose to look for sub-components.

David

If science embraced dualism, I think it would really set us back, because that would maintain the classical ontology and prevent us from learning how any of it would work. In Cartesian Dualism, mind is an entirely separate substance not open to scientific investigation. If we embrace a quantum ontology such as the von Neumann interpretation, we have a mathematically formalized theory of how mind and matter interact.

We will never understand consciousness until we get beyond classical mechanics, since classical mechanics excludes in principle anything like consciousness. Classical mechanics is what is responsible for the utterly ridiculous things that have been said in philosophy of mind and science, like Dennett basically denying that conscious experience exists. To credit Dennett, this is true and logically entailed within the framework of classical mechanics. So what he said is true in that sense, but in the larger picture it is beyond ridiculous.

But regarding Integrated Information Theory, as defined, computers that we have today would have basically zero phi. The levels in IIT are about levels of spatiotemporal local maxima of integrated information, and although there is feedback, it is not occurring in a manner that produces integrated information.
 