Against Australian Zombies

I've heard Smolin several times, but I don't know if I understand his position well enough to comment, and who really knows anyway.

I do think that presentiment experiments indicate time is different from space. Telepathy seems to work across any distance, but presentiment gets worse over longer time intervals. Does that make time more fundamental, as Smolin thinks? I don't know.

I'm trying to find the quote, but I recall him once saying there was no place for the present moment in physics. I believe Einstein made similar claims.

But if we accept the Present as real, then Stapp/VN gives us a way to bring the Now (and to some extent consciousness) into physics.
 

Oh I see. I think the present exists and explains the flow of time. The conception of a block universe only works with the VNI in my opinion.
 

Can you explain this? Stapp/VN seems to be a rejection of the Block Universe?

Or, at the least, it would be as Arvan suggests in the Peer-to-Peer Hypothesis - the Multiverse is like a set of possible future tracks, but the consciousness of each individual ["summed" together] sets the reality. It's actually a rather interesting solution to the measurement problem (IIRC).

edit: Here's what he says in A Unified Explanation of Quantum Phenomena? The Case for the Peer-to-Peer Simulation Hypothesis as an Interdisciplinary Research Program:

A P2P Simulation just is:

a. A superposition of different representational states, such that

b. Any particular measurement within the simulation will result in a determinate measured location on any individual computer taking a measurement.

But this is, functionally speaking, precisely what quantum superposition and wave-function collapse are in our world. Objects in our world exist in superposition, except that whenever they are measured, the measurement will result in a single determinate value. The P2P Hypothesis thus explains quantum superposition and wave-function collapse.
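
Here's a very loose toy sketch of how I picture that, just to check my own understanding (this is my own Python doodle, not Arvan's formalism; the peer names, state, and sampling scheme are all made up for illustration). Each peer carries the same superposed representation of a property, and any act of measurement on a given peer returns one determinate value. I've left out the part where the peers' outcomes get reconciled with one another, which is where the real content of the hypothesis would lie.

import random

# Toy sketch (my own, not from the paper): every peer shares the same superposed
# representation of a property (here, a position), given as amplitudes over the
# possible values; a measurement on any one peer samples a single determinate
# outcome with Born-rule-like weights.
superposed_state = {0: 0.6 ** 0.5, 1: 0.4 ** 0.5}  # amplitude per possible position

def measure(peer_id, state):
    values = list(state.keys())
    weights = [abs(a) ** 2 for a in state.values()]
    outcome = random.choices(values, weights=weights)[0]
    return peer_id, outcome  # a determinate value on the measuring computer

peers = ["peer_A", "peer_B", "peer_C"]
print([measure(p, superposed_state) for p in peers])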
 


There is a section of Mindful Universe that talks about this, which I can track down for you when I get home later. But yeah, it goes against the block universe. He uses Whitehead's ontology within a relativistic quantum field theory. It is really elegant!
 

Oh, no worries, I have the book and know which section you're talking about.

I was thinking the whole thing is really elegant and a good form of "Liberal Naturalism," in that once you have consciousness among the fundamentals you can explain a lot of puzzling things about reality.

Naturally, elegance isn't necessarily an indicator of truth... but since none of the interpretations has knock-out proof, it's at least worth further investigation. Not to mention there are at least two interpretations that seem really poor (though I may not fully understand them):

Objective Collapse - if something "just happens" that seems like an absence of explanation.

Multiverse - Smolin's critique is succinct but biting:

'The notion that our universe is part of a vast or infinite multiverse is popular—and understandably so, because it is based on a methodological error that is easy to fall into. Our current theories can work at the level of the universe only if our universe is a subsystem of a larger system.

So we invent a fictional environment and fill it with other universes. This cannot lead to any real scientific progress, because we cannot confirm or falsify any hypothesis about universes causally disconnected from our own.'

-Time Reborn
 
On the observer fixing reality, Smolin seems to be going this route as well? Though "observer" here doesn't seem to mean a conscious entity, which I believe is Penrose's take as well?

Precedence and freedom in quantum physics

Principle of precedence: When a quantum process terminating in a measurement has many precedents, which are instances where an identically prepared system was subject to the same measurement in the past, the outcome of the present measurement is determined by picking randomly from the ensemble of precedents of that measurement. We give now a brief sketch of this novel interpretation of quantum mechanics, by giving more precise statements of these postulates.

One might however, still ask how a system knows what its precedents are? This is like asking how an elementary particle knows which laws of nature apply to it. The postulate that general timeless laws act on systems as they evolve in time requires a certain set of metaphysical presuppositions. The hypothesis given here, that instead systems evolve by copying the responses of precedents in their past, requires a different set of metaphysical presuppositions. Either set of presuppositions can appear strange, or natural, depending on one’s metaphysical preconceptions. The only scientific question is which sets of metaphysical preconceptions lead to hypotheses which are confirmed by experiment.

Finally, some readers will ask whether there are any implications for whether human beings or animals have freedom to make choices not completely determined by the past. This might arise by the generation of novel entangled states in neural processes. Of course, allowing the possibility for novelty is not sufficient. What would be necessary to realize the idea would be to discover that the outcomes of neural processes are influenced by quantum dynamics of large molecules with entangled states, so that the lack of determinism of quantum processes is reflected in human choices and actions. This could very easily fail to be the case. Resolving this kind of question remains a goal for the distant future.

It seems one of the other quantum-consciousness theories could potentially fill the role here? I know there's one hypothesis that a certain species of bird utilizes entanglement for navigation?

I'd be curious to see if Smolin has read Rosenberg's arguments for consciousness as the carrier of causality, because maybe we can just bypass some of this worry regarding entangled states in the brain?
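
Going back to the "picking randomly from the ensemble of precedents" bit, here's a little toy sketch of how I picture it working (my own paraphrase in Python, not Smolin's formalism; the no-precedent branch is just my guess at where novelty/"freedom" would have to enter):

import random

# Toy paraphrase of the principle of precedence (my reading, not Smolin's math):
# a measurement on a system with many identically prepared precedents just
# reproduces the statistics of those precedents; a genuinely novel preparation
# has nothing to copy, which is where new outcomes would have to come from.
precedents = {
    ("prepared +x", "measure z"): ["up", "down", "up", "down", "up"],
}

def measure(preparation, measurement):
    history = precedents.get((preparation, measurement))
    if history:  # many precedents: copy a past response at random
        return random.choice(history)
    return "no precedent - novel outcome"  # placeholder for genuine novelty

print(measure("prepared +x", "measure z"))
print(measure("novel entangled preparation", "measure z"))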
 
As an addendum to the above, here are three papers:

Stapp's QUANTUM THEORY AND THE ROLE OF MIND IN NATURE

Mohrhoff's criticism of Stapp's paper

Stapp's reply

All to say, Stapp's arguments aren't free from criticism (even from an apparent Idealist like Mohrhoff), but nevertheless I think we should continue to keep them in the set of living hypotheses, as they aren't completely defeated either.

AFAICT this is the latest debate with Stapp on the QZE:

Mind Efforts, Quantum Zeno Effect and Environmental Decoherence

In this article we present the mathematical formalism of Quantum Zeno Effect (QZE) and explain how the QZE arises from suppression of coherent evolution due to external strong probing action. Then we prove that the model advocated by Henry Stapp, in which mind efforts are able to exert QZE upon brain states, is not robust against environmental decoherence if the mind is supposed not to violate the Born rule and to act only locally at the brain. The only way for Stapp to patch his proposal would require postulation of mind efforts acting globally on the brain and the entangled environment, which would be regression to a theory predicting paranormal Psi effects. Our arguments, taken together with the lack of scientifically valid confirmation of paranormal Psi effects such as telekinesis, imply that Stapp’s model does not have the potential to assist neuroscientists in resolving the mind-brain puzzle.

Reply to a Critic: Mind Efforts, Quantum Zeno Effect and Environmental Decoherence

The original Copenhagen interpretation of quantum mechanics was offered as a pragmatic methodology for making predictions about future experiences on the basis of knowledge gleaned from past experiences. It was, therefore, fundamentally about mental realities, and refrained from speaking about a more inclusive reality. Von Neumann created, later, what is called the orthodox formulation of quantum mechanics. It incorporates all of the Copenhagen-based predictions about connections between experiences into a rationally coherent conception of a dynamically integrated psychophysical reality. Von Neumann’s formulation allows the same laws and concepts that are used to make predictions about atomic phenomena to account for the capacity of our mental intentions to influence our bodily actions. Danko Georgiev claims to have found logical flaws in my use of von Neumann’s theory to explain this causal effectiveness of our mental intentions. The bulk of Georgiev’s paper gives a detailed discussion of a system with just two base states, up and down. This is not an adequate model of the pertinent physical system, a human brain. Georgiev’s attempt to relate his two-state work to the case at hand is flawed by statements such as “…in von Neumann’s formulation there are no such things as minds, spirits, ghosts, or souls, …” Von Neumann’s formulation certainly does involve minds. Georgiev’s choice of words seems designed to suggest that I am introducing mental qualities and assumptions that go beyond what are already parts of von Neumann’s theory. I explain here why these allegations, and all his other allegations of errors, are incorrect.
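
For anyone who wants the bare-bones math behind the QZE these two papers are arguing over, my understanding of the standard textbook version (not Georgiev's or Stapp's specific model) is: starting from state |\psi\rangle, free evolution gives a short-time survival probability

P(\tau) \approx 1 - \frac{(\Delta H)^2 \tau^2}{\hbar^2}, \qquad (\Delta H)^2 = \langle\psi|H^2|\psi\rangle - \langle\psi|H|\psi\rangle^2 ,

so probing the system N times over a total time t (i.e. \tau = t/N) gives

P(t) \approx \left[1 - \frac{(\Delta H)^2 (t/N)^2}{\hbar^2}\right]^N \to 1 \quad \text{as } N \to \infty .

Frequent enough probing freezes the evolution; the dispute above is over whether "mental effort" can supply that probing action for brain states once the brain is entangled with its environment.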
 
Haven’t been posting much but I have continued to think and read up on all this. I’ve even started trying my hand at translating the math, though I’m not ready to post on that yet!


I’ve been really trying to figure out these Von Neumann chains, what processes 1 and 2 are supposed to actually represent, and to understand Stapp’s take on it. Stapp certainly doesn’t make it easy since he refers to what VN wrote but often uses different terms - including framing the math differently. I don’t mind him putting his own spin on things, I just wish he’d clearly indicate when he’s representing VN’s position vs. his own extrapolations!
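
Before I go further, here's how I currently understand the two processes in von Neumann's formulation, just for reference (these are the standard statements, though the notation here is mine rather than a quote from the book):

Process 2 is the "automatic" one: continuous, deterministic, unitary Schrödinger evolution,

i\hbar \frac{\partial}{\partial t}|\psi\rangle = H|\psi\rangle \quad \text{(equivalently } \rho \to U\rho U^{\dagger}\text{)} .

Process 1 is the one tied to measurement: a non-unitary reduction

\rho \;\to\; \sum_n P_n \rho P_n , \qquad \text{outcome } n \text{ with probability } \mathrm{Tr}(P_n \rho) ,

where the P_n are the projectors corresponding to the question being put to the system. Stapp's gloss, as I read him, is that choosing which set of P_n to apply (and when) is where the observer's "free choice" enters, while which n nature returns is the statistical part.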


Anyway, getting back to it.


Neil: when we've talked about VN chains, I got the sense that you are envisioning a chain with potentially ever-expanding links, with potential intervening events, so long as it ends up in an observation. For example, I think you and Paul were talking about a hypothetical scenario where different elements are added after the measurement, such as ringing a bell (or not ringing a bell) which is perceived (or not perceived) by the observer, leading to a collapse of the wavefunction.


Please correct me if I’m wrong on that. In any event, I’ll set out how I interpret it so far:


The way I'm reading it, the chain as VN sets it out really doesn't go beyond I (the system being measured), II (the measuring apparatus) and III (the observer).


VN describes how his chain can be expanded. Paraphrasing: we can just say we observed x. We can go further and say that we took in wavelengths of light through our eyes and then observed. Or go further still, into the chemical processes involved, and then observed. And he hypothesizes that perhaps later discoveries would allow us to break the process down even more, and each time we have to end with: "and is observed by the observer."


But notice that when he expands the chain he's not really adding new events in addition to I, II, and III. When VN expands the chain, what he is setting out is the nested way we can describe the process of observing. In other words: the length of the "chain" doesn't vary, only the level of detail in which we describe it. It's a string of nested descriptions – but each is going on every time we observe – there are no added steps. Note that I don't think VN himself used the term "chain" (which does imply, I think, that links can be added) – I think it's a bit of a misnomer.
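
Writing that out in the usual notation (my own illustrative sketch, not a quote from VN): expanding the chain just means tracking the same single measurement interaction through more and more intermediate degrees of freedom, with Process 2 correlating each new stage, e.g.

\sum_n c_n |s_n\rangle \;\to\; \sum_n c_n |s_n\rangle|M_n\rangle \;\to\; \sum_n c_n |s_n\rangle|M_n\rangle|\mathrm{eye}_n\rangle \;\to\; \sum_n c_n |s_n\rangle|M_n\rangle|\mathrm{eye}_n\rangle|\mathrm{brain}_n\rangle \;\to\; \dots

Nothing new happens at each arrow beyond another stage getting correlated in, and the predicted statistics for the final observation come out the same no matter where along this string you choose to apply Process 1, which is why the cut (i.e. the level of descriptive detail) can be moved without changing anything empirical. That's the sense in which I mean nested description rather than added events.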


I think part of the problem is that he frames it as ending with "and is observed by the observer". This suggests that the observation takes place at the end of the chain. But taking an IIT approach I think there is overlap: at some point the observation and the process described in the chain become the same. That is: the wavelengths of light must go to our eyes before the observation happens, and some processing may go on before we have the conscious observation, but at some point the process being described has the phenomenological property that we call observation. In other words, the chain can be described as: information flows from the measuring apparatus to the eye, into the brain; certain electrochemical processes take place that result in the information being integrated into the system. That integration of information has the phenomenological property of a conscious observation. The observation is the other side of the information-processing coin: but I don't think we can say, in this model, that the observation is a final step (or if we do, it's a shared final step).

I've got to read it a bit closer, but if I understand correctly, VN describes the observation of the measurement as something that has to happen right at the time the measurement takes place, otherwise the wave function will continue to evolve and we will lose that information (I'm still working on deciphering this bit though, so I may be off here). Recall that he's writing at a time when the results of measurement were not recorded by the measuring apparatus. You had to look at the pointer at that time; you had to look at the thermometer at that time.

This highlights the point that we should be clear on what we are observing. We are NOT observing the object being measured (i.e., the particle). As VN explains, the reason we need to combine I (the system being observed – S), II (the measuring apparatus – M) and III (the observing system) is that we can't observe I on its own. We can only do it in conjunction with II (S + M). That is what Process 1 is describing, as I understand it – the intersection of those two systems.
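
Concretely, the way I'm picturing that last point: after Process 2 has correlated S and M, the combined state is \sum_n c_n |s_n\rangle|M_n\rangle, and Process 1 acts on that combined I + II system with projectors P_n = |s_n\rangle\langle s_n| \otimes |M_n\rangle\langle M_n|, giving pointer reading n with probability |c_n|^2. So what we actually look at is always the apparatus side of the combined system, never I on its own (again, my own sketch of the formalism, so correct me if I've garbled it).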


Ok, have to break here – will continue later.
 

Well, we know one of the IIT authors, Christof Koch, is a panpsychist in that he believes there's potential for conscious experience in all matter.

So the "Bing!" moment of IIT that would collapse the wave function would then be a conscious experience. From a Neutral Monist or Panpsychic point of view that would still suggest the collapse is due to consciousness?

OTOH, there does seem to be a potential problem in that the bits of brain serving as carriers of information would be in superposition when you move the Heisenberg Cut up to the highest possible level?
 
Criticism of Decoherence:

Our Non-Deterministic Reality Is Neither Digital Nor Analog: Experimental Tests Can Show That Decoherence Fails to Resolve the Measurement Problem

Is reality best described in digital or analog terms? In proper context, we are asking: what type of math is best for that purpose? However, I argue that our universe is genuinely non-deterministic, as conventional notions of quantum mechanics imply. Since mathematics is by nature deterministic, reality is not fully describable by any true mathematical model. The best answer to the original question is then, “neither – reality transcends mathematics.” It is argued that some popular attempts to avoid the quantum measurement problem, such as the decoherence interpretation, are flawed. The logical case for DI is marred by the circular argument at its core. More importantly: some experiments are described, which could falsify the DI. If successful, they would show that we can recover superpositions supposedly lost to decoherence. Hence, our finding definitive experimental outcomes instead of superposed results is not due to the effects of decoherence. Those definite, exclusionary results show a genuinely indeterminate character of the universe.
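
For context on what the author means by superpositions "supposedly lost to decoherence," the standard decoherence story (as I understand it, stated independently of this paper's argument) is that coupling the system to the environment takes

\Big(\sum_n c_n |s_n\rangle\Big)|E_0\rangle \;\to\; \sum_n c_n |s_n\rangle|E_n\rangle ,

so the reduced density matrix of the system alone becomes

\rho_S = \sum_{n,m} c_n c_m^{*}\, \langle E_m|E_n\rangle\, |s_n\rangle\langle s_m| ,

and as the environment states become effectively orthogonal (\langle E_m|E_n\rangle \to 0 for n \neq m) the interference terms become unobservable in practice. The criticism, as I read the abstract, is that this only ever yields an improper mixture – the global superposition is still there – so if the proposed experiments could recover it, that would show decoherence by itself hasn't resolved the measurement problem.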
 