Limitations of the Mechanistic Assumption [Resources]



Just to note: JSTOR is offering free online access to a section of its library (or possibly all of it?). In any case, Heil's paper on memory traces, which Braude cites as a major inspiration, is available to read online for free:

Traces of Things Past
Dismantling the Memory Machine: A Philosophical Investigation of Machine Theories of Memory (Synthese Library)

An elaboration of the critiques in Heil's and Braude's** essays. You should be able to get the Kindle Edition free for 7 days, which is what I did.

It's about 170 pages and does a good job of critiquing the mechanistic picture of a human being, as well as the idea that there's an implicit structure to the world that parts of the brain maintain some isomorphic relation to.

**Memory Without a Trace


From a paper I mentioned in the Information & Reality Resources thread:

In his book QED, Feynman discusses the situation of photons being partially transmitted and partially reflected by a sheet of glass, the reflection amounting to four percent. In other words, one out of every 25 photons will be reflected on average, and this holds true even for a "one at a time" flux. The four percent cannot be explained by statistical differences among the photons (they are identical), nor by random variations in the glass. Something is "telling" every 25th photon, on average, that it should be reflected back instead of transmitted. Other quantum experiments lead to similar paradoxes.
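The four-percent statistic itself is trivial to reproduce as a Bernoulli process; here is a minimal Monte Carlo sketch (the function and its names are mine, purely illustrative). Note that the simulation does not dissolve the puzzle the passage raises: the model assigns every identical photon the same probability, but says nothing about which individual photon gets reflected.

```python
import random

def send_photons(n, p_reflect=0.04, seed=0):
    """Simulate n identical photons hitting the glass one at a time.

    Every photon gets the same fixed reflection probability; nothing
    about the photons or the glass varies, yet roughly 1 in 25 is
    reflected on average.
    """
    rng = random.Random(seed)
    reflected = sum(rng.random() < p_reflect for _ in range(n))
    return reflected / n

frac = send_photons(100_000)
print(f"reflected fraction: {frac:.4f}")  # near 0.04
```

With 100,000 trials the observed fraction sits very close to 0.04, just as in the "one at a time" experiments; the mystery is in the individual case, not the aggregate.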

Light dawns

Whether it was the ‘hand of God’ or some truly fundamental physical process that formed the constants, it is their apparent arbitrariness that drives physicists mad. Why these numbers? Couldn’t they have been different?

One way to deal with this disquieting sense of contingency is to confront it head-on. This path leads us to the anthropic principle, the philosophical idea that what we observe in the Universe must be compatible with the fact that we humans are here to observe it. A slightly different value for α would change the Universe; for instance by making it impossible for stellar processes to produce carbon, meaning that our own carbon-based life would not exist. In short, the reason we see the values that we see is that, if they were very different, we wouldn’t be around to see them. QED. Such considerations have been used to limit α to between 1/170 and 1/80, since anything outside that range would rule out our own existence.

But these arguments also leave open the possibility that there are other universes in which the constants are different. And though it might be the case that those universes are inhospitable to intelligent observers, it’s still worth imagining what one would see if one were able to visit.

These possibilities are entertaining to think about – and they might well be real in adjacent universes. But there’s something very intriguing about how tightly constructed the laws of our own Universe appear to be. Leuchs points out that linking c to the quantum vacuum would show, remarkably, that quantum fluctuations are ‘subtly embedded’ in classical electromagnetism, even though electromagnetic theory preceded the discovery of the quantum realm by 35 years. The linkage would also be a shining example of how quantum effects influence the whole Universe.

And if there are multiple universes, unfolding according to different laws, using different constants, anthropic reasoning might well suffice to explain why we observe the particular regularities we find in our own world. In a sense it would just be the luck of the draw. But I’m not sure this would succeed in banishing mystery from the way things are.

Presumably the different parts of the multiverse would have to connect to one another in specific ways that follow their own laws – and presumably it would in turn be possible to imagine different ways for those universes to relate. Why should the multiverse work like this, and not that? Perhaps it isn’t possible for the intellect to overcome a sense of the arbitrariness of things. We are close here to the old philosophical riddle of why there is something rather than nothing. That’s a mystery into which perhaps no light can penetrate.


The Strange Idea that Things Happen because They were Made to Happen:

'The talk will examine an embarrassment shared by both theological and scientific approaches to the intelligibility of the world and highlighted for theologians by Special Divine Action (SDA).

I will suggest that a serious, perhaps the central, problem presented by SDA is that of understanding a local event being brought about by an agency or force that is, by definition, absolutely general. The commonly expressed worry that SDA requires of God that he should violate His own laws reflects only the most obvious manifestation of what is a deeper difficulty; namely, finding an adequate explanation of the local, and actual, in the general.

The scientific endeavour to make the universe entirely intelligible - culminating in a putative Theory of Everything – encounters similar problems. I shall examine the Principle of Precedence in its various guises (inertia, laws of nature, probability) and different approaches to causation. They all prove profoundly unsatisfactory for different reasons. The difficulty common to various naturalistic responses to ‘Why’ is that of establishing an adequate connection between the explanandum and the explanation, given that the latter inevitably sets out general possibilities and the former is composed of singular actualities.

The goal, or regulative idea, of science – namely finding a sufficient reason for singular events in the general properties of the universe to which they belong - is analogous to the theological aim of making sense of SDA by connecting and reconciling such action with fundamental characteristics of God. I shall argue that theists and atheists both need to look critically at the very idea that things happen because they are made to happen, typically by what has preceded it characterised in most general terms; at the notion of ‘becausation’.

In the final, and most speculative and least-developed, part of the paper, I shall ask whether the search for an explanation of events in something that makes them happen is prompted by a felt need to reconnect items of an intrinsically seamless universe pulled apart into distinct elements by the irruption of self-consciousness into Being. This last idea is offered up tentatively for dissection.'



More about Gregg Rosenberg from Steve Esser linking causality, consciousness, and time:

Rosenberg, Consciousness & Causality (Part 1)

"It has been somewhat of a revelation to me this year to realize the degree to which causality still poses such a philosophical challenge. We are led to believe that the type of physical theories we have are also good objective causal explanations, but they are not. In showing how the challenges of understanding consciousness and causality are linked and making a proposal for a unified solution, Rosenberg’s book should make it extremely difficult for the reader to consider either topic in isolation from the other going forward.

Below I give a chapter by chapter summary derived from my notes on the book; please note that I can’t claim to be doing justice to the actual arguments here. I will follow this post up with another one containing some concluding thoughts and outstanding questions."

Rosenberg, Consciousness & Causality (Part 2)

Q&A with Gregg Rosenberg

Marcus Arvan, author of A New Theory of Free Will, on the importance of Rosenberg's book A Place for Consciousness:

Anyway, here's why I think the book is underappreciated. Here's a first pass: it's a book that more or less singlehandedly caused me, a one-time staunch physicalist, to become a mind-body dualist. I know this is just a personal anecdote, but still, I think it is worth dwelling on for a moment. I started out my undergraduate career doing philosophy of mind at Tufts with Dan Dennett, one of the hardest-core physicalists out there. I was completely on board with him. Dualism had always seemed silly to me, and completely at odds with any scientifically respectable account of reality. And reading David Chalmers' book The Conscious Mind didn't sway me at all. The Zombie Argument -- the argument that Chalmers' entire book was based on -- immediately struck me then (just as it does now) as utterly question-begging. It seemed to me that one will share Chalmers' intuition that zombies are conceivable, and so metaphysically possible, only if one antecedently finds dualism attractive. Since I didn't find dualism attractive in the slightest, the Zombie Argument seemed silly to me.

Anyway, I more or less remained a physicalist...until I read Rosenberg's book. What was it about the book that did it for me? What was it that converted me into a dualist? The opening of Rosenberg's preface hits the nail on the head.


Rosenberg put up a few chapters of his book, which are archived on New Dualism:

A Place for Consciousness

The Argument against Physicalism

The Boundary Problem for Experiencing Subjects

The Theory of Causal Significance

The Carrier Theory of Causation


Feser's posts criticizing the idea of "brute fact" explanations:
Similar thoughts from Lee Smolin:

Cosmological Natural Selection and the Principle of Precedence

"Physics is about discovering what the laws of nature are. And we’ve gone some distance towards that. But once you know what the laws of nature are, another question unfolds itself, which is why is it those the laws and not other laws?"

When we do experiments we prepare a system that we transform in some way and then we measure it and there’s some array of outcomes. And we expect that when we do that to a system in the future we’re going to get the same distribution of outcomes as we have in the past. If we’re studying a quantum system those outcomes will be probabilistic. We’ll get a statistical distribution of outcomes.

But it doesn’t matter when that’s done. If we did it ten years ago and repeat the experiment now we’ll get the same distribution of outcomes. If we repeat it in ten years and, we believe, in a billion years or ten billion years, we’ll get the same distribution of outcomes. Why is that?

So the standard reason is that there are timeless laws of nature which somehow just exist outside of time in some transcendent sense. And they know when you’re doing an experiment that applies to them and they make sure that they govern the outcome. So why do we expect to get an outcome in the future the same as in the past? Because the same law of nature is acting.

At some point it occurred to me that’s a really bizarre idea, especially if you believe that there’s nothing that’s real outside of time. Because what could a law of nature be that somehow lives outside of time, isn’t affected by anything, but comes in just when it’s needed and governs how things move and change?


Interview w/ Lee Smolin at Scientia Salon

Part of our view is that an aspect of moments, or events, is that they are generative of other moments. A moment is not a static thing, it is an aspect of a process (or vice versa) which generates new moments. The activity of time is a process by which present events bring forth or give rise to the next events.

I studied this idea together with Marina Cortes. We developed a mathematical model of causation from a thick present which we called energetic causal sets [6]. Our thought is that each moment or event may be a parent of future events. A present moment is one that has not yet exhausted or spent its capability to parent new events. There is a thick present of such events. Past events were those that exhausted their potential and so are no longer involved in the process of producing new events, they play no further role and therefore there is no reason to regard them as still existing. (So no to Ellis’s growing block universe.)
...there are several problems with extrapolating the laws that govern small subsystems to the universe as a whole. They are discussed in great detail in the books, but in brief:

  1. Those laws require initial conditions. Normally we vary the initial conditions to test hypotheses as to the laws. But in cosmology we must test simultaneously hypotheses as to the laws and hypotheses as to the initial conditions. This weakens the adequacy of both tests, and hence weakens the falsifiability of the theory.
  2. There is no possible explanation for the choice of laws, nor for the initial conditions, within the standard framework (which we call the Newtonian paradigm).

There are certainly common themes and influences in my work and those of Jaron Lanier and Roberto Mangabeira Unger. And I’m happy at times to indulge in some speculation about these influences. But these are very much to be distinguished from the science. The point is that I am happy to do the scientific work I can do now and trust for future generations to develop any implications for how we see ourselves in the universe. There is much serious, hard work to be done, and it will take a long time. Especially given the present confusions of actual science with the science fiction fantasies of many worlds and AI (these two ideas are expressions of the same intellectual pathology).


Precedence and freedom in quantum physics

A new interpretation of quantum mechanics is proposed according to which precedence, freedom and novelty play central roles. This is based on a modification of the postulates for quantum theory given by Masanes and Muller. We argue that quantum mechanics is uniquely characterized as the probabilistic theory in which individual systems have maximal freedom in their responses to experiment, given reasonable axioms for the behavior of probabilities in a physical theory. Thus, to the extent that quantum systems are free, in the sense of Conway and Kochen, there is a sense in which they are maximally free.

We also propose that laws of quantum evolution arise from a principle of precedence, according to which the outcome of a measurement on a quantum system is selected randomly from the ensemble of outcomes of previous instances of the same measurement on the same quantum system. This implies that dynamical laws for quantum systems can evolve as the universe evolves, because new precedents are generated by the formation of new entangled states.
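As a rough illustration only (this is my toy sketch, not Smolin's actual formalism; the class and parameter names are mine), the principle of precedence can be modelled as an urn of past outcomes: a measurement with sufficient precedent draws randomly from the ensemble of its own prior outcomes, while a genuinely novel measurement is free.

```python
import random

class PrecedentMeasurement:
    """Toy urn model of the principle of precedence: an outcome is
    drawn at random from the ensemble of outcomes of previous
    instances of the same measurement; a measurement without
    sufficient precedent is free to pick any allowed outcome."""

    def __init__(self, outcomes, n_free=5, seed=0):
        self.outcomes = list(outcomes)  # allowed results
        self.precedents = []            # outcomes of past instances
        self.n_free = n_free            # instances with no binding precedent
        self.rng = random.Random(seed)

    def measure(self):
        if len(self.precedents) < self.n_free:
            # Novel case: no binding precedent, maximal freedom.
            result = self.rng.choice(self.outcomes)
        else:
            # Bound by precedent: reuse a past outcome at random.
            result = self.rng.choice(self.precedents)
        self.precedents.append(result)
        return result

m = PrecedentMeasurement([0, 1], seed=1)
results = [m.measure() for _ in range(50)]
print(results[:10])
```

The design choice worth noticing: once the free phase is over, the statistics of later outcomes simply mirror the frequencies laid down by the earliest instances, so apparent lawlike regularity emerges from accumulated precedent rather than from a timeless rule.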


The physicist Lee Smolin argues that the laws of nature are mutable, which leads us to ask what holds those laws up and accounts for their orderly change? Meta-laws? And those meta-laws in turn are contingent on meta-meta-laws...and so on...

Troublemaker Lee Smolin Says Physics–and Its Laws–Must Evolve

Horgan: Some leading physicists, such as Tegmark, Susskind, and Greene, espouse multiverse theories plus the anthropic principle as a kind of final framework for cosmology. Comment?

Smolin: This is a sleight of hand by which they hope to convert an explanatory failure into an explanatory success. If we don’t understand the values the fundamental constants take in our universe, just presume our universe is a member of an infinite and unobservable ensemble of universes each with randomly chosen parameters. Our universe has the values it does because those make it hospitable to life.

There is so much wrong with this as a scientific hypothesis. As I have explained in detail in three of my books and several papers, it is hard to see how it could make any falsifiable predictions for doable experiments. Claims to the contrary are fallacious, as I and others have explained in detail. I won’t impose the details on your readers but just mention that these criticisms have not been answered.

What we have to do is to propose mechanisms by which the laws and constants may have evolved which imply falsifiable predictions by which they can be checked. I have proposed two: cosmological natural selection and the principle of precedence.

Horgan: You suggest above, and in your new book, that the laws of nature "evolve." Won't that hypothesis make physics and cosmology even more flexible and hence less falsifiable?

Smolin: No, the key lesson of cosmological natural selection (CNS) is that it makes falsifiable predictions for real observations. In fact the predictions I published in 1992 have held up. To mention one: there can be no neutron stars heavier than twice the mass of the sun. Current limits come close; the heaviest well-measured neutron star is at 1.9 solar masses, but so far none go over.

At first I had the same intuition your question expresses. But it's wrong; making laws evolve increases the falsifiability of science because it increases the number of hypotheses that can be checked because they imply falsifiable predictions. The reason is that the additional hypotheses concern the processes by which evolution took place. Since these processes would have taken place in the past they imply predictions which are checkable by real observations. This point is discussed in detail in my books Life of the Cosmos and Time Reborn.

One way to reconcile evolving laws with falsifiability is by paying attention to large hierarchies of time scales. The evolution of laws can be slow in present conditions, or only occur during extreme conditions that are infrequent. On much shorter time scales and far from extreme conditions, the laws can be assumed to be unchanging.

As Roberto Mangabeira Unger and I argue in our new book The Singular Universe, the most important discovery cosmologists have made is that the universe has a history. We argue this has to be extended to the laws themselves. Biology became science when the question switched from listing the species to the dynamical question of how species evolve. Fundamental physics and cosmology have to transform themselves from a search for timeless laws and symmetries to the investigation of hypotheses about how laws evolve.
The book Smolin mentions, The Singular Universe, is now offered for free on Unger's site. (discussion thread here)


In a critique of a piece by Pigliucci, Feser argues for the continued relevance and reality of teleology in some form, which would invalidate the mechanistic picture of the universe.

Pigliucci says: “It makes no sense to ask what is the purpose or goal of an electron, a molecule, a planet or a mountain.” But the remark is either aimed at a straw man or begs the question. If by “the purpose of an electron etc.” Pigliucci has in mind something like the kind of purposes that a heart or an eyeball has (which can only be understood by reference to the flourishing of the organism of which these organs are parts), or the kind that an artifact has (which can only be understood by reference to the human purposes for which the artifact was made), then he is of course correct that electrons, molecules, planets and mountains lack such purposes. But not all teleology need in the first place involve the kinds of purposes we see in bodily organs and artifacts, and those who attribute teleology to inorganic phenomena are not attributing to them those specific kinds of teleology. What they have in mind instead is mere directedness toward an end.

It is also surprising that a philosopher of science like Pigliucci should overlook a famous example of purported teleology within physics, viz. least action principles. (See Hawthorne and Nolan’s paper “What Would Teleological Causation Be?” for a recent brief discussion by philosophers.) Of course, whether such principles ought really to be regarded as teleological is a matter of controversy, but that is irrelevant to the present point. What is relevant is, first, that if they are teleological, they would not have the kind of teleology that bodily organs and artifacts have. Hence they would be good examples of the more rudimentary, sub-organic kind of purported teleology that Pigliucci entirely overlooks. Second, the very fact that least action principles at least seem to many people to be teleological is another good illustration of how even physics is arguably teleonomic even if one were to concede to Pigliucci that it is not teleological. Once again, that would undermine Pigliucci’s attempt to explain teleonomy in terms of natural selection.

A further problem with Pigliucci’s remarks is that he supposes that a reference to natural selection suffices to show that teleology has been banished from biology. But that is not the case. As various thinkers with no ID theoretic or otherwise theological ax to grind (e.g. Marjorie Grene, Andre Ariew, J. Scott Turner) have pointed out, natural selection by itself only casts doubt on teleology where questions of adaptation are concerned. Whether some sort of teleology is necessary to make sense of developmental processes within an organism is another question. (Keep in mind that whether such teleology would require reference to some sort of designer is, contrary to what Pigliucci seems to suppose, a yet further question -- and one which would require settling the dispute between Platonic teleological realism, Aristotelian teleological realism, Scholastic teleological realism, and teleological reductionism.)

Finally, Pigliucci overlooks some obvious problems with his remarks about consciousness. By his own admission, apparently, phenomena that involve consciousness are irreducibly teleological and not merely teleonomic. So far so good; I think that is certainly true. But in that case it is quite silly to pretend (as Pigliucci rather glibly does) that explaining consciousness merely requires that cognitive science find its own Darwin. The way Darwin accounts for adaptation is precisely by arguing that it is not really teleological at all but merely teleonomic. Naturally, then, if consciousness is irreducibly teleological, it is not even in principle going to be susceptible of that kind of reductionist or eliminativist explanation.

Of course, Pigliucci might respond that he didn’t mean to imply that consciousness would ever be explained in exactly the kind of manner Darwin employed, but only that it would require a scientist of Darwin’s stature to account for it. Fair enough, but even on this interpretation his remark is still much too glib. Darwin, and the other great names in modern science, are considered great largely because they are thought to have found ways to eliminate teleology from the phenomena they dealt with. In particular, they’ve treated teleology as a mere projection of the mind rather than a real feature of nature. Obviously you can’t apply that approach to conscious teleological processes without implicitly denying the existence of the thing you’re supposed to be explaining rather than actually explaining it. (And into the bargain, taking an incoherent position, since scientific theorizing, weighing evidence, etc. are themselves all teleological conscious processes.)

So, a “Darwin” of the science of consciousness would have to be as unlike Darwin, Newton, and Co. as they were unlike Aristotle. In particular, he’d have to reverse the anti-teleological trend of modern scientific theorizing. Or at any rate, he’d have to do so for all Pigliucci has said, or all he plausibly could say given what he’s willing to concede vis-à-vis the centrality of genuine teleology (not just teleonomy) to the understanding of human phenomena.


Chomsky on Language Acquisition

Though he sees language as computational in some sense, I thought this admission of current scientific ignorance was an interesting portion of the interview.
Thanks to PTEHA for this link:

Noam Chomsky on the unsolved mysteries of language and the brain

'Instead of seeking to show that the world is intelligible to us, the goals of science were implicitly lowered to construct theories that are intelligible to us.'

As an example of a theory explaining a non-intelligible world, Chomsky cites the recent confirmation of the existence of gravitational waves predicted by Einstein.

'That theory,' he says, 'is intelligible to us—but the conception on which it is based, of curved space-time, of quantum principles involved, for Galileo through Hume and Locke and so on, that would have been outside the framework of their science.

'They're intelligible, but the world isn't. It isn't a machine.'


The Machine Metaphor

Many of us aren’t aware of how much the premises of the early architects of the “mechanistic program” (from The New Biology) still guide our world views today. At the beginning of The New Biology, the authors give a description of the machine. They write how Descartes separated the universe into two categories, mind and matter. Descartes wrote, “The laws of Mechanics…are identical with those of nature.” With the help of Newton, Kepler and others the world came to be imagined as a big clock, a big mechanism. In Leo Marx’s book, he mentions the writer Thomas Carlyle, who is talking here about the philosopher John Locke: “His whole doctrine, is mechanical, in its aims and origin, in its method and its results.” Locke made the mind contingent on events outside of itself. Carlyle is saying that according to this metaphysic (my word), we no longer have will, imagination and creativity. We will be directed by outside forces. Here are the differences between a machine and an organism.

A machine has to be directed from without. An organism is directed from within. “No machine rebuilds its own parts. All organisms, however, constantly renew their tissues and cells right down to the molecules.”

“Another difference is that machines do not grow from seeds or eggs, but are composed of unchanging parts, assembled from the outside. Consequently, when a machine begins to function it already has all its parts. Not so the living thing. From its beginning it grows, not only in size but with an increasing differentiation of parts, organs, and new functions.”

Another difference: “machines have only a unity of order, not a unity of substance. A horse, through growth, determines its own shape and structure; consequently, its organs, tissues, and cells are identifiably horse organs, horse tissues, and horse cells. Every part of the living being, down to its macromolecules, bears the signature of its owner.” The parts of a clock, by contrast, carry no such signature of their origin.

“Another fundamental difference between organism and machine is that organisms are natural things, whereas machines are artificial things….Artifacts are man-made and assembled from without. Organisms are made by nature and develop from within.”

“A further difference is that the parts of a machine can be completely separated, then reassembled, so that the machine again runs normally.” (The New Biology, Robert Augros and George Stanciu, 1987)

As Erich Fromm wrote, “Man is not a machine.”


Old post arguing for the necessity of a Prime Mover:

...I find that in contrast to the god of meaning, the unmoved god of movement may be an ontological necessity. The universe manifests itself as a set of discernible differences at our systemic boundary, which is to say, a vector of bits that our mind interprets as its current state. This current state contains the past as memories, and the future as possibilities, and possibly even ourselves as the structure of an interpretational system, and yet it does not suffice to account for any conscious experience. Consciousness is intrinsically a process, spanning more than one state. To notice that we are conscious, we must be able to access patterns of information that we may interpret as past world states, along with other patterns that we may interpret as different past world states, and in comparing them to experience that a progression of states takes place. A progression of states requires a principle that allows for transition from one state to the next. This is reflected in our definitions of computation, which are based on states that are ordered by a transition function.

In other words, the universe does not manifest itself as a giant pattern of bits, but as a succession of patterns, which means that something must progress from one pattern to the next. Some function, outside of the context of the state of the universe, must push the tensor network we tentatively call reality from one moment to the next.

The computations of the universe can in principle not be self-contained. If the universe contains mechanisms that allow it to compute, then something must act as its computational substrate that moves the computations of the universe along. There is of course nothing we can say about this computational substrate, except that it realizes a transition function capable of moving the universe from one state to the next. This outermost mechanism does not exist in space or time; rather, its state successions give rise to space and time, and all objects that appear to populate our animated universe. Compare to Aristotle...

...Without this Prime Mover, nothing in the universe could progress, computation would be impossible, we would not have minds, and hence could not have a conscious experience of a dynamic or even static universe. It may be tempting to conflate Aristotle's god: the Turing machine that runs the universe, with Aquinas' god: transcendental meaning and purpose, or even with a spiritualized god. But such an attempt seems to be epistemologically dishonest to me.
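The computational analogy the quoted post relies on can be stated concisely in code: a history of states exists only relative to a transition function, and that function is not itself part of any state. A minimal sketch (the function and the toy "universe" are mine, purely illustrative):

```python
from typing import Callable, TypeVar

State = TypeVar("State")

def run(transition: Callable[[State], State], state: State, steps: int) -> list:
    """Produce a succession of states. The history exists only
    because `transition` acts on each state from outside it: no
    individual state contains the rule that yields the next one."""
    history = [state]
    for _ in range(steps):
        state = transition(state)
        history.append(state)
    return history

# A trivial 'universe' whose state is a counter.
print(run(lambda s: s + 1, 0, 5))  # [0, 1, 2, 3, 4, 5]
```

This is exactly the structure of the post's claim: inspecting any single element of the history tells you nothing about `transition`; the succession, and hence anything process-like such as consciousness, presupposes it.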


The whole man

My recent review of Michael Gazzaniga’s Who’s in Charge? Free Will and the Science of the Brain is now available online at the Claremont Review of Books website. And while you’re on the subject of philosophical anthropology, you might also take a look at William Carroll’s recent Public Discourse article “Who Am I? The Building of Bionic Man.”

I have discussed this reifying tendency in an earlier post here. I’ve commented on some of the erroneous claims about free will, perception, “mindreading,” etc. commonly made in the name of neuroscience here, here, and here. Biological reductionism is addressed in a couple of further earlier posts, here and here. And I discussed bionics here.


Putnam and analytical Thomism, Part I

(the second part is about religion, will likely post it in a different relevant thread when it comes out)

...Now, while Putnam is sympathetic to the anti-reductionist bent of Aristotelianism, he too resists returning to an Aristotelian conception of nature. That brings us to his exchange with Haldane. Haldane’s essay has the memorable title “Realism with a metaphysical skull,” which is a play on the title of Putnam’s book Realism with a Human Face. Putnam wants to defend the reality of the everyday, commonsense world of human experience against reductionistic materialists who are willing to affirm the reality only of what can be described in the language of physical science. Haldane argues (correctly, in my view) that doing this successfully requires defending also some version of an Aristotelian conception of nature, which is what Putnam is unwilling to do. The “human face,” in Haldane’s view, requires an underlying “metaphysical skull” to hold it up.

One of the themes of Haldane’s article is that it is -- as Putnam himself emphasizes -- a mistake to try to explain the relationship between mind and world in terms of causal relations between inner mental representations (Lockean ideas, sentences in a “language of thought,” or what have you) and physical objects. One of the problems with this approach is that it opens up a gap between mind and world which can never be closed. Another is that it presupposes too narrow an understanding of causation. Modern representationalist-cum-causal theories of the mind confine themselves to what Aristotelians call efficient causation (and a desiccated notion of efficient causation at that). The right way to understand the relationship between mind and world, Haldane argues, is in terms of formal causation.

One application of this idea is that the right way to understand the relationship between a thought and its object is in terms of formal identity. When you judge that such-and-such an object is a triangle, what happens is that the mind takes on the very same form that the matter of which the triangle is composed has taken on. Precisely because there is, on the side of the thought, nothing material that has taken on this form, the thought is not itself a triangle (as any material thing that took on that form would be) but merely a thought about a triangle. But precisely because it has the very same form that the triangle has, the thought is a thought about a triangle rather than about something else. Again, thought and thing are formally identical, though not identical full stop. And because of this formal identity there is no gap between mind and world that needs to be bridged in the efficient-causal terms that causal theories of thought appeal to...


Why science needs to break the spell of reductive materialism

Stuart Kauffman is professor emeritus at the University of Pennsylvania. He was educated in philosophy, psychology and physiology at Dartmouth and Oxford, and obtained his medical degree from UCSF in 1968. He is an affiliate professor at the Institute for Systems Biology in Seattle. His latest book is Humanity in a Creative Universe (2016).

We all sense something deeply deficient in our modern civilisation. Is it an absence of spirituality? Partly. A greedy materialism beyond what we really need? Yes, we are riding the tiger of late capitalism, where we make our living producing, selling and buying goods and services we often do not need on this finite planet. We cannot see ourselves, in part blinkered by unneeded scientism.

The central framework of current physics is that of entailing laws. The central image is the billiard table as boundary conditions and the set of all possible initial conditions of position and momenta of the balls on the table. Then, given Isaac Newton’s laws in differential form, we deduce the deterministic trajectories of the balls. Our model of how to do science is to deduce new consequences, test them, accept or reject the results by diverse criteria, then retain or modify our theories. Science proceeds as Aristotle might have wished, in part as deduction.

My aim is to begin to demolish this hegemony of reductive materialism and its grip on our scientific minds, and a far wider elicitation of a grossly misplaced scientism in modernity. Science is sciencia, knowledge. Being and becoming are more fundamental to all life and our humanity. We are, first of all, alive, and alive in a becoming biosphere. Despite bursts of extinction events and the fact that 99.9 per cent of all species that ever lived are gone, the biosphere flowers on. This flowering of the biosphere, more than a metaphor for human history, begins to suggest a mythic structure beyond that by which we currently live.
The Universe is 13.7 billion years old and has about 10^80 particles. The fastest time scale in the Universe is the Planck time scale of 10^-43 seconds. If the Universe were doing nothing but using all 10^80 particles in parallel to make proteins the length of 200 amino acids, each in a single Planck moment, it would take 10^39 repetitions of the history of the Universe to make all the possible proteins the length of 200 amino acids just once! Now consider CHNOPS and all the molecular species with 100,000 CHNOPS atoms. We have no idea how vastly many repetitions of the history of the Universe it would take to make them all.

What I have just said is, I think, of the deepest importance. As we consider proteins the length of 200 amino acids and all possible CHNOPS molecules with 100,000 atoms or fewer per molecule, it is obvious that the Universe will never make them all. History enters when the space of what is possible is vastly larger than what can actually happen.

A next point is simple and clear: consider all the CHNOPS molecules that can be made with one, with two, with three, with four, with n, with 100,000 atoms per molecule. The space of possible molecules grows rapidly with the number of atoms per molecule. Call the space of possible molecules with n atoms of CHNOPS the phase space for CHNOPS molecules of n atoms. That phase space increases enormously as n increases. Consequently, in the lifetime of the Universe, as n increases, that phase space will be sampled ever more sparsely. The Universe will make all CHNOPS molecules with two atoms, but not all with 100,000.
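Kauffman's central claim here — that the space of possible proteins dwarfs what the Universe could ever synthesize — is easy to sanity-check with order-of-magnitude arithmetic. The sketch below uses only the inputs stated in the passage (20 amino acids, length 200, 10^80 particles, 10^-43 s Planck moments, a 13.7-billion-year history). On these inputs the required number of repetitions comes out around 10^120, even larger than the figure quoted, so the qualitative conclusion — the possible vastly exceeds the actual — holds with enormous room to spare.

```python
import math

# Inputs as stated in the passage above.
AMINO_ACIDS = 20                 # distinct amino acid types
LENGTH = 200                     # protein length considered
PARTICLES = 1e80                 # particles in the observable Universe
PLANCK_TIME = 1e-43              # seconds per "Planck moment" (as quoted)
AGE_SECONDS = 13.7e9 * 3.156e7   # ~13.7 billion years in seconds

# Number of possible proteins of length 200: 20^200, i.e. about 10^260.
log10_possible = LENGTH * math.log10(AMINO_ACIDS)

# Proteins makeable in one history of the Universe, if every particle
# produced one protein per Planck moment, all in parallel.
log10_actual = math.log10(PARTICLES) + math.log10(AGE_SECONDS / PLANCK_TIME)

# Repetitions of cosmic history needed to make each possible protein once.
log10_repetitions = log10_possible - log10_actual

print(f"possible proteins    ~ 10^{log10_possible:.0f}")
print(f"makeable per history ~ 10^{log10_actual:.0f}")
print(f"repetitions needed   ~ 10^{log10_repetitions:.0f}")
```

Working in base-10 logarithms keeps the arithmetic exact enough for an order-of-magnitude argument while avoiding integers with hundreds of digits.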

We have no way to study this exploration deterministically. Here in the heart of classical physics, reductive materialism can fail. Sciencia fails, reason fails, and doors open to how we live forward. We start to be set free as humans in a creative universe. We co-create with one another and with nature, but by the very creativity of the Universe and us in it, we cannot know what we will co-create.

Then what can guide us? Our guide can be a new founding mythic structure that reflects our full enlivenment: humanity in a creative universe, biosphere and human individual, and social lives that are fully lived and that keep becoming. The dream is diversity, more ways of being human as our 30 or so civilisations across the globe weave together gently enough to honour their roots and allow change to unfold gracefully. Our global woven civilisation is ours to create, ever-unknowing, facing, as Immanuel Kant said, the crooked timber of our humanity.


Here's Raymond Tallis' argument against memories being stored in the brain -> A Smile At Waterloo Station:

...Neurophilosophers will not be impressed by my objection. The difference between the shock-chastened sea snail and my feeling sad over a meeting that passed so quickly, is simply the difference between 20,000 neurons and a hundred billion; or, more importantly, between the modest number of connexions within Aplysia’s nervous system, and the unimaginably large number of connexions in your brain (said to be of the order of 100 trillion). Well, I don’t believe that the difference between Kandel’s ‘memory in a dish’ and my actual memory is just a matter of the size of the nervous system or the number or complexity of the neurons in it. Clarifying this difference will enable us to see what is truly mysterious in memory...

...Making present something that is past as something past, that is to say, absent, hardly looks like a job that a piece of matter could perform, even a complex electrochemical process in a piece of matter such as a brain. But we need to specify more clearly why not. Material objects are what they are, not what they have been, any more than they are what they will be. Thus a changed synaptic connexion is its present state; it is not also the causes of its present state. Nor is the connection ‘about’ that which caused its changed state or its increased propensity to fire in response to cues. Even less is it about those causes located at a temporal distance from its present state. A paper published in Science last year by Itzhak Fried claiming to solve the problem of memory actually underlines this point. The author found that the same neurons were active in the same way when an individual remembered a scene (actually from The Simpsons) as when they watched it.

So how did people ever imagine that a ‘cerebral deposit’ (to use Henri Bergson’s sardonic phrase) could be about that which caused its altered state? Isn’t it because they smuggled consciousness into their idea of the relationship between the altered synapse and that which caused the alteration, so that they could then imagine that the one could be ‘about’ the other? Once you allow that, then the present state of anything can be a sign of the past events that brought about its present state, and the past can be present. For example, a broken cup can signify to me (a conscious being when I last checked) the unfortunate event that resulted in its unhappy state.

Of course, smuggling in consciousness like this is inadmissible, because the synapses are supposed to supply the consciousness that reaches back in time to the causes of the synapses’ present states. And there is another, more profound reason why the cerebral deposit does not deliver what some neurophysiologists want it to, which goes right to the heart of the nature of the material world and the physicist’s account of its reality – something that this article has been circling round. I am referring to the mystery of tensed time; the mystery of an explicit past, future and present.

That remembered smile is located in the past, so my memory is aware that it reaches across time. In the mind-independent physical world, no event is intrinsically past, present or future: it becomes so only with reference to a conscious, indeed self-conscious, being, who provides the reference point – the now which makes some events past, others future, and yet others present. The temporal depth created by memories, which hold open the distance between that which is here and now and that which is no longer, is a product of consciousness, and is not to be found in the material world. As Einstein wrote in a moving letter at the end of his life, “People like me who believe in physics know that the distinction between past, present and future is only a stubbornly persistent illusion.” I assume that those who think of memory as a material state of a material object – as a cerebral deposit – also believe in physics – in which case they cannot believe that tensed time exists in the brain, or more specifically, in synapses. A material object such as the brain may have a history that results in its being altered, but the previous state, the fact of alteration, or the time interval between the two states, are not present in the altered state. A synapse, like a broken cup, does not contain its previous state, the event that resulted in its being changed, the fact that it has changed, the elapsed time, or anything else containing the sense of its ‘pastness’ which would be necessary if it were the very material of memory. How could someone ever come to believe it could?


From Paul Davies' essay "Universe From Bit" in Information and the Nature of Reality: From Physics to Metaphysics:

"...The orthodox view of the nature of the laws of physics contains a long list of tacitly assumed properties. The laws are regarded, for example, as immutable, eternal, infinitely precise mathematical relationships that transcend the physical universe, and were imprinted on it at the moment of its birth from “outside,” like a maker’s mark, and have remained unchanging ever since… In addition, it is assumed that the physical world is affected by the laws, but the laws are completely impervious to what happens in the universe… It is not hard to discover where this picture of physical laws comes from: it is inherited directly from monotheism, which asserts that a rational being designed the universe according to a set of perfect laws. And the asymmetry between immutable laws and contingent states mirrors the asymmetry between God and nature: the universe depends utterly on God for its existence, whereas God’s existence does not depend on the universe…

Clearly, then, the orthodox concept of laws of physics derives directly from theology. It is remarkable that this view has remained largely unchallenged after 300 years of secular science. Indeed, the “theological model” of the laws of physics is so ingrained in scientific thinking that it is taken for granted. The hidden assumptions behind the concept of physical laws, and their theological provenance, are simply ignored by almost all except historians of science and theologians. From the scientific standpoint, however, this uncritical acceptance of the theological model of laws leaves a lot to be desired…"