Symbiotic Existential

Cosmology

Full Colour PDF (30 MB), white pages print version

Chris King

CC BY-NC-ND 4.0  doi:10.13140/RG.2.2.32891.23846
Part 2 Conscious Cosmos
Updated 5-8-2021, 3-2024

 

dhushara.com

 

Contents Summary - Contents in Full

 

Dedication

The Core

Symbiotic Existential Cosmology:

            Scientific Overview, Discovery and Philosophy

Biocrisis, Resplendence and Planetary Reflowering

Psychedelics in the Brain and Mind, Therapy and Quantum Change, The Devil's Keyboard

Fractal, Panpsychic and Symbiotic Cosmologies, Cosmological Symbiosis

Quantum Reality and the Conscious Brain

The Cosmological Problem of Consciousness in the Quantum Universe

The Physical Viewpoint, The Neuroscience Perspective

The Evolutionary Landscape of Symbiotic Existential Cosmology

Evolutionary Origins of Conscious Experience

Science, Religion and Gene Culture Co-evolution

Animistic, Eastern and Western Traditions and Entheogenic Use

Natty Dread and Planetary Redemption

Yeshua’s Tragic Mission, Revelation and Cosmic Annihilation

Ecocrisis, Sexual Reunion and the Entheogenic Traditions

Song cycle, Video

Communique to the World: To save the diversity of life from mass extinction

The Vision Quest to Discover Symbiotic Existential Cosmology

The Great I AM, Cosmic Consciousness, Conscious Life and the Core Model of Physics

The Evolution of Symbiotic Existential Cosmology

Resplendence

A Moksha Epiphany

   Epilogue   

            References

  Appendix: Primal Foundations of Subjectivity, Varieties of Panpsychic Philosophy

 

 

Consciousness is eternal, life is immortal.

Incarnate existence is Paradise on the Cosmic equator

in space-time the living consummation of all worlds.

But mortally coiled! As transient as the winds of fate!

 

 

 

Symbiotic Existential Cosmology – Contents in Full

 

Dedication

The Core

A Scientific Overview

Biogenic

Panpsychic

Symbiotic

Discovery and Philosophy

The Existential Condition and the Physical Universe

Turning Copernicus Inside Out

Discovering Life, the Universe and Everything

The Central Enigma: What IS the Conscious Mind?, Glossary

Biocrisis and Resplendence: Planetary Reflowering

The Full Scope: Climate Crisis, Mass Extinction, Population and Nuclear Holocaust

Psychedelics in the Brain and Mind

Therapy and Quantum Change: The Results from Scientific Studies

The Devil's Keyboard

Biocosmology, Panpsychism and Symbiotic Cosmology

Fractal Biocosmology

Darwinian Cosmological Panpsychism

Cosmological Symbiosis

Symbiosis and its Cosmological Significance

Quantum Reality and the Conscious Brain

The Cosmological Problem of Consciousness, The Central Thesis, The Primal Axiom

The Physical Viewpoint, Quantum Transactions

The Neuroscience Perspective, Field Theories of Consciousness

Conscious Mind, Resonant Brain

Cartesian Theatres and Virtual Machines

Global Neuronal Workspace, Epiphenomenalism & Free Will

Consciousness and Surviving in the Wild

Consciousness as Integrated Information

Is Consciousness just Free Energy on Markov Landscapes?

Can Teleological Thermodynamics Solve the Hard Problem?, Quasi-particle Materialism

Panpsychism and its Critics

The Crack between Subjective Consciousness and Objective Brain Function

A Cosmological Comparison with Chalmers' Conscious Mind

Minimalist Physicalism and Scale Free Consciousness

Defence of the real world from the Case Against Reality

Consciousness and the Quantum: Putting it all Back Together

How the Mind and Brain Influence One Another

The Diverse States of Subjective Consciousness

Consciousness as a Quantum Climax

TOEs, Space-time, Timelessness and Conscious Agency

Psychedelics and the Fermi Paradox

The Evolutionary Landscape of Symbiotic Existential Cosmology

Evolutionary Origins of Neuronal Excitability, Neurotransmitters, Brains and Conscious Experience

The Extended Evolutionary Synthesis, Deep and dreaming sleep

The Evolving Human Genotype: Developmental Evolution and Viral Symbiosis

The Evolving Human Phenotype: Sexual and Brain Evolution, the Heritage of Sexual Love and Patriarchal Dominion

Gene Culture Coevolution

The Emergence of Language

Niche Construction, Habitat Destruction and the Anthropocene

Democratic Capitalism, Commerce and Company Law

Science, Religion and Gene-Culture Coevolution, The Spiritual Brain, Religion v Nature, Creationism

The Noosphere, Symbiosis and the Omega Point

Animism, Religion, Sacrament and Cosmology

Is Polyphasic Consciousness Necessary for Global Survival?

The Grim Ecological Reckoning of History

Anthropological Assumptions and Coexistential Realities

Shipibo: Split Creations and World Trees

Meso-American Animism and the Huichol

The Kami of Japanese Shinto

Maori Maatauranga

Pygmy Cultures and Animistic Forest Symbiosis

San Bushmen as Founding Animists

The Key to Our Future Buried in the Past

Entasis and Ecstasis: Complementarity between Shamanistic and Meditative Approaches to Illumination

Eastern Spiritual Cosmologies and Psychotropic Use

Psychedelic Agents in Indigenous American Cultures

Natty Dread and Planetary Redemption

The Scope of the Crisis

A Cross-Cultural Perspective

Forcing the Kingdom of God

The Messiah of Light and Dark

The Dionysian Heritage

The Women of Galilee and the Daughters of Jerusalem

Whom do Men say that I Am?

Descent into Hades and Harrowing Hell

Balaam the Lame: Talmudic Entries

Soma and Sangre: No Redemption without Blood

The False Dawn of the Prophesied Kingdom

Transcending the Bacchae: Revelation and Cosmic Annihilation

The Human Messianic Tradition

Ecocrisis, Sexual Reunion and the Tree of Life

Biocrisis and the Patriarchal Imperative

The Origins and Redemption of Religion in the Weltanschauung

A Millennial World Vigil for the Tree of Life

Redemption of Soma and Sangre in the Sap and the Dew

Maria Sabina's Holy Table and Gordon Wasson's Pentecost

The Man in the Buckskin Suit

Santo Daime and the União do Vegetal

The Society of Friends and Non-sacramental Mystical Experience

The Vision Quest to Discover Symbiotic Existential Cosmology

The Three Faces of Cosmology

Taking the Planetary Pulse

Planetary Reflowering

Scepticism, Belief and Consciousness

Psychedelics: The Edge of Chaos Climax of Consciousness

Discovering Cosmological Symbiosis

A Visionary Journey

The Great I AM, Cosmic Consciousness, Conscious Life and the Core Model of Physics

Evolution of Symbiotic Existential Cosmology

Crisis and Resplendence

Communique on Preserving the Diversity of Life on Earth for our Survival as a Species

Affirmations: How to Reflower the Diversity of Life for our own Survival

Entheogenic Conclusion

A Moksha Epiphany

Epilogue

Symbiotic Existential Cosmology is Pandora's Pithos Reopened and Shekhinah's Sparks Returning

The Weltanschauung of Immortality

Paradoxical Asymmetric Complementarity, The Natural Face of Samadhi vs Male Spiritual Purity, Clarifying Cosmic Karma

Empiricism, the Scientific Method, Spirituality and the Subjective Pursuit of Knowledge

The Manifestation Test

References

Appendix Primal Foundations of Subjectivity, Varieties of Panpsychic Philosophy

 

 

The Conscious Brain and the Cosmological Universe [24]

Solving the Central Enigma of Existential Cosmology

Chris King – 21-6-2021

In memory of Maria Sabina and Gordon Wasson

 

Contents

 

1 The Cosmological Problem of Consciousness

2 Psychedelic Agents in Indigenous American Cultures

3 Psychedelics in the Brain and Mind

4 Therapy and Quantum Change: Scientific Results

5 Evolutionary Origins of Excitability, Neurotransmitters and Conscious Experience

6 The Evolutionary Landscape of Symbiotic Existential Cosmology

7 Fractal Biocosmology, Darwinian Cosmological Panpsychism and Symbiotic Cosmology

8 Animistic, Eastern and Western Traditions and Entheogenic Use

9 Natty Dread and Planetary Redemption

10 Biocrisis and Resplendence: Planetary Reflowering, A Moksha Epiphany

 

Abstract:

 

This article resolves the central enigma of existential cosmology – the nature and role of subjective experience – thus providing a direct solution to the "hard problem of consciousness". This solves, in a single coherent cosmological description, the core existential questions surrounding the role of the biota in the universe, the underlying process supporting subjective consciousness, and the meaning and purpose of conscious existence. This process has pivotal importance for avoiding a human-caused mass extinction of biodiversity, and possibly our own demise, and instead enables us to fulfil our responsibilities as guardians of the unfolding of sentient consciousness on evolutionary and cosmological time scales.

 

The article overviews cultural traditions and current research into psychedelics [25] and formulates a panpsychic cosmology, in which the mind at large complements the physical universe, resolving the hard problem of consciousness, extended to subjective conscious volition over the universe, and the central enigmas of existential cosmology and eschatology, in a symbiotic cosmological model. The symbiotic cosmology is driven by the fractal non-linearities of the symmetry-broken quantum forces of nature, subsequently turned into a massively parallel quantum computer by biological evolution (Darwin 1859, 1889). Like Darwin's insights, this triple cosmological description is qualitative rather than quantitative, but nevertheless accurate. Proceeding from fractal biocosmology and panpsychic cosmology, through edge-of-chaos dynamical instability, the excitable cell and then the eucaryote symbiosis create a two-stage process, in which the biota capture a coherent encapsulated form of panpsychism, which is selected for because it aids survival. This becomes sentient in eucaryotes due to excitable membrane sensitivity to quantum modes and eucaryote adaptive complexity. Founding single-celled eucaryotes already possessed the genetic ingredients of excitable neurodynamics, including G-protein linked receptors and a diverse array of neurotransmitters, as social signalling molecules ensuring survival of the collective organism. The brain conserves these survival modes, so that it becomes an intimately-coupled society of neurons communicating synaptically via the same neurotransmitters, modulating key survival dynamics of the multicellular organism, and forming the most complex, coherent dynamical structures in the physical universe.

 

This results in consciousness as we know it, shaped by evolution for the genetic survival of the organism. In our brains, this becomes the existential dilemma of ego in a tribally-evolved human society, evoked in core resting state networks, such as the default mode network, also described in the research as "secondary consciousness", in turn precipitating the biodiversity and climate crises. However, because the key neurotransmitters are simple, modified amino acids, the biosphere will inevitably produce molecules modifying the conscious dynamics, exemplified in the biospheric entheogens, in such a way as to decouple the ego and enable existential return to the "primary consciousness" of the mind at large, placing the entheogens as conscious equivalents of the LHC in physics. Thus a biological symbiosis between Homo sapiens and the entheogenic species enables a cosmological symbiosis between the physical universe and the mind at large, resolving the climate and biodiversity crises long term in both a biological and a psychic symbiosis, ensuring planetary survival.

 

The Decline of Ground-Breaking Disruptive Scientific Discoveries

 

The research of Park, Leahey & Funk (2022) confirms that papers and patents are becoming less disruptive over time. I want to draw readers' attention to the fallacy that the past record of science and technology is a basis for believing that pure physicalist science will show how the brain "makes" consciousness in any sense greater than the neural correlate of conscious experience. This needs to be taken seriously: it is damning evidence against the assumption that the past progress of mechanistic science will solve the hard problem of conscious volition.

 

Fig 70b: Decline of disruptive science and technology

 

The figure shows just how devastating the decline has become and indicates the extreme unlikelihood of mechanistic science solving the biggest problem of all. This belief is a product of severe ignorance of the diffuse complexity of the excitation from the prefrontals through to the motor cortex modified by the basal ganglia and the cerebellum, involving both diffuse network activity and deep cyclic connections, which appear to be both uncomputable and empirically undecidable in the quantum universe.

 

Research Citation Profile

 

Fig 70c: The research citation profile of Symbiotic Existential Cosmology

 

Growth of research and distribution of citation dates: in the two years since the mushroom trip that precipitated this work, it has grown to 782 pages, with 1503 source references, peaking at 101 citations dated 2022 and 82 dated 2023. Of these, 1096 are from 2000 on, 839 from 2010 on and 328 from 2020 on, illustrating the real-time, up-to-date nature of the work, which falls roughly into four categories: (1) cosmological physics, (2) consciousness and neuroscience, (3) evolutionary biology, (4) metaphysics, animism and religious studies. Fittingly, the oldest citation is Charles Darwin (1859), "On the Origin of Species".

 

The Central Thesis of Symbiotic Existential Cosmology © Chris King 25-10-2023 as part of SEC 1.1.416

  

The central thesis of Symbiotic Existential Cosmology is that subjective consciousness interacts with the physical brain as a sub-quantum anticipator of environmental uncertainty, using space-time-entangled patterns of brain wave states, which are in a quantum phase transition between superimposed wave function evolution and measurement-derived wave function collapse. This is biologically coupled to a complementary, self-critically tuned neurodynamic phase transition, in which edge-of-chaos instabilities in conscious brain dynamics, accompanied by wave phase modulation, implicitly involve both past and future special-relativistic states in their entangled wave functions. This facilitates the efficacy of subjective conscious volition over the physical universe by anticipating existential threats to organismic survival, a process which is thus preserved by evolution, in a universe that is also in a phase transition between full entanglement and wave collapse – a process which is itself in a state of biospheric evolutionary climax at the edge of chaos.

 

This resolves two complementary questions essential to a complete existential cosmology:

 

(a) The neurobiological question of the hard problem of the subjectivity of consciousness, extended to volition, in which subjective conscious physical volition is manifested through space-time-entangled patterns of brain wave states, in a quantum phase transition, coupled to self-organised phase transitions at the edge of chaos, complemented by tuned modulation of wave phase.

(b) The cosmological question of the origin and foundation of conscious existence, manifesting in evolutionary climax, in a quantum universe which is likewise in an ongoing, partially collapsed dynamic phase transition of quantumness, between a fully quantum-entangled cosmic wave function and a measurement-reduced, substantially quantum-collapsed universe, due to a plethora of destructive measurement processes that tend to evoke "classical" outcomes through the projection operator, as in the cat paradox experiment, and in which planetary biospheres and conscious existence co-arise in evolutionary climax.

 

Empirical experience confirms that, while the existence of the universe is necessary to our biological survival, subjective conscious experience is primary. We access the physical world entirely and exclusively through our subjective conscious experiences of it, so subjective consciousness is clearly primary to our knowledge of the physical universe, whose physical quanta, the bosons and fermions, we do not experience directly but only inferentially, through our subjective, biologically derived sensory perceptions, along with our inner, less-confined subjective experiences in dreams and visionary states. Nevertheless, we understand that, as living organisms, the physical universe is necessary to our biological survival and an inferred foundation of our common existence.

 

Recall that, in a classical deterministic Laplacian universe, every future state is fully determined by the dynamical laws of evolution and the initial conditions, while in the quantum universe, our knowledge of outcomes is restricted to the evolution of the Schrödinger equation, punctuated by collapse of the wave function, resulting in discrete events having a probability determined by the wave function amplitude. This leads to the so-called non-local hidden-variable theories, postulated below the quantum level, from pilot waves, through quantum transactions, to super-determinism, which seek to explain and determine the specific individual outcomes we see, for example in the Schrödinger cat paradox experiment, where a cat subjected to a lethal dose of cyanide, triggered by a radioactive decay, is found to be alive or dead but not both, e.g. with a 50-50 probability. The quantum universe is thus in a state of probability-punctuated causality and is neither fully defined, nor causally closed. We attempt to understand this using two classical ideas: (1) the deterministic nature of billiard ball collisions and (2) the probabilities of a classical poker game, applying the first to the Schrödinger evolution and decoherence and the second to the probability interpretation of wave function collapse, but neither is adequate to deal with the non-locality implied by quantum entanglement. God does not play dice!
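The probability-punctuated character of collapse can be made concrete with a minimal numerical sketch (illustrative only, and not part of the original argument: classical pseudo-randomness here merely stands in for whatever sub-quantum process fixes each outcome, and the 50-50 weighting follows the cat paradox example above):

```python
import random

def born_trial(p_alive=0.5, rng=random.random):
    """One run of the cat experiment: the superposed state collapses to
    'alive' with Born probability |psi_alive|^2 (here 0.5), else 'dead'."""
    return "alive" if rng() < p_alive else "dead"

# Each individual outcome is irreducibly uncertain, but the ensemble
# statistics converge on the Born probabilities.
random.seed(42)
n = 10_000
alive = sum(born_trial() == "alive" for _ in range(n))
print(alive / n)  # close to 0.5 for large n
```

The point of the sketch is the asymmetry it exhibits: no inspection of the code's history of outputs predicts the next single outcome, yet the long-run frequencies are fully determined.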

 

The central thesis of Symbiotic Existential Cosmology asserts that subjective consciousness is a key process resolving cosmological indeterminacy, by forming a neurodynamic realisation of these hidden variable interactions governing non-local entanglement, facilitating biological survival by anticipating quantum-derived environmental uncertainty in a manner complementing computational boundary conditions. Space-time anticipation constitutes whole conscious experiences containing a retro-causal echo of the future, both in intuitive conscious anticipation and in rarer, more graphic experiences of prescience, precognition and déjà vu. This is not just a deductive process. It cannot be analysed or predicted by causal computation, for the same reason entanglement prevents transmission of classical information under local Einsteinian causality, and the uncertainties are irreducible, because they involve both computationally intractable environmental uncertainties and indeterminacies due to other live conscious volitional agents.

 

The central thesis states that the conscious brain uses its continuous wave tissue potentials to evoke a unique form of quantum space-time anticipation through global-scale entangled excitations moving in and out of coherence, in a phase transition between full entanglement and wave function collapse that is reflected biodynamically in (a) the phase transition between edge of chaos dynamics at tipping points between chaos and order, and (b) phase coupling between continuous electromagnetic wave potentials in the tissues and individual neuronal action potentials in the neural net connectivity. It is thus not just a quantum super-computer, but a sub-quantum conscious anticipator.
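The notion of a tipping point between chaotic and ordered dynamics can be illustrated with the standard logistic map, a textbook toy system not discussed in the text: its Lyapunov exponent is negative in the ordered regime, positive in the chaotic regime, and passes through zero at the edge of chaos (the parameter values below are the conventional ones):

```python
import math

def lyapunov(r, x0=0.4, n_transient=500, n_iter=2000):
    """Average log|f'(x)| along an orbit of the logistic map x -> r*x*(1-x):
    negative => ordered, positive => chaotic, near zero => edge of chaos."""
    x = x0
    for _ in range(n_transient):       # discard the transient
        x = r * x * (1 - x)
    total = 0.0
    for _ in range(n_iter):
        x = r * x * (1 - x)
        total += math.log(abs(r * (1 - 2 * x)))
    return total / n_iter

print(lyapunov(3.2))       # ordered (period-2) regime: negative
print(lyapunov(3.9))       # chaotic regime: positive
print(lyapunov(3.569946))  # near the period-doubling accumulation point: ~0
```

A system tuned near the accumulation point sits at the boundary where arbitrarily small perturbations decide between qualitatively different futures, which is the sense of "quantum sensitive tipping point" used above.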

 

This results in subjective experience manifesting as a globally omnipresent "stream of consciousness" everything, everywhere all at once (Gillett 2023), forming an anticipatory “quantum of the present”, utilising memory and the environmental context, but also entangled with the future, in a manner similar to an integral transform, also supported by extensive predictive coding in the brain, acting as a contextual environmental filter, to enable intuitive conscious volitional choices, favouring the survival of the whole organism, by anticipating acute existential threats, crises and opportunities and in turn applying subjective conscious volition in our decision-making and behaviour, to facilitate our biological survival over evolutionary time scales.  By contrast, rational processes are tied more to established classical causal factors compatible with cerebral cognition. The subjective interaction can thus move seamlessly from uncertainty-dominant anticipative intuition, to a context-dominant predictively cognitive response. Because brain dynamics do not involve independent identically-distributed IID measurements, as their contexts are continually changing, there is no evidence that they converge to the classical. Nor can it be established that the quantum universe, or biological brain, is causally closed to subjectivity.

 

This overall picture coincides with our empirical subjective experience of conscious intuitive decision-making, resulting in veridical perception of our volitional action in intentional behaviour. For this purpose, SEC asserts that there are two scientific empirical avenues, (a) verified objective observation and (b) affirmed subjective experience, consistent with the etymology and current meaning of "empirical" as "based on, concerned with, or verifiable by observation, or experience, rather than theory or pure logic" (Oxford Languages). Given the fact that the overwhelming opinion of sane people is that we have conscious empirical intentional ability to affect our fates and this is the basis of criminal and corporate law on intent, this has a very high level of empirical validation, which even scientific physicalists, in all honesty, need to concede is the case. Indeed SEC is the only type of cosmology that respects our empirical experience without invoking some form of illusionism that our subjective experience of active agency is a fallacious deception.

 

Symbiotic Existential Cosmology asserts that subjective conscious intuition in decision-making and volition can influence the ongoing brain state, in situations where there is quantum-derived environmental uncertainty, as is critical for our survival, and that the brain state accompanying subjective conscious volition over the physical universe, occurs only when the physical brain dynamic is in a self-organised critical state of physical phase transition at the edge of chaos, between a more chaotic and a more ordered regime, which is itself at a quantum uncertain unstable tipping point. This avoids causal conflict between subjective volition and a determined ongoing physical brain state. It is achieved by minimising quantum measurement in favour of a higher degree of anticipatory phase entanglement than the surrounding universe, consistent with minimal collapse to the classical. The efficacy of conscious anticipation in survival in the wild is essential to explain why subjective conscious volition has an evolutionary advantage, over non-conscious computational processing and has thus been conserved by evolution, giving conscious biological brains a critical advantage over AI, using phase transitional entanglement that is not accessible to objective physical interrogation.

 

While Symbiotic Existential Cosmology is agnostic to specific quantum interpretations, it finds the approach of transactional super-causality helpful as an explanation of quantum anticipation, through a trans-causal future-past handshaking phase transition, from a plasma-like interaction of contingent offer and confirmation waves, to a solid-like set of real interactions between emitters and absorbers, which, like subjective experience, stands outside physical space-time.

 

Furthermore, this is achieved because the brain also uses phase modulation of its waveforms, evidenced in electro- and magneto-encephalograms, through hand-shaking phase coupling of individual action potentials with the continuous tissue potentials. Because brain waves in the EEG share the features of macroscopic radio waves, they share features of entanglement of their fields that we witness in coherent laser light as well. The modulated phase coherence of global brain dynamics in dynamic feedback with more decentralised partially decoherent local processing, invokes the "spotlight" of coherent conscious attention, with a sub-conscious periphery, common to theories such as global neuronal workspace theory. Symbiotic Existential Cosmology asserts that these phase modulations are not just analogous to the foundation concept of quantum uncertainty through wave beats, but constitute a unique form of quantum measurement by the brain of its own internal dynamical states, distinct from using the Born probability interpretation by destructive measurement, as in standard experimental physics, such as the cat paradox experiment, while keeping destructive measurement minimal to facilitate quantum anticipation.
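The wave-beat picture of uncertainty invoked here can be sketched numerically (an illustrative toy example, not drawn from the text; the 10 Hz and 11 Hz frequencies are arbitrary choices in the EEG alpha range): superposing two nearby frequencies produces an envelope at the difference frequency, so resolving a frequency difference Δf requires an observation time Δt of order 1/Δf.

```python
import math

def beat_signal(t, f1=10.0, f2=11.0):
    """Superposition of two unit-amplitude waves of nearby frequency.
    The envelope modulates at the difference frequency |f2 - f1|."""
    return math.cos(2 * math.pi * f1 * t) + math.cos(2 * math.pi * f2 * t)

# With Δf = 1 Hz the envelope has a node every 0.5 s, so pinning down the
# frequency difference needs an observation window Δt of order 1/Δf.
print(round(beat_signal(0.0), 3))  # 2.0: waves in phase, envelope maximum
print(beat_signal(0.5))            # ≈ 0: waves out of phase, envelope node
```

This Δf·Δt ≳ 1 trade-off is the classical wave relation that, scaled by Planck's constant, becomes energy-time uncertainty.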

 

In turn, this enables the subjectively conscious brain to collapse the probability multiverse of future quantum uncertainties, to unfold the line of emerging world history that we invoke together, for better or worse, through our choices in the uncertain "karmic" quantum environment, as live, intentional, behaviourally autonomous agents, thus inheriting personal conscious responsibility for our actions, particularly in relation to the biosphere, its survival, and our own survival as a species.

 

The Cosmological Axiom of Primal Subjectivity

We put this into precise formulation. Taking into account that the existence of primary subjectivity is an undecidable proposition from the physical point of view, in the sense of Gödel, but is empirically certain from the experiential point of view, we come to the following:

 

(1) We start on home ground, i.e. with human conscious volition, where we can clearly confirm both aspects of reality – subjectively experiential and objectively physical.

(2) We then affirm as empirical experience, that we have efficacy of subjective conscious volition over the physical universe, manifest in every intentional act we make, as is necessary for our behavioural survival – as evidenced by my consciously typing this passage, and that this is in manifest conflict with pure physicalism asserting the contrary.

(3) We now apply Occam's razor, not just on parsimony, but on the categorical inability of pure materialism, using only physical processes, which can only be empirically observed, to deal with subjective consciousness, because this can only be empirically experienced and is private to observation. This leads to the intractability of the hard problem of consciousness. Extended to the physicalist blanket denial of conscious physical volition, which we perceive veridically in our conscious perception of our enacted intent, this becomes the extended hard problem. Classical neuroscience accepts consciousness only as an epiphenomenon – an internal model of reality constructed by the brain – but denies volition, as a delusion perpetrated by evolution to evoke the spectre of intentional behaviour.

(4) We then scrutinise the physical aspect and realise we cannot empirically confirm classical causal closure of the universe in brain dynamics because: (a) the dynamics is fractal to the quantum-molecular level, so non-IID processes don't necessarily converge to the classical, and (b) experimental verification is impossible, because we would need essentially to trace the neurodynamics of every neuron, or a very good statistical sample, when the relevant dynamics is at the unstable edge of chaos and so is quantum sensitive. Neither can we prove consciousness causes brain states leading to volition, because consciousness can only be experienced and not observed, so it is a genuinely undecidable proposition physically.

(5) This sets up the status of "Does subjective conscious volition have efficacy over the universe?" as an empirically undecidable cosmological proposition from the physical perspective, in the sense of Gödel. From the experiential perspective, however, it is an empirical certainty.

(6) We therefore add a single minimal cosmological axiom, to state the affirmative proposition – “Subjective conscious volition has efficacy over the physical universe”. We also need to bear in mind that a physicalist could make the counter proposition that it doesn’t, and both could in principle be explored, like the continuum hypothesis in mathematics – that there is no infinite cardinality between those of the countable rationals and uncountable reals [1].

(7) Rescaling the primal axiom: We now need to scale this axiom all the way down to the quantum level, because a cosmological axiom means that the universe has some form of primal subjective volition, so we need to investigate its possible forms. The only way we can do this, just as with one another's human consciousness, which we cannot directly experience, is to make deductions from the physical effects of volition – in humans, organisms, amoebo-flagellates, prokaryotes, biogenesis, butterfly-effect systems and quanta.

(8) We immediately find that quantum reality has two complementary processes:

(a) Quantum consciousness: The wild wave function which contains both past and future implicit “information” under special relativity, corresponding to the quantum-physical experiential interface of primal subjectivity.

(b) Quantum volition: Collapse of the wave function, which violates causality, and in which the normalised wave power (probability) space leaves the quantum total free will as to where it is measured, constituting the quantum-physical volitional interface of primal subjectivity.

(9) Primal cosmological subjectivity: This means that subjectivity is a primal complement to the quantum universe and that the cosmos as a whole is an interactive process between physical objectivity and experiential subjectivity, arising at the cosmic origin.

(10) Two potentially valid cosmologies from the physical perspective, but only one from the experiential perspective:

As with any undecidable proposition, from the objective perspective, pure physicalists can, on the one hand, continue to contend that the quantum has no consciousness or free will and that uncertainty is "random", cite the lack of an obvious bias violating the Born interpretation, and develop that approach, thus claiming volition is a self-fulfilling delusion of our internal model of reality. But Symbiotic Existential Cosmology can validly argue that uncertainty could be due to a complex quasi-random process, e.g. a special-relativistic transactional collapse process, over which by default the quantum, by virtue of its wave function context, does have "conscious" free will, allowing us and the diversity of life also to be subjectively conscious and affect the world around us, unlike the pure materialist model.

(11) Symbiotic Existential Cosmology thus shows that CA, in the form of subjective conscious volition, is undecidable physically, although it is certain experientially and necessary for subjective conscious survival.

 

An Accolade to Cathy Reason

 

The first part of the answer to the cosmological axiom CA – that subjective consciousness is a cosmological complement to the physical universe – was due to Cathy Reason. In 2016 she proved that it is impossible to establish the certainty of consciousness through a physical process. So CA could be false, or it could be unprovable. In 2019 and 2021, with Kushal Shah, she proved the no-supervenience theorem – that the operation of self-certainty of consciousness is inconsistent with the properties possible in any meaningful definition of a physical system – effectively showing CA is certain experientially. A formal proof appears in Reason (2023).

 

1 The Cosmological Problem of Consciousness

 

The human existential condition consists of a complementary paradox. To survive in the world at large, we have to accept the external reality of the physical universe, but we gain our entire knowledge of the very existence of the physical universe through our conscious experiences, which are entirely subjective and are complemented by other experiences in dreams and visions which also sometimes have the genuine reality value we describe as veridical. The universe is thus in a fundamental sense a description of our consensual subjective experiences of it, experienced from birth to death, entirely and only through the relentless unfolding spectre of subjective consciousness.

  


Fig 71: (a) Cosmic evolution of the universe (WMAP, King 2020b). Life has existed on Earth for a third of the universe’s 13.7-billion-year lifetime. (b) Symmetry-breaking of a unified superforce into the four wave-particle forces of nature – colour, weak, electromagnetic and gravity – with the first three forming the standard model, and, with the weak-field limit of general relativity (Wilczek 2015), comprising the core model. (c) Quantum uncertainty defined through wave coherence beats. (d) Schrödinger cat experiment. Schrödinger famously said “The total number of minds in the universe is one”, prefiguring Huxley’s notion of the mind at large used as this monograph’s basis for cosmological symbiosis. Quantum theory says the cat is in both live and dead states with probability 1/2, but the observer finds the cat alive or dead, suggesting the conscious observer collapses the superimposed wave function. (e) Feynman diagrams in special relativistic quantum field theories involve both retarded (usual) and advanced (time-backwards) solutions, because the Lorentz energy transformations ensuring the atom bomb works have positive and negative energy solutions. Thus electron scattering (iv) is the same as positron creation-annihilation [26]. Each successive order of Feynman diagram has a contribution reduced by a factor of α, the fine structure constant. (f) Double slit interference shows a photon emitted as a particle passes through both slits as a wave before being absorbed on the photographic plate as a particle. The trajectory of an individual particle is quantum uncertain, but the statistical distribution confirms the particles have passed through the slits as waves. (g) Cosmology of conscious mental states (King 2021a). Kitten’s Cradle – a love song.
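The statistical point in (f) can be sketched numerically: each detection is particle-like, but sampling arrival positions from the Born-rule density |ψ₁ + ψ₂|² reproduces the wave interference fringes. The slit separation, wavelength and screen distance below are illustrative assumptions, not values from the text.

```python
import numpy as np

# Illustrative (assumed) parameters: slit separation d, wavelength lam,
# screen distance L, all in metres.
d, lam, L = 1e-4, 500e-9, 1.0
x = np.linspace(-0.02, 0.02, 2001)     # screen positions

# Each photon traverses both slits as a wave: the two slit amplitudes add,
# giving the familiar cos^2 fringe profile (envelope omitted for simplicity).
phase = np.pi * d * x / (lam * L)
intensity = np.cos(phase) ** 2

# Individual detections are quantum uncertain: sample arrival points from the
# Born-rule probability density.
rng = np.random.default_rng(0)
p = intensity / intensity.sum()
hits = rng.choice(x, size=100_000, p=p)

# The statistical distribution of many particle-like detections reproduces
# the wave interference fringes.
hist, _ = np.histogram(hits, bins=200, range=(-0.02, 0.02))
```

Plotting `hist` shows bright fringes separated by near-empty minima, even though every individual photon arrived at a single point.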

 

The Physical Viewpoint

 

The religious anthropocentric view of the universe was overthrown when Copernicus, in 1543, deduced that the Earth, instead of being at the centre of the cosmos, rotated, along with the other solar system planets, in an orbit around the Sun. Galileo defended heliocentrism based on his astronomical observations of 1609. By 1615, Galileo's writings on heliocentrism had been submitted to the Roman Inquisition, which concluded that heliocentrism was foolish, absurd, and heretical, since it contradicted Holy Scripture. He was tried by the Inquisition, found "vehemently suspect of heresy", and forced to recant. He spent the rest of his life under house arrest.

 

The Copernican revolution in turn resulted in the rise of classical materialism, defined by the laws of motion of Isaac Newton (1642 – 1726), after he watched the apple fall under gravity – despite Newton himself being a devout Arian Christian who used scripture to predict the apocalypse. The classically causal Newtonian world view, and the mathematical determinism of Pierre Simon Laplace (1749 – 1827) – the view that if the current state of the world were known with precision, it could be computed for any time in the future or the past – came to define the universe as a classical mechanism in the ensuing waves of scientific discovery in classical physics, chemistry and molecular biology, climaxing with the decoding of the human genome and validating the much more ancient atomic theory of Democritus (c. 460 – c. 370 BC). The classically causal universe of Newton and Laplace has since been fundamentally compromised by the discovery of quantum uncertainty and the "spooky" features of quantum entanglement.

 

In counterposition to materialism, George Berkeley (1685 – 1753) is famous for his philosophical position of "immaterialism", which denies the existence of material substance and instead contends that familiar objects like tables and chairs are ideas perceived by our minds and, as a result, cannot exist without being perceived. Berkeley argued against Isaac Newton's doctrine of absolute space, time and motion in a precursor to the views of Mach and Einstein. Interest in Berkeley's work increased after 1945 because he had tackled many of the issues of paramount interest to 20th century philosophy, such as perception and language.

 

The core reason for the incredible technological success of science is not the assumption of macroscopic causality, but the fact that quantum particles come in two kinds. The integral spin particles, called bosons, such as photons, can all cohere together, as in a laser, and thus make forces and radiation, but the half-integer spin particles, called fermions, such as protons and electrons, which can only congregate in pairs of complementary spin, are incompressible and thus form matter, inducing a universal fractal complexity via the non-linearity of the electromagnetic and other quantum forces. The fermionic quantum structures are small, discrete and divisible, so the material world can be analysed in great detail. Given the quantum universe and the fact that brain processes are highly uncertain, given changing contexts and unstable tipping points at the edge of chaos, objective science has no evidential basis to claim the brain is causally closed, and thus to falsely conclude that we have no agency to apply our subjective consciousness to affect the physical world around us. By agency here I mean full subjective conscious volition, not just objective causal functionality (Brizio & Tirassa 2016, Moreno & Mossio 2015), or even autopoiesis (Maturana & Varela 1972).

 

The nature of conscious experience remains the most challenging enigma in the scientific description of reality, to the extent that we not only do not have a credible theory of how it comes about, but we don’t even have an idea of what shape or form such a theory might take. While physical cosmology is an objective quest, leading to theories of grand unification, in which symmetry-breaking of a common super-force led to the four forces of nature in a big-bang origin of the universe, accompanied by an inflationary beginning, the nature of conscious experience is entirely subjective, so the foundations of objective replication do not apply. Yet for every person alive today, subjective conscious experiences constitute the totality of all our experience of reality, and the physical reality of the world around us is established through subjective consciousness, as a consensual experience of conscious participants.

 

Erwin Schrödinger: “Consciousness cannot be accounted for in physical terms. For consciousness is absolutely fundamental.”

 

Arthur Eddington: “The stuff of the world is mind stuff.”

 

J. B. S. Haldane: “We do not find obvious evidence of life or mind in so-called inert matter...; but if the scientific point of view is correct, we shall ultimately find them, at least in rudimentary form, all through the universe.”

 

Julian Huxley: “Mind or something of the nature of mind must exist throughout the entire universe. This is, I believe, the truth.”

 

Freeman Dyson: “Mind is already inherent in every electron, and the processes of human consciousness differ only in degree and not in kind from the processes of choice between quantum states which we call ‘chance’ when they are made by electrons.”

 

David Bohm: “It is implied that, in some sense, a rudimentary consciousness is present even at the level of particle physics.”

 

Max Planck: “I regard consciousness as fundamental. I regard matter as derivative from consciousness. We cannot get behind consciousness.”

Werner Heisenberg: “Is it utterly absurd to seek behind the ordering structures of this world a ‘consciousness’ whose ‘intentions’ were these very structures?”

 

Andrei Linde: “Will it not turn out, with the further development of science, that the study of the universe and the study of consciousness will be inseparably linked, and that ultimate progress in the one will be impossible without progress in the other?”

 

The hard problem of consciousness (Chalmers 1995) is the problem of explaining why and how we have phenomenal first-person subjective experiences, sometimes called “qualia”, that feel “like something”, and, more than this, evoke the entire panoply of all our experiences of the world around us. Chalmers comments (201): “Why should physical processing give rise to a rich inner life at all? It seems objectively unreasonable that it should, and yet it does.” By comparison, we assume there are no such experiences for inanimate things such as a computer, or a sophisticated form of artificial intelligence. Two extensions of the hard problem are the hard problem extended to volition, and the hard manifestation problem: how is experience manifested in waking perception, dreams and entheogenic visions?

 

Fig 71b: The hard problem's explanatory gap – an uncrossable abyss.

 

Although there have been significant strides in electrodynamic (EEG and MEG), chemodynamic (fMRI) and connectome imaging of active conscious brain states, we still have no idea of how such collective brain states evoke the subjective experience of consciousness to form the internal model of reality we call the conscious mind, or for that matter volitional will. In Jerry Fodor’s words: “Nobody has the slightest idea how anything material could be conscious. Nobody even knows what it would be like to have the slightest idea about how anything material could be conscious.”

 

Nevertheless opinions about the hard problem and whether consciousness has any role in either perception or decision-making remain controversial and unresolved. The hard problem is contrasted with easy, functionally definable problems, such as explaining how the brain integrates information, categorises and discriminates environmental stimuli, or focuses attention. Subjective experience does not seem to fit this explanatory model. Reductionist materialists, who are common in the brain sciences, particularly in the light of the purely computational world views induced by artificial intelligence, see consciousness and the hard problem as issues to be eliminated by solving the easy problems. Daniel Dennett (2005) for example argues that, on reflection, consciousness is functionally definable and hence can be corralled into the objective description. Arguments against the reductionist position often cite that there is an explanatory gap (Levine 1983) between the physical and the phenomenal. This is also linked to the conceivability argument, whether one can conceive of a micro-physical “zombie” version of a human that is identical except that it lacks conscious experiences. This, according to most philosophers (Howell & Alter 2009), indicates that physicalism, which holds that consciousness is itself a physical phenomenon with solely physical properties, is false.

 

David Chalmers (1995), speaking in terms of the hard problem, comments: “The only form of interactionist dualism that has seemed even remotely tenable in the contemporary picture is one that exploits certain properties of quantum mechanics.” He then goes on to cite (a) John Eccles’ (1986) proposal of consciousness providing the extra information required to deal with quantum uncertainty, thus not interrupting causally deterministic processes, if they occur, in brain processing, and (b) the possible involvement of consciousness in “collapse of the wave function” in quantum measurement. We next discuss both of these loopholes in the causal deterministic description.

 

Two threads in our cosmological description indicate how the complementary subjective and objective perspectives on reality might be unified. Firstly, the measurement problem in the quantum universe appears to involve interaction with a conscious observer. While the quantum description involves an overlapping superposition of wave functions, the Schrödinger cat paradox, fig 71(d), shows that when we submit a cat in a box to a quantum measurement leading to a 50% probability of a particle detection smashing a flask of cyanide and killing the cat, the conscious observer who opens the box does not find a superposition of live and dead cats, but one cat, either stone dead or very alive. This leads to the idea that subjective consciousness plays a critical role in collapsing the superimposed wave functions into a single component, as noted by John von Neumann, who stated that collapse could occur at any point between the precipitating quantum event and the conscious observer, and by others (Greenstein 1988, Stapp 1995, 2007).
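The cat statistics can be sketched as a toy Born-rule measurement: every single observation yields one definite outcome and collapses the state onto it, and only the frequencies over many runs reflect the underlying 1/2 amplitudes. This is a minimal illustrative sketch of the measurement postulate, not a model of how collapse physically occurs.

```python
import numpy as np

rng = np.random.default_rng(1)

# Cat state: equal superposition of |alive> (index 0) and |dead> (index 1).
psi = np.array([1.0, 1.0]) / np.sqrt(2)

def measure(psi):
    """Born-rule measurement: pick one branch with probability |amplitude|^2
    and collapse the state onto that basis vector."""
    probs = np.abs(psi) ** 2
    k = rng.choice(len(psi), p=probs)
    collapsed = np.zeros_like(psi)
    collapsed[k] = 1.0
    return k, collapsed

outcomes = [measure(psi)[0] for _ in range(10_000)]
alive_fraction = outcomes.count(0) / len(outcomes)
# Each observation finds one definite cat, never a superposition;
# only the long-run statistics reveal the 50/50 amplitudes.
```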

 

Wigner & Margenau (1967) used a variant of the cat paradox to argue for conscious involvement. In this version, we have a box containing a conscious friend who reports the result later, leading to a paradox about when the collapse occurs – i.e. when the friend observes it or when Wigner does. Wigner discounted the observer being in a superposition themselves, as this would be preceded by being in a state of effective “suspended animation”. As this paradox does not occur if the friend is a non-conscious mechanistic computer, it suggests consciousness is pivotal. Henry Stapp (2009) in "Mind, Matter and Quantum Mechanics" has an overview of the more standard theories.

 

While systems as large as 2000 atoms (Fein et al. 2019), gramicidin A1, a linear antibiotic polypeptide composed of 15 amino acids (Shayeghi et al. 2020), and even a deep-frozen tardigrade (Lee et al. 2021) have been found in superpositions of states resulting in interference fringes, indicating that the human body or brain could be represented as a quantum superposition, it is unclear that subjective experience can. More recent experiments involving two interconnected Wigner’s friend laboratories also suggest the quantum description “cannot consistently describe the use of itself” (Frauchiger & Renner 2018). An experimental realisation (Proietti et al. 2019) implies that there is no such thing as objective reality, as quantum mechanics allows two observers to experience different, conflicting realities. These paradoxes underlie the veridical fact that conscious observers make and experience a single course of history, while the physical universe of quantum mechanics, if collapse does not occur, is a multiverse of probability worlds, as in Everett’s many-worlds description. This postulates split observers, each unaware of the existence of the other, but the universe they are then looking at seems inexorably split into multiverses, which we do not experience.

 

In this context Barrett (1999) presents a variety of possible solutions involving many worlds and many or one mind and in the words of Saunders (2001) in review has resonance with existential cosmology:

 

Barrett’s tentatively favoured solution [is] the one also developed by Squires (1990). It is a one-world dualistic theory, with the usual double-standard of all the mentalistic approaches: whilst the physics is precisely described in mathematical terms, although it concerns nothing that we ever actually observe, the mental – in the Squires-Barrett case a single collective mentality – is imprecisely described in non-mathematical terms, despite the fact that it contains everything under empirical control.

 

In quantum entanglement, two or more particles can be prepared within the same wave function. For example, in a laser, an existing wave function can capture more and more photons in phase with a standing wave between two mirrors by stimulated emission from the excited medium. In other experiments, pairs of particles can be generated inside a single wave function. For example, an excited calcium atom with two outer electrons can emit a blue and a yellow photon with complementary polarisations in a spin-0 to spin-0 transition, as shown in fig 72(8). In this situation, when we sample the polarisation of one photon, the other instantaneously has the complementary polarisation, even when the two detections take place without there being time for any information to pass between the detectors at the speed of light. John Bell (1964) proved that the results predicted by standard quantum mechanics when the two detectors were set at varying angles violated the constraints defined by local Einsteinian causality, implying quantum non-locality, decried by Einstein, Podolsky and Rosen (1935) as incomplete:

 

In a complete theory there is an element corresponding to each element of reality. A sufficient condition for the reality of a physical quantity is the possibility of predicting it with certainty, without disturbing the system. In quantum mechanics in the case of two physical quantities described by non-commuting operators, the knowledge of one precludes the knowledge of the other. Then either (1) the description of reality as given by a wave function in quantum mechanics is not complete, or (2) these two quantities cannot have simultaneous reality. Consideration of the problem of making predictions concerning a system on the basis of measurements made on another system that had previously interacted with it leads to the result that if (1) is false then (2) is also false. One is thus led to conclude that the description of reality as given by a wave function is not complete.

 

The experimental verification was confirmed by Alain Aspect and others (1982) over space-like intervals using rapidly time-varying analysers (fig 72(8)), earning a Nobel prize in 2022. There are other more complex forms of entanglement, such as the W and GHZ states (Greenberger, Horne & Zeilinger 1989, Mermin 1990), used in quantum computing (Coecke et al. 2021) – types of entangled state that involve at least three subsystems (particle states, or qubits). Extremely non-classical properties of the GHZ state have been observed.
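The Bell violation underlying the Aspect experiment can be checked arithmetically. For polarisation-entangled photon pairs of this kind, quantum mechanics predicts the correlation E(a, b) = cos 2(a − b) between analysers at angles a and b, and at the standard CHSH settings the combination S reaches 2√2 ≈ 2.83, above the local-realist bound of 2. This is a sketch of the standard quantum prediction, not of the experiment's raw data.

```python
import numpy as np

def E(a, b):
    """Quantum polarisation correlation for an entangled photon pair:
    E = cos 2(a - b), angles in radians."""
    return np.cos(2 * (a - b))

# Standard CHSH analyser settings: 0, 45, 22.5 and 67.5 degrees.
a, ap = 0.0, np.pi / 4
b, bp = np.pi / 8, 3 * np.pi / 8

# CHSH combination: local hidden-variable theories bound S <= 2,
# while quantum mechanics gives 2*sqrt(2) at these angles.
S = abs(E(a, b) - E(a, bp) + E(ap, b) + E(ap, bp))
```

Each individual correlation here is ±1/√2 ≈ ±0.707; it is the combination across the four angle pairs that exceeds the classical limit.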

 

Albert Einstein dubbed the phenomenon "spooky action at a distance" and proposed that the effect actually came about because the particles contained hidden variables, or instructions, which had already predetermined their states. This doesn't mean that quantum mechanics is incomplete, superficial or wrong, but that a hidden variable theory we do not have direct access to within uncertainty may provide the complete description.

 

Fig 71c: Cancellation of off-diagonal entangled components in decoherence by damping, modelling extraneous collisions (Zurek 2003).

 

Other notions of collapse  (see King 2020b for details) involve interaction with third-party quanta and the world on classical scales. All forms of quantum entanglement (Aspect et al. 1982), or its broader phase generalisation, quantum discord (Ollivier & Zurek 2002) involve decoherence (Zurek 1991, 2003), because the system has become coupled to other wave-particles. But these just correspond to further entanglements, not collapse. Recoherence (Bouchard et al. 2015) can reverse decoherence, supporting the notion that all non-conscious physical structures can exist in superpositions. Another notion is quantum darwinism (Zurek 2009), in which some states survive because they are especially robust in the face of decoherence. Spontaneous collapse (Ghirardi, Rimini, & Weber 1986) has a similar artificiality to Zurek’s original decoherence model, in that both include an extra factor in the Schrödinger equations forcing collapse.
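Zurek-style decoherence, as in fig 71c, can be sketched on a single qubit: coupling to the environment exponentially damps the off-diagonal density-matrix elements while leaving the populations untouched, so a superposition comes to look like a classical mixture. The single damping factor below is a toy stand-in for the full system-environment dynamics.

```python
import numpy as np

# Density matrix of an equal superposition: the off-diagonal elements
# carry the quantum coherence.
rho0 = 0.5 * np.array([[1, 1],
                       [1, 1]], dtype=complex)

def decohere(rho, t, tau=1.0):
    """Toy decoherence model (after Zurek): off-diagonal elements decay
    as exp(-t/tau); diagonal populations are untouched."""
    damp = np.exp(-t / tau)
    out = rho.copy()
    out[0, 1] *= damp
    out[1, 0] *= damp
    return out

rho_late = decohere(rho0, t=20.0)
# The state now looks like a classical mixture: populations 0.5, 0.5,
# coherences numerically vanished. But as the text notes, this is further
# entanglement with the environment, not collapse to a single outcome.
```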

 

Penrose's objective-collapse theory postulates the existence of an objective threshold governing the collapse of quantum states, related to the difference in the spacetime curvature of these states in the universe's fine-scale structure. He suggested that at the Planck scale, curved spacetime is not continuous but discrete, and that each separated quantum superposition has its own piece of spacetime curvature, a blister in spacetime. Penrose suggests that gravity exerts a force on these spacetime blisters, which become unstable above the Planck scale and collapse to just one of the possible states. Atomic-level superpositions would require 10 million years to reach OR threshold, while an isolated 1 kilogram object would reach OR threshold in 10⁻³⁷ s. Objects somewhere between these two scales could collapse on a timescale relevant to neural processing. An essential feature of Penrose's theory is that the choice of states when objective reduction occurs is selected neither randomly nor algorithmically. Rather, states are selected by a "non-computable" influence embedded in the Planck scale of spacetime geometry, which in "The Emperor's New Mind" (Penrose 1989) he associated with conscious human reasoning.
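The scale dependence of Penrose's collapse time can be illustrated with the estimate τ = ħ/E_G. As a rough sketch – my own order-of-magnitude approximation, not Penrose's calculation – take E_G ≈ Gm²/d for a mass m superposed over two positions a distance d apart; the separations chosen below are illustrative assumptions.

```python
# Order-of-magnitude sketch of the objective reduction time tau = hbar / E_G,
# approximating (an assumption here) E_G ~ G m^2 / d for a mass m superposed
# over a separation d.
hbar = 1.054571817e-34   # reduced Planck constant, J s
G = 6.67430e-11          # gravitational constant, m^3 kg^-1 s^-2

def or_time(m, d):
    """Estimated collapse time (s) for mass m (kg) superposed over d (m)."""
    E_G = G * m ** 2 / d
    return hbar / E_G

t_kg = or_time(1.0, 1e-15)        # 1 kg object, nuclear-scale displacement
t_atom = or_time(1.7e-27, 1e-10)  # single proton over an atomic distance
# t_kg is fantastically short and t_atom fantastically long, reproducing the
# qualitative gap between the two regimes quoted in the text; mesoscopic
# systems fall in between, on timescales conceivably relevant to the brain.
```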

 

Spontaneous random collapse models (GRW: Ghirardi, Rimini, & Weber 1986) include an extra factor complementing the Schrödinger equation, forcing random collapse over a finite time. Both Penrose’s gravitationally induced collapse and variants of GRW theory, such as continuous spontaneous localisation (CSL), which involves gradual, continuous collapse rather than a sudden jump, have recently been partially eliminated by experiments derived from neutrino research, which have failed to detect the very faint x-ray signals that the local jitter of physical collapse models implies.

 

In the approach of SED (de la Peña et al. 2020), the stochastic aspect corresponds to the effects of the collapse process in the classical limit, but here consciousness can be represented by the zero point field (ZPF) (Keppler 2018). Finally we have pilot waves [27] (Bohm 1952), which identify particles as having real positions, thus not requiring wave function collapse. Images of such trajectories can be seen in weak quantum measurement and surreal Bohmian trajectories in fig 57. However, the pilot wave theory has a fundamental problem with particle creation, because stipulating a position for the particle doesn't work if a high energy photon undergoes a creation event and splits into an electron and a positron, where we have a potentially infinite number of possible trajectories of the ensuing particles. Attempts to solve this (Duerr et al. 2004, 2005, Nikolic 2010) involve two problems: (1) the configuration space splits into a potentially infinite number of configurations, and (2) a stochastic process is arbitrarily introduced for creation.

 

David Albert (1992), in "Quantum Mechanics and Experience", cites objections to virtually all descriptions of collapse of the wave function. In terms of von Neumann's original definition, which allowed for collapse to take place any point from the initial event to the conscious observation of it, what he concluded was that there must be two fundamental laws about how the states of quantum-mechanical systems evolve:

 

Without measurements all physical systems invariably evolve in accordance with the dynamical equations of motion, but when there are measurements going on, the states of the measured systems evolve in accordance with the postulate of collapse. What these laws actually amount to will depend on the precise meaning of the word measurement. And it happens that the word measurement simply doesn't have any absolutely precise meaning in ordinary language; and it happens (moreover) that von Neumann didn't make any attempt to cook up a meaning for it, either.

 

However, if collapse always occurs at the last possible moment, as in Wigner's (1961) view:

 

All physical objects almost always evolve in strict accordance with the dynamical equations of motion. But every now and then, in the course of some such dynamical evolutions, the brain of a sentient being may enter a state wherein states connected with various different conscious experiences are superposed; and at such moments, the mind connected with that brain  opens its inner eye, and gazes on that brain, and that causes the entire system (brain, measuring instrument, measured system, everything) to collapse, with the usual quantum-mechanical probabilities, onto one or another of those states; and then the eye closes, and everything proceeds again in accordance with the dynamical equations of motion.

 

We thus end up with either purely physical systems, which evolve in accordance with the dynamical equations of motion, or conscious systems, which do contain sentient observers. These systems evolve in accordance with the more complicated rules described above. ... So in order to know precisely how things physically behave, we need to know precisely what is conscious and what isn't. What this "theory" predicts will hinge on the precise meaning of the word conscious; and that word simply doesn't have any absolutely precise meaning in ordinary language; and Wigner didn't make any attempt to make up a meaning for it; and so all this doesn't end up amounting to a genuine physical theory either.

 

But he also discounts related theories relating to macroscopic processes:

 

All physical objects almost always evolve in strict accordance with the dynamical equations of motion. But every now and then, in the course of some such dynamical evolutions (in the course of measurements, for example), it comes to pass that two macroscopically different conditions of a certain system (two different orientations of a pointer, say) get superposed, and at that point, as a matter of fundamental physical law, the state of the entire system collapses, with the usual quantum-mechanical probabilities, onto one or another of those macroscopically different states. But then we again have two sorts of systems – microscopic and macroscopic – and again we don't precisely know what macroscopic is.

 

He even goes to the trouble of showing that no obvious empirical test can distinguish between such variations, including decoherence, e.g. from air molecules, and the GRW theory, where other problems arise about the nature and consequences of collapse on future evolution.

 

Tipler (2012, 2014), using quantum operators, shows that, in the many worlds interpretation, quantum non-locality ceases to exist, because the first measurement of an entangled pair, e.g. spin up or down, splits the multiverse into two deterministic branches, in each of which the state of the second particle is determined to be complementary, so no nonlocal "spooky action at a distance" needs to, or can, take place.

 

This also leads to a fully-deterministic multiverse:

 

Like the electrons, and like the measuring apparatus, we are also split when we read the result of the measurement, and once again our own split follows the initial electron entanglement. Thus quantum nonlocality does not exist. It is only an illusion caused by a refusal to apply quantum mechanics to the macroworld, in particular to ourselves.

 

Many-Worlds quantum mechanics, like classical mechanics, is completely deterministic. So the observers have only the illusion of being free to choose the direction of spin measurement. However, we know by experience that there are universes of the multiverse in which the spins are measured in the orthogonal directions, and indeed universes in which the pair of directions are at angles θ at many values between 0 and π/2 radians. To obtain the Bell Theorem quantum prediction in this more general case, where there will be a certain fraction with spin in one direction, and the remaining fraction in the other, requires using Everett’s assumption that the square of the modulus of the wave function measures the density of universes in the multiverse.

 

There is a fundamental problem with Tipler’s explanation. The observer is split into one that observes the cat alive and another that observes it dead. So everything is split. Nelson did and didn’t win the Battle of Copenhagen by turning his blind eye, so Nelson is also both a live and a dead Schrödinger cat. The same goes for every idiosyncratic conscious decision we make, so history never gets made. Free will ceases to exist and quantum measurement does not collapse the wave function. So we have a multiverse of multiverses with no history at all. Hence no future either.

 

This simply isn’t in any way how the real universe manifests. The cat IS alive or dead. The universe is superficially classical because so many wave functions have collapsed or are about to collapse that the quantum universe is in a dynamical state of creating superpositions and collapsing nearly all of them, as the course of history gets made. This edge of chaos dynamic between collapse and wave superposition allows free will to exist within the cubic centimetre of quantum uncertainty.  We are alive. Subjective conscious experience is alive and history is being unfolded as I type.

 

Nevertheless the implications of the argument are quite profound in that both a fully quantum multiverse and a classical universe are causally deterministic systems, showing that the capacity of subjectively conscious free-will to throw a spanner in the works comes from the interface we experience between these two deterministic extremes.

 

Transactional Interpretations: Another key interpretation which extends the Feynman description to real particle exchanges is the transactional interpretation TI (Cramer 1986, King 1989, Kastner 2012, Cramer & Mead 2020) where real quanta are also described as a hand-shaking between retarded (usual time direction) and advanced (retrocausal) waves from the absorber, called “offer” and “confirmation” waves.  TI arose from the Wheeler-Feynman (WF) time-symmetric theory of classical electrodynamics (Wheeler and Feynman 1945, 1949, Feynman 1965), which proposed that radiation is a time-symmetric process, in which a charge emits a field in the form of half-retarded, half-advanced solutions to the wave equation, and the response of absorbers combines with that primary field to create a radiative process that transfers energy from an emitter to an absorber.

 


Fig 72: (1) In TI a transaction is established by crossed phase advanced and retarded waves. (2) The superposition of these between the emitter and absorber results in a real quantum exchanged between emitter P and future absorber Q. (3) The origin of the positive energy arrow of time, envisaged as a phase reflecting boundary at the cosmic origin (Cramer 1983). (4) Pair-splitting entanglement can be explained by transactional handshaking at the common emitter. (5) The treatment of the quantum field in PTI is explained by assigning a different status to the internal virtual particle transactions (Kastner 2012). (6) A real energy emission in which time has broken symmetry involves multiple transactions between the emitter and many potential absorbers, with collapse modelled as a symmetry-breaking, in which the physical weight functions as the probability of that particular process as it “competes” with other possible processes (Kastner 2014). (7) Space-time emerging from a transaction (Kastner 2021a). (8) Entanglement experiment with time-varying analysers (Aspect et al. 1982). A calcium atom emits two entangled photons with complementary polarisation, each of which travels to one of two detectors oscillating so rapidly there is no time to send information at the speed of light between the two detector pairs. (9) The blue and yellow photon transitions. (10) The quantum correlations (blue) exceed Bell’s limits for communication between the two at the speed of light. The experiment is referred to as EPR, after Einstein, Podolsky and Rosen, who first suggested the problem of spooky action at a distance.

 

The only non-paradoxical way entanglement and its collapse can be realised physically, especially in the case of space-like separated detectors, as in fig 72(8), is this:

 

(A) The closer detector, say No. 1, destructively collapses the entanglement at (1), sending a non-entangled advanced confirmation wave back in time to the source.

(B) The arrival of the advanced wave at the source collapses the wave right at source, so that the retarded wave from the source is no longer entangled although it was prepared as entangled by the experimenter. This IS instantaneous but entirely local.

(C) The retarded offer wave from the Bell experiment is no longer actually entangled and is sent at light speed to detector 2 where if it is detected it immediately has complementary polarisation to 1.

(D) If detector 1 does not record a photon at the given angle no confirmation wave is sent back to the source, so no coincidence measurement can be made.

(E) The emitted retarded wave will remain entangled unless photon 1 is or has been absorbed by another atom but then no coincidence counts will be made either.

(F) The process is relativistically covariant. In an experimenter frame if relative motion results in detector 2 sampling first, the roles of 1 and 2 become exchanged and the same explanation follows.

 

Every detection at (2) either collapses the entangled wave, or the already partially collapsed single particle wave function as in (B): If no detection has happened at 1, or anywhere else, the retarded source wave is still entangled, and detector 2 may sample it and collapse the entanglement. If a detection of photon 1 has happened elsewhere or at detector 1 the retarded source wave is no longer entangled, as in B above and then detector 2, if it samples photon 2, also collapses this non-entangled single particle wave function.

 

So there is no light-speed violating paradox, but there is a deeper paradox about advanced and retarded waves in space-time in the transactional principle. This, as far as I can see, gives the complete real-time account of how the universe actually deals with entanglement, as opposed to the fully collapsed statistical result seen by the experimenter, who then figures the case is already closed.

 

The standard account of the Bell theorem experiment, as in Fig 72(8), cannot explain how the universe actually does it, only that the statistical correlation agrees with the sinusoidal angular dependence of quantum reality and violates the Bell inequality. The experimenter is in a privileged position to overview the total data and can conclude this with no understanding of how an entangled wave function they prepared can arrive at detector 2 unentangled when photon 1 has already been absorbed.
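The sinusoidal quantum correlation and the Bell limit referred to above can be illustrated numerically. A minimal sketch, assuming ideal polarization-entangled photons and the standard CHSH combination; the analyser angles are the textbook optimal choice, not values taken from the Aspect experiment itself:

```python
import math

def corr(a, b):
    # Quantum coincidence correlation for polarization-entangled photons
    # with analysers at angles a and b (degrees): E = cos(2(a - b)).
    return math.cos(math.radians(2 * (a - b)))

# CHSH combination: any local hidden-variable model obeys |S| <= 2,
# while the quantum correlation reaches 2*sqrt(2) at these angles.
a, a2, b, b2 = 0.0, 45.0, 22.5, 67.5
S = corr(a, b) - corr(a, b2) + corr(a2, b) + corr(a2, b2)
print(round(S, 3))  # 2.828, exceeding the Bell limit of 2
```

The value 2√2 ≈ 2.828 corresponds to the quantum curve in fig 72(10) exceeding the classical bound of 2.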

 

Richard Feynman's (1965) Nobel Lecture "The Development of the Space-Time View of Quantum Electrodynamics" opened up the whole transactional idea of advanced and retarded waves nearly twenty years before Cramer (1983). It enshrines the very principle from before QED was completed as the most accurate physical theory ever.

 

The same applies to single-particle wave functions, where collapse on absorption paradoxically requires the wave function to go suddenly to zero even at space-like intervals from the emission and absorption loci; the advanced and retarded confirmation and offer waves of the transactional picture provide a mechanism for this. Quantum mechanics also allows events to happen with no definite causal order (Goswami et al. 2018).

 

As just noted, the process of wave function collapse has generally been considered to violate Lorentz relativistic invariance (Barrett 1999 pp 44-45):

 

The standard collapse theory, at least, really is incompatible with the theory of relativity in a perfectly straightforward way: the collapse dynamics is not Lorentz-covariant. When one finds an electron, for example, its wave function instantaneously goes to zero everywhere except where one found it. If this did not happen, then there would be a nonzero probability of finding the electron in two places at the same time in the measurement frame. The problem is that we cannot describe this process of the wave function going to zero almost everywhere simultaneously in a way that is compatible with relativity. In relativity there is a different standard of simultaneity for each inertial frame, but if one chooses a particular inertial frame in order to describe the collapse of the wave function, then one violates the requirement that all physical processes must be described in a frame-independent way.

 

Ruth Kastner  (2021a,b) elucidates the relativistic transactional interpretation, which claims to resolve this through causal sets (Sorkin 2003) invoking a special-relativistic theory encompassing both real particle exchange and collapse:

 

In formal terms, a causal set C is a finite, partially ordered set whose elements are subject to a binary relation that can be understood as precedence; the element on the left precedes that on the right. It has the following properties:

 

(i) transitivity: (∀ x, y, z ∈ C)(x ≺ y ≺ z ⇒ x ≺ z)
(ii) irreflexivity: (∀ x ∈ C)(x ⊀ x)
(iii) local finiteness: (∀ x, z ∈ C)(cardinality { y ∈ C | x ≺ y ≺ z } < ∞)

 

Properties (i) and (ii) assure that the set is acyclic, while (iii) assures that the set is discrete. These properties yield a directed structure that corresponds well to temporal becoming, which Sorkin describes as follows:

 

In Sorkin’s construct, one can then have a totally ordered subset of connected links (as defined above), constituting a chain. In the transactional process, we naturally get a parent/child relationship with every transaction, which defines a link. Each actualized transaction establishes three things: the emission event E, the absorption event A, and the invariant interval I(E,A) between them, which is defined by the transferred photon. Thus, the interval I(E,A) corresponds to a link. Since it is a photon that is transferred, every actualized transaction establishes a null interval, i.e., ds² = c²t² − r² = 0. The emission event E is the parent of the absorption event A (and A is the child of E).

 

A major advantage of the causal set approach as proposed by Sorkin and collaborators … is that it provides a fully covariant model of a growing spacetime. It is thus a counterexample to the usual claim (mentioned in the previous section) that a growing spacetime must violate Lorentz covariance. Specifically, Sorkin shows that if the events are added in a Poissonian manner, then no preferred frame emerges, and covariance is preserved (Sorkin 2003, p. 9).  In RTI, events are naturally added in a Poissonian manner, because transactions are fundamentally governed by decay rates (Kastner and Cramer, 2018).
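The causal-set axioms quoted above can be checked mechanically for a small finite set of events. A minimal sketch in Python; the event names and the precedence relation are invented for illustration:

```python
def is_causal_set(elements, prec):
    """Check Sorkin's axioms for a finite precedence relation.
    prec is a set of ordered pairs (x, y), meaning x precedes y."""
    # (i) transitivity: x < y and y < z imply x < z
    transitive = all((x, z) in prec
                     for (x, y) in prec
                     for (p, z) in prec if p == y)
    # (ii) irreflexivity: no element precedes itself
    #     (with transitivity, this makes the order acyclic)
    irreflexive = all((x, x) not in prec for x in elements)
    # (iii) local finiteness holds automatically for a finite set
    return transitive and irreflexive

# A chain of transactional events: emission E precedes absorption A,
# which precedes a later absorption B (names are invented).
events = {"E", "A", "B"}
order = {("E", "A"), ("A", "B"), ("E", "B")}
print(is_causal_set(events, order))                     # True
print(is_causal_set(events, {("E", "A"), ("A", "B")}))  # False: not transitive
```

Each actualised transaction contributes one link (a parent/child pair) to such a relation, and the growing set remains a valid causal set as links are added.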

 

Ruth Kastner comments in private communication in relation to her development of the transactional interpretation:

 

The main problem with the standard formulation of QM is that consciousness is brought in as a kind of 'band-aid' that does not really work to resolve the Schrödinger's Cat and Wigner's Friend paradoxes. The transactional picture, by way of its natural non-unitarity (collapse under well-quantified circumstances), resolves this problem and allows room for consciousness to play a role as the acausal/volitional influence that corresponds to efficacy (Kastner 2016). My version of TI, however, is ontologically different from Cramer’s and it also is fully relativistic (Kastner 2021a,b). For specifics on why many recent antirealist claims about the world as alleged implications of Wigner's Friend are not sustainable, see Kastner (2021c). In particular, standard decoherence does not yield measurement outcomes, so one really needs real non-unitarity in order to have correspondence with experience. I have also shown that the standard QM formulation, lacking real non-unitarity, is subject to fatal inconsistencies (Kastner 2019, 2021d). These inconsistencies appear to infect Everettian approaches as well.

 

Kastner (2011) explains the arrow of time as a foundational quantum symmetry-breaking:

 

Since the direction of positive energy transfer dictates the direction of change (the emitter loses energy and the absorber gains energy), and time is precisely the domain of change (or at least the construct we use to record our experience of change), it is the broken symmetry with respect to energy propagation that establishes the directionality or anisotropy of time. The reason for the "arrow of time" is that the symmetry of physical law must be broken: the actual breaks the symmetry of the potential. It is often viewed as a mystery that there are irreversible physical processes and that radiation diverges toward the future. The view presented herein is that, on the contrary, it would be more surprising if physical processes were reversible, because along with that reversibility we would have time-symmetric (isotropic) processes, which would fail to transfer energy, preclude change, and therefore render the whole notion of time meaningless.

 

Kastner is a possibilist who argues that offer and confirmation waves (OWs and CWs) are possibilities that are "real." She says that they are less real than actual empirically measurable events, but more real than an idea or concept in a person's mind. She suggests the alternate term "potentia," Aristotle's term that she found Heisenberg had cited. For Kastner, the possibilities are physically real as compared to merely conceptually possible ideas that are consistent with physical law. But she says the "possibilities" described by offer and confirmation waves are "sub-empirical" and pre-spatiotemporal (i.e., they have not shown up as actual in spacetime). She calls these "incipient transactions." She calls for a new metaphysical category to describe "not quite actual... possibilities."

 

Kastner (2012, 2014b) sets out the basis for extending the possibilist transactional interpretation, or PTI, to the relativistic domain in the relativistic transactional interpretation, or RTI. This modified version proposes that offer and confirmation waves (OW and CW) exist in a sub-empirical, pre-spacetime realm (PST) of possibilities, and that it is actualised transactions which establish empirical spatiotemporal events. PTI proposes a growing universe picture, in which actualised transactions are the processes by which spacetime events are created from a substratum of quantum possibilities. The latter are taken as the entities described by quantum states (and their advanced confirmations); and, at a subtler relativistic level, the virtual quanta.

 

The basic idea is that offers and confirmations are spontaneously elevated forms of virtual quanta, where the probability of elevation is given by the decay rate for the process in question. In the direct action picture of PTI, an excited atom decays because one of the virtual photon exchanges ongoing between the excited electron and an external absorber (e.g. electron in a ground state atom) is spontaneously transformed into a photon offer wave that generates a confirming response. The probability for this occurrence is the product of the QED coupling constant α and the associated transition probability. In quantum field theory terms, the offer wave corresponds to a "free photon" or excited state of the field, instantiating a Fock space state (Kastner 2014b).

 

In contrast with standard QFT, where the amplitudes over all interactions are added and then squared under the Born rule, according to PTI the absorption of the offer wave generates a confirmation (the response of the absorber), an advanced field. This field can be consistently reinterpreted as a retarded field from the vantage point of an observer composed of positive energy and experiencing events in a forward temporal direction. The product of the offer (represented by the amplitude) and the confirmation (represented by the amplitude's complex conjugate) corresponds to the Born rule.
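The closing statement can be made concrete in a few lines. A toy sketch, with invented amplitudes, showing the Born probability emerging as the product of the offer amplitude and its confirmation, the complex conjugate:

```python
# Toy normalised two-state offer wave (amplitudes invented for illustration).
psi = [complex(0.5, 0.5), complex(0.5, -0.5)]

# Each Born probability is the offer amplitude times its confirmation,
# the amplitude's complex conjugate: p_i = psi_i * conj(psi_i) = |psi_i|^2.
probs = [(a * a.conjugate()).real for a in psi]
print(probs)       # [0.5, 0.5]
print(sum(probs))  # 1.0
```

The product ψ·ψ* is automatically real and non-negative, and for a normalised state the probabilities sum to 1, which is the content of the Born rule.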

  

Kastner (2014a, 2021c,d) deconstructs decoherence as well as quantum Darwinism, refuting claims that the emergence of classicality proceeds in an observer-independent manner in a unitary-only dynamics. She notes that quantum Darwinism holds that the emergence of classicality is not dependent on any inputs from observers, yet it is the classical experiences of those observers that the decoherence program seeks to explain from first principles:

 

“in the Everettian picture, everything is always coherently entangled, so pure states must be viewed as a fiction -- but that means that it is also fiction that the putative 'environmental systems' are all randomly phased. In helping themselves to this phase randomness, Everettian decoherentists have effectively assumed what they are trying to prove: macroscopic classicality only emerges in this picture because a classical, non-quantum-correlated environment was illegitimately put in by hand from the beginning. Without that unjustified presupposition, there would be no vanishing of the off-diagonal terms”

 

She extends this to an uncanny observation concerning the Everett view:

 

"That is, MWI does not explain why Schrödinger's Cat is to be viewed as 'alive' in one world and 'dead' in another, as opposed to 'alive + dead' in one world and 'alive − dead' in the other."

 

Kastner (2016a) notes that the symmetry-breaking of the advanced waves provides an alternative explanation to von Neumann’s citing of the consciousness of the observer in quantum measurement:

 

Von Neumann noted that this Process 1 transformation is acausal, nonunitary, and irreversible, yet he was unable to explain it in physical terms. He himself spoke of this transition as dependent on an observing consciousness. However, one need not view the measurement process as observer-dependent. … The process of collapse precipitated in this way by incipient transactions [competing probability projection operator weightings of the] absorber response(s) can be understood as a form of spontaneous symmetry breaking.

 

Kastner & Cramer (2018) confirm this picture:

And since not all competing possibilities can be actualized, symmetry must be broken at the spacetime level of actualized events. The latter is the physical correlate of non-unitary quantum state reduction.

 

However, in Kastner (2016b), Ruth considers observer participation as integral, rejecting two specific critiques of libertarian, agent-causal free will: (i) that it must be anomic or antiscientific; and (ii) that it must be causally detached from the choosing agent. She asserts that notwithstanding the Born rule, quantum theory may constitute precisely the sort of theory required for a nomic grounding of libertarian free will.

 

Kastner cites Freeman Dyson’s comment rejecting epiphenomenalism:

 

I think our consciousness is not just a passive epiphenomenon carried along by the chemical events in our brains, but is an active agent forcing the molecular complexes to make choices between one quantum state and another. In other words, mind is already inherent in every electron, and the processes of human consciousness differ only in degree but not in kind from the processes of choice between quantum states which we call "chance" when they are made by electrons.

 

Kastner then proposes, not just a panpsychic quantum reality but a pan-volitional basis for it:

 

Considering the elementary constituents of matter as imbued with even the minutest propensity for volition would, at least in principle, allow the possibility of a natural emergence of increasingly efficacious agent volition as the organisms composed by them became more complex, culminating in a human being. And allowing for volitional causal agency to enter, in principle, at the quantum level would resolve a very puzzling aspect of the indeterminacy of the quantum laws: the seeming violation of Curie's Principle, in which an outcome occurs for no reason at all. This suggests that, rather than bearing against free will, the quantum laws could be the ideal nomic setting for agent-causal free will.

 

Kastner, Kauffman & Epperson (2018) formalise the relationship between potentialities and actualities into a modification of Descartes' res cogitans (purely mental substance) and res extensa (material substance) into res potentiae and res extensa, comprising the potential and actual aspects of ontological reality. Unlike Cartesian dualism, these are not separable or distinct, but are manifest in all situations where the potential becomes actual, particularly in the process of quantum measurement in PTI, citing McMullin (1984) on the limits of imagination of the res potentiae:

 

… imaginability must not be made the test for ontology. The realist claim is that the scientist is discovering the structures of the world; it is not required in addition that these structures be imaginable in the categories of the macroworld.

 

They justify this by noting that human evolutionary survival has depended on dealing with the actual, so the potential may not be imaginable in our conscious frame of reference. However, one can note that the strong current of animism in human cultural history suggests a strong degree of focus on the potential, and on its capacity to become actual in hidden unpredictable sources of accident or misfortune. In addition to just such unexpected real-world examples, they note the applicability of this to a multiplicity of quantum phenomena:

 

Thus, we propose that quantum mechanics evinces a reality that entails both actualities (res extensa) and potentia (res potentia), wherein the latter are as ontologically significant as the former, and not merely an epistemic abstraction as in classical mechanics. On this proposal, quantum mechanics IS about what exists in the world; but what exists comprises both possibles and actuals. Thus, while John Bell's insistence on 'beables' as opposed to just 'observables' constituted a laudable return to realism about quantum theory in the face of growing instrumentalism, he too fell into the default actualism assumption; i.e., he assumed that to 'be' meant 'to be actual', so that his 'beables' were assumed to be actual but unknown hidden variables.

 

What the EPR experiments reveal is that while there is, indeed, no measurable nonlocal, efficient causal influence between A and B, there is a measurable, nonlocal probability conditionalization between A and B that always takes the form of an asymmetrical internal relation. For example, given the outcome at A, the outcome at B is internally related to that outcome. This is manifest as a probability conditionalization of the potential outcomes at B by the actual outcome at A.

 

Nonlocal correlations such as those of the EPR entanglement experiment below can thus be understood as a natural, mutually constrained relationship between the kinds of spacetime actualities that can result from a given possibility – which itself is not a spacetime entity. She quotes Anton Zeilinger (2016):

 

…it appears that on the level of measurements of properties of members of an entangled ensemble, quantum physics is oblivious to space and time.

 

Kastner (2021b), considers how the spacetime manifold emerges from a quantum substratum through the transactional process (fig 72(6)), in which spacetime events and their connections are established. The usual notion of a background spacetime is replaced by the quantum substratum, comprising quantum systems with non-vanishing rest mass, corresponding to internal periodicities that function as internal clocks defining proper times and in turn, inertial frames that are not themselves aspects of the spacetime manifold.

 

Three years after John Cramer published the transactional interpretation, I wrote a highly speculative paper, "Dual-time Supercausality" (King 1989, Vannini 2006), based on John's description, which anticipates many of the same conclusions that emerge in Ruth Kastner's far more comprehensive development. Summing up the main conclusions we have:

 

(1) Symmetric-Time: This mode of action of time involves a mutual space-time relationship between emitter and absorber. Symmetric-time determines which, out of the ensemble of possibilities predicted by the probability interpretation of quantum mechanics, is the actual one chosen. Such a description forms a type of hidden-variable theory explaining the selection of unique reduction events from the probability distribution. We will call this bi-directional causality transcausality.

(2) Directed-time: Real quantum interaction is dominated by retarded-time, positive-energy particles. The selection of temporal direction is a consequence of symmetry-breaking, resulting from energy polarization, rather than time being an independent parameter. The causal effects of multi-particle ensembles result from this dominance of retarded radiation, as an aspect of symmetry-breaking.

 

Dual-time is thus a theory of the interaction of two temporal modes, one time-symmetric, which selects unique events from ensembles, and the other time-directed, which governs the consistent retarded action of the ensembles. These are not contradictory. Each on its own forms an incomplete description. Temporal causality is the macroscopic approximation of this dual theory under the correspondence principle. The probability interpretation governs the incompleteness of directed-causality to specify unique evolution in terms of initial conditions.

 

Quantum-consciousness has two complementary attributes, sentience and intent:

(a) Sentience represents the capacity to utilise the information in the advanced absorber waves and is implicitly transcausal in its basis. Because the advanced components of symmetric-time cannot be causally defined in terms of directed-time, sentience is complementary to physically-defined constraints.

(b) Intent represents the capacity to determine a unique outcome from the collection of such absorber waves, and represents the selection of one of many potential histories. Intent addresses the two issues of free-will and the principle of choice in one answer – free-will necessarily involves the capacity to select one out of many contingent histories and the principle of choice manifests the essential nature of free-will at the physical level.

 

The transactional interpretation presents a unique view of cosmology, involving an implicit space-time anticipation in which a real exchange, e.g. a photon emitted by a light bulb and absorbed on a photographic plate or elsewhere, or a Bell-type entanglement experiment with two detectors, is split into an offer wave from the emitter and retro-causal confirmation waves from the prospective absorbers that, after the transaction is completed, interfere to form the real photon confined between the emission and absorption vertices. We also experience these retro-causal effects in weak quantum measurement and delayed-choice experiments.

 

To get a full picture of this process, we need to consider the electromagnetic field as a whole, in which these same absorbers are also receiving offer waves from other emitters, so we get a network of virtual emitter-absorber pairs.

 

There is a fundamental symmetry between creation and annihilation, but there is a sting in the measurement tail. When we do an interference experiment with real positive energy photons, we know each photon came from the small region within the light source, but the locations of the potential absorbers affected by the wave function are spread across the world at large. The photon could be absorbed anywhere on the photographic plate, or before it if it hits dust in the apparatus, or after if it goes right through the plate and out of the apparatus altogether, just as radioactive particles escape the exponential potential barrier of the nucleus. The problem concerning wave function collapse is: which absorber?

 

In all these cases, once a potential absorber becomes real, all the other potential absorbers have zero probability of absorption, so the change occurs instantaneously across space-time to other prospective absorbers, relative to the successful one. This is the root problem of quantum measurement. Special relativistic quantum field theory is time symmetric, so wave function collapse is most closely resolved in the transactional interpretation, where the real wave function is neither the emitter's spreading linear retarded wave, nor any of the prospective absorbers' linear advanced waves, but the result of a phase transition, in which all these hypothetical offer and confirmation waves resolve into one or more real wave functions linking creation and annihilation vertices. It is the nature of this phase transition and its non-linearity which holds the keys to life, the universe and everything, and potentially the nature of time itself.

 

The entire notion in Bell experiments that communication between absorbers must be impossibly instantaneous, invoking super-luminal communication, is unnecessary, because the retrocausal confirmation wave perfectly cancels the time elapse of the offer wave. If detector 1 samples first, its confirmation goes back to the source photon-splitter, arriving at the same time as the original emission, and the offer wave collapses to a single photon emission to detector 2, which arrives there at exactly the time when 2 should have sampled the complementary polarisation, with this information as required. No superluminal interaction between absorbers occurs, even though it looks as if the process was instantaneous and would have to involve infinite velocity. This looks instantaneous without contradiction because of the time-elapse cancellations, but if we follow it as a process, it is some kind of non-linear phase transition from a "plasma" state of offers and confirmations collapsing into a set of real photons with phonon-like real excitations connecting them.

 

In Symbiotic Existential Cosmology this is envisaged as allowing a form of prescience, because the collapse has implicit information about the future state of the universe in which the absorbers exist. This may appear logically paradoxical, but no classical information is transferred, so there is no inconsistency. Modelled this way, the collapse appears to happen outside space-time, but actually it is instantaneous, so dual-time is just a core part of the heuristic to understand the non-linear process. This depends on transactional collapse being a non-random hidden-variable theory in which non-local correlations of the universal wave function manifest as a complex system during collapse, in a way that looks deceptively like randomness because it is a complex chaotic ergodic process.

 

My perspective is that subjective conscious physical volition has to confer an evolutionary advantage, or it would be evolutionarily unstable and be discarded by neutral evolution, but this advantage has to involve real-time anticipation of existential threats to survival. So I favour the transactional interpretation, in which a real particle, e.g. a photon, is a superposition of a causal "offer wave" from an emitter complemented by potential retrocausal "confirmation waves" from absorbers. This is actually necessary, because the emission wave is a linear Schrödinger wave that spreads, but a real photon is an excitation between an emitter and an absorber, more like a simple harmonic phonon, non-linear in space with two foci, as in fig 73.

 

Fig 73: A transaction modelled by a phase transition from a virtual plasma to a real interactive solid spanning space-time, in which the wave functions have now become like the harmonic phonons of solid state physics.

 

I remain intrigued by the transactional principle because I am convinced that subjective consciousness is a successful form of quantum prediction in space-time that has enabled single-celled eucaryotes to conquer the biosphere before there were brains, which have evolved based on intimately-coupled societies of such cells (neurons and neuroglia) now forming the neural networks neuroscience tries to understand in classical causal terms.

 

The eucaryote endo-symbiosis in this view marks a unique discrete topological transformation of the membrane to unfold attentive sentient consciousness invoking the second stage of cosmological becoming that ends up being us wondering what the hell is going on here? This is the foundation of emergence as quantum cosmology and it explains why we have the confounding existential dilemma we do have and why it all comes back to biospheric symbiosis being the centre of the cyclone of survival for us as a climax species.

 

The full picture of a transaction process is a population of real, or potential emitters in excited states and potential ground state absorbers, with their offer and confirmation wave functions extending throughout space time, as in the Feynman representation. As the transaction proceeds, this network undergoes a phase transition from a “virtual plasma” state to a “real solid”, in which the excited emitters are all paired with actual absorbers in the emitters’ future at later points in space-time. This phase transition occurs across space-time – i.e. transcausally – covering both space-like and time-like intervals. It has many properties of a phase transition from plasma to solid, with a difference – the strongest interactions don’t win, except with a probability determined by the relative power of the emitter’s wave amplitudes at the prospective absorption event. This guarantees the transaction conforms to the emitter’s probability distribution, and the absorber's one as well. If a prospective absorber has already interacted with another emitter, it will not appear in the transaction network at this space-time point, so ceases to be part of the collective transaction. Once this is the case, all other prospective absorbers of a given emitter scattered throughout space-time, both in the absorber’s past and future, immediately have zero probability of absorption from any of the emitters, and no causal conflict or time loop arises.
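The competition described above can be caricatured as a weighted selection. A hedged sketch, with invented absorber names and amplitudes: one absorber is actualised with probability proportional to the wave power |ψ|² at its site, after which all other prospective absorbers drop to zero absorption probability:

```python
import random

def resolve_transaction(offers, rng=random):
    """Toy model of transactional collapse: select ONE absorber with
    probability given by the relative wave power |psi|^2 at each site.
    offers maps absorber names to complex offer-wave amplitudes."""
    weights = [abs(a) ** 2 for a in offers.values()]
    chosen = rng.choices(list(offers), weights=weights, k=1)[0]
    # Once one absorber is actualised, all others have zero probability.
    return {name: (1.0 if name == chosen else 0.0) for name in offers}

# Invented amplitudes at three prospective absorbers.
offers = {"Q1": 0.8, "Q2": 0.5j, "Q3": complex(0.3, 0.2)}
outcome = resolve_transaction(offers)
print(outcome)  # exactly one absorber at 1.0, the rest at 0.0
```

This illustrates only the statistics of the outcome; the text's point is that the selection process itself is spread across the whole network of prospective absorbers, not computed at the emitter.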

 

Here is the problem. The transition is laterally across the whole of space-time, not along the arrow of time in either direction, so cannot exist within space-time and really needs a dual time parameter. This is why my 1989 paper was entitled “dual-time super-causality”.

 

Now this doesn’t mean a transaction is just a random process. Rather, it is a kind of super-selection theory, in which the probability of absorption at an absorber conforms to the wave probability but the decision making process is spread between all the prospective absorbers distributed across space-time, not just an emitter-based random wave power normalised probability. The process is implicitly retro-causal in the same way weak quantum measurement and Wheeler’s delayed choice experiments are.

 

The fact that in the cat paradox experiment we see only a live or dead cat, and not a superposition, doesn't mean, however, that conscious observers witness only a classical world view. There are plenty of real phenomena in which we do observe quantum superpositions, including quantum erasure and quantum recoherence, where entangled particles can be distinguished, collapsing the entanglement, and then re-entangled. A laser consists of excited atoms above the ground state, which can be triggered to coherently emit indistinguishable photons in a superposition of in-phase states, stimulated by a standing wave caught between pairs of reflecting mirrors, so we see the bright laser light and know it is a massive superposed set of entangled photons.

 

In all forms of quantum entanglement experiment, when the state of one of the pair is detected, the informational outcome is "transmitted" instantaneously to the other detector, so that the other particle's state is definitively complementary, even though the detectors can be separated by space-like as well as time-like intervals; this transmission cannot be used to relay classical information. This again is explained by the transactional interpretation, because the confirmation wave of the first detector of the pair is transmitted retro-causally back to the source event where the splitting occurred, and then causally out to the second detector, where it now has obligately complementary spin or polarisation when detection occurs.

 

What the transactional interpretation does provide is a real collapse process, in which the universe is neither stranded in an Everett probability multiverse nor in a fully collapsed classical state, but can be anywhere in between, depending on which agents are doing the measuring in a given theory. Nor is collapse necessarily random and thus meaningless: it is a space-time-spanning non-linear phase transition, involving bidirectional hand-shaking between past and future. The absorbers are all in an emitter’s future, so there is a musical-chairs dance happening in the future, and those candidates may also be absorbers of other emitters, and so on, so one can’t determine the ultimate boundary conditions of the problem. Somehow the “collapse”, which admittedly violates retarded causality, results in one future choice. This means there is no prohibition on its being resolved by the future affecting the outcome, because the actual choice has no relation to classical causality.

 

The only requirement is that individual observations are asymptotic to the Born probability interpretation, normalised by the wave function power φ·φ*, but this could arise from a variety of complex trans-deterministic quasi-random processes, in which multiple entanglements generate effective statistical noise while having a basis in an explicit hidden variable theory. The reason for the Born asymptote could thus simply be that the non-linear phase transition of the transaction, like the cosmic wave function of the universe, potentially involves everything there is: the ultimate pseudo-random optimisation process concealing a predictive hidden variable theory. One should point out that the near-universal assumption that the probability interpretation implies pure randomness normalised by the wave power carries as much onus of scientific proof as does any hidden variable theory, such as transactional collapse.
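As a concrete anchor for the Born rule itself, here is a minimal numerical sketch (my own illustration, not part of the cited argument) of the probability φ·φ* and the asymptotic convergence of observed frequencies to it:

```python
import numpy as np

# Toy illustration: the Born rule assigns outcome probabilities
# |phi_i|^2 = phi_i * conj(phi_i) to a normalised state vector,
# and empirical frequencies approach them only asymptotically.
rng = np.random.default_rng(0)

phi = np.array([1 + 1j, 2, 1j])          # unnormalised amplitudes
phi = phi / np.linalg.norm(phi)          # normalise

probs = (phi * phi.conj()).real          # Born probabilities phi . phi*
assert np.isclose(probs.sum(), 1.0)

# Simulated "collapses": sample outcomes, compare frequencies to probs.
samples = rng.choice(len(phi), size=100_000, p=probs)
freqs = np.bincount(samples, minlength=len(phi)) / len(samples)
print(probs, freqs)   # frequencies converge on the Born probabilities
```

Whatever process underlies the individual outcomes, hidden-variable or otherwise, it need only reproduce this asymptote.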

 

Hidden variable theories assert that there is a process underlying quantum uncertainty, which is by default assumed to be “random”, but the onus of scientific proof lies as much with establishing such a source of “pure” randomness in the universe as it does with finding a succinct hidden variable theory transcending examples like the pilot wave theory in a full explanation. The transcausal aspects of transactional quantum collapse may make such a naked theory impossible to establish, meaning that both the hidden TOE and the assumed randomness become undecidable propositions, which intuition can penetrate empirically but logical proof cannot.

 

The transactional universe is also one in which subjective conscious volition and meaning can become manifest in cosmic evolution, with the universe in a state of dynamic ramification and collapse of quantum superpositions. The key point here is that subjective conscious volition needs to have an anticipatory property in its own right, independent of brain mechanisms such as attention processes; otherwise, even if we do have free will, it would be neutral to natural selection and would not have been selected for, all the way from the founding eucaryotes to Homo sapiens. The transactional interpretation, by involving future absorbers in the collapse process, provides just such an anticipatory feature.

 

It is one thing to have free will; it is another to use free will for survival on the basis of (conscious) prediction, or anticipation. Our conscious brains are striving to be predictive, to the extent that we are subject to flash-lag perceptual illusions in which perceptual processes attempt, sometimes incorrectly, to predict the path of rapidly moving objects (Eagleman & Sejnowski 2000), so the question is pivotal. Anticipating future threats and opportunities is key to how we evolved as conscious organisms, and this is pivotal over short immediate time scales, like the snake’s or tiger’s strike which we survive. Anticipating reality in the present is precisely what subjective consciousness is here to do.

 

The hardest problem of consciousness is thus that, to be conserved by natural selection, subjective consciousness (a) has to be volitional, i.e. affect the world physically so as to be subject to natural selection, and (b) has to be predictive as well. Free will without predictivity is neutral to evolution, just like random behaviour, and will not be selected for. If we were dealing with classical reality, we could claim this is merely a computational requirement, but why then do we have subjective experience at all? Why not just recursive predictive attention processes with no subjectivity?

 

Here is where the correspondence between sensitive dynamic instability at tipping points and quantum uncertainty comes into the picture. We know biology, and particularly brain function, is a dynamically unstable process, with sensitive instabilities that are fractal down to the quantum level of ion channels, enzyme molecules whose active sites are enhanced by quantum tunnelling, and the quantum parallelism of molecular folding and interactive dynamics. We also know that brain dynamics operating close to the edge of chaos converge to dynamic crisis during critical decision-making uncertainties that have no obvious computational, cognitive, or reasoned disposition. We also know that at these points sensitive dependence on existing conditions, and other processes such as stochastic resonance, can allow effects at the micro level, approaching quanta, to affect the outcome of global brain states.

 

And those with any rational insight can see that, for both theoretical and experimental reasons, demonstrating classical causal closure of the universe in the context of brain dynamics is an unachievable quest. Notwithstanding Libet’s attempt, there is no technological way to experimentally verify that the brain is causally closed, and the claim flies in the face of the fractal molecular nature of biological processes at the quantum level.

 

Nevertheless we can understand that subjective conscious volition cannot enter into causal conflict with brain processes that have already established an effective computational outcome, as happens when we reach a prevailing reasoned conclusion. Free will is thus effectively restricted to situations where the environmental circumstances are uncertain, not effectively computable, or consciously perceived to be anything but certain.

 

This in turn means that the key role of free will is not applying it to rationally or emotionally foregone conclusions but to environmental and strategic uncertainties, especially involving other conscious agents whose outcomes become part of quantum uncertainty itself.

 

The natural conclusion is that conscious free will has been conserved by evolution because it provides an evolutionary advantage in anticipating root uncertainties in the quantum universe, and only these, including environmental and contextual uncertainties which are themselves products of quantum uncertainty amplified by unstable processes in the molecular universe such as quantum kinetic billiards. This seems almost repugnantly counter-intuitive, because we tend to associate quantum uncertainty and the vagaries of fate with randomness, but that association is no more scientifically established than causal closure of the universe in the context of brain function. All the major events of history that are not foregone conclusions result from conscious free will applied to uncertainty, such as Nelson turning his blind eye to the telescope in the eventually successful Battle of Copenhagen.

 

So when we turn to the role of subjective conscious volition in quantum uncertainty, the question comes down not just to opening the box of Schrödinger’s cat, but to anticipating uncertain events more often than random chance would predict in real-life situations.

 

That is where the transactional approach comes into its own: while the future at the time of casting the emission die is an indeterminate set of potential absorbers, the retro-causal information contained in the transaction implicitly reveals which future absorbers are actually able to absorb the real emitted quantum, and hence information about the real state of the future universe, not just its probabilities at emission. The transaction therefore carries additional implicit “encoded” information about the actual future state of the universe and what its possibilities are, which can be critical for survival in natural selection.

 

Just as the “transmission” of a detection to the other detector in an entanglement experiment cannot be used to transfer classical information faster than the speed of light, the same will apply to quantum transactions, but this doesn’t mean they are random or have no anticipatory value, just that they cannot be used for causal deduction.

 

Because the "holistic" nature of conscious awareness is an extension to brain dynamics of the global unstable excitatory dynamics of individual eucaryote cells, a key aspect of subjective consciousness may be that it becomes sensitive to the wave-particle properties of quantum transactions with the natural environment through cellular quantum sentience, involving sensitivity to quantum modes, including the photons, phonons and molecular orbital effects constituting cellular vision, audition and olfaction. Expanded into brain processes, this cellular quantum dynamics then becomes integral to the binding of consciousness into a coherent whole.

 

We can thus view neurodynamics as a fully quantum process, in the most exotic quantum material in the universe, in which the wave aspects consist of parallel excitation modes representing the competing possibilities of response to environmental uncertainties. If there is an open-and-shut case on logical or tactical grounds, one mode will win out, pretty much in the manner of Edelman's (1987) neural Darwinism or Dennett's (1991) multiple drafts. In terms of quantum evolution, the non-conscious processes form overlapping wave functions, proceeding according to deterministic Schrödinger solutions (von Neumann type 2 processes), but in situations where subjective consciousness becomes critical to making an intuitive decision, the brain dynamic approaches an unstable tipping point, in which system uncertainty becomes pivotal (represented in the instability of global states, which are in turn sensitive to fractal scales of instability down to the molecular level). Subjective consciousness then intervenes, causing an intuitive decision through a (type 1 von Neumann) process of wave function collapse of the superimposed modes.

 

From the inside, this feels like and IS a choice of "free will", aka subjective conscious volition over the physical universe. From the outside, it looks like collapse of an uncertain brain process to one of its eigenfunction states, which then becomes apparent. There is a very deep mystery in this process, because the physical process looks and remains uncertain and indeterminate, but from inside, in complete contradiction, it looks and feels like the exercise of intentional will determining future physical outcomes. So in a fundamental way it is like a Schrödinger cat experiment in which the cat survives more often than not, i.e. we survive. That is a really confounding issue at the very nub of what conscious existence is about, and why SEC has the cosmological axiom of subjectivity to resolve it, because it is a fundamental cosmological paradox otherwise. So we end up with the ultimate paradox of consciousness: how can we not only predict future outcomes that are quantum uncertain, but capitalise on the ones that promote our survival, i.e. throw a live cat more often than chance would dictate?

 

This is the same dilemma that SEC addresses in primal subjectivity, and it also appears in Cathy Reason's theorem: from the physical point of view, causal closure of the brain is an undecidable proposition, because we can't physically prove conscious will has physical effect, but neither can we prove causal closure of the (classical) universe. On the other hand, as the theorem intimates, conscious self-certainty implies we know we changed the universe: certainty of will as well as certainty of self. So the subjective perspective is certain and the objective perspective is undecidable. In exactly the same way, the cat paradox outcome is uncertain and can't be hijacked physically, but the autonomous intentional will used to tip the uncertain brain state has confidence of overall efficacy. This is the key to consciousness, free will and survival in the jungle, when cognition stops dead because of all the other conscious agents rustling in the grass and threatening to strike, which are uncomputable because they too are conscious! It's also the key to Psi, but in a more marginal way, because Psi is trying to pass this ability back into the physical, where it drifts towards the probability interpretation. That's why I accept it, but don't abuse the siddhis by declaring them!

 

Consciousness is retained by evolution because it is a product of a Red Queen neurodynamic race between predators and prey, in a similar way to how sexuality has become a self-perpetuating genetic race between parasites and hosts, creating individual variation and thus avoiding boom-and-bust epidemics.

 

Cramer (2022) notes a possible verifiable source of advanced waves:

 

In the 1940s, young Richard Feynman and his PhD supervisor John Wheeler decided to take the advanced solution seriously and to use it to formulate a new electromagnetism, now called Wheeler-Feynman absorber theory (WF).  WF assumes that an oscillating electric charge produces advanced and retarded waves with equal strengths. However, when the retarded wave is subsequently absorbed (in the future), a cancellation occurs that erases all traces of the advanced waves and their time-backward “advanced effects.” WF gives results and predictions identical to those of conventional electromagnetic theory. However, if future retarded-wave absorption is somehow incomplete, WF suggests that this absorption deficiency might produce experimentally observable advanced effects.
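The symmetry Cramer describes is easy to verify: the classical wave equation accepts retarded and advanced solutions on an equal footing. A minimal numerical check of my own (not from Cramer or the Wheeler-Feynman papers), using an arbitrary smooth pulse:

```python
import numpy as np

# Both the retarded solution f(t - x/c) and the advanced solution
# f(t + x/c) satisfy the 1-D wave equation u_tt = c^2 u_xx, which is
# why Wheeler-Feynman theory must account for the advanced waves.
c = 2.0
f = lambda s: np.exp(-s**2)   # an arbitrary smooth pulse
h = 1e-4                      # finite-difference step

def residual(sign, x, t):
    """u_tt - c^2 u_xx for u(x,t) = f(t + sign*x/c), central differences."""
    u = lambda x, t: f(t + sign * x / c)
    u_tt = (u(x, t + h) - 2 * u(x, t) + u(x, t - h)) / h**2
    u_xx = (u(x + h, t) - 2 * u(x, t) + u(x - h, t)) / h**2
    return u_tt - c**2 * u_xx

for sign in (-1.0, +1.0):     # retarded (-) and advanced (+)
    assert abs(residual(sign, x=0.3, t=0.7)) < 1e-4
```

Conventional electromagnetism simply discards the advanced branch by fiat; Wheeler-Feynman absorber theory instead cancels it against future absorption.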

 

When Bajlo (2017) made measurements on cold, clear, dry days, observing as the Earth rotated and the antenna axis swept across the galactic center, where wave-absorption variations might occur, in a number of these measurements he observed strong advanced signals (6.94 to 26.5 standard deviations above noise) that arrived at the downstream antenna a time 2D/c before the main transmitted pulse signal. Variations in the advanced-signal amplitude as the antenna axis swept across the galactic center were also observed: the amplitude was reduced by up to 50% of the off-center maximum when pointed directly at the galactic center (where more absorption is expected). These results constitute a credible observation of advanced waves.

  

Fig 74: Wheeler's (1983) delayed choice experiment shows that, after light from a distant quasar has been gravitationally lensed around an intervening galaxy, different forms of measurement can determine it to have passed one or the other way around the galaxy, or in a superposition of both, depending on whether detection of one or other path, or an interference measurement, is made when the light reaches Earth. (b, c) An experimental implementation of Wheeler's idea along a satellite-ground interferometer extending for thousands of kilometres in space (Vedovato et al. 2017), using shutters on an orbiting satellite.

  

Superdeterminism: There is another interpretation of quantum reality called super-determinism (Hossenfelder & Palmer 2020), which has an intriguing relationship with retro-causality and can still admit free will, despite the seeming contradiction in the title. Bell's theorem assumes that the measurements performed at each detector can be chosen independently of each other and of the hidden variables that determine the measurement outcome: ρ(λ|a,b) = ρ(λ).

  

In a super-deterministic theory this relation is not fulfilled, ρ(λ|a,b) ≠ ρ(λ), because the hidden variables are correlated with the measurement settings. Since the choice of measurements and the hidden variable are predetermined, the results at one detector can depend on which measurement is done at the other without any need for information to travel faster than the speed of light. The assumption of statistical independence is sometimes referred to as the free choice or free will assumption, since its negation implies that human experimentalists are not free to choose which measurement to perform. But this is incorrect: what the outcome depends on are the actual measurements made. For every possible pair of measurements a, b there is a predefined trajectory, determined both by the particle emission and by the measurement in place at the time absorption takes place. Thus in general the experimenter still has the free will to choose a, b, or even to change the detector set-up, as in the Wheeler delayed choice experiment in fig 74, and science proceeds as usual, but the outcome depends on the actual measurements made. In principle, super-determinism is untestable, as the correlations can be postulated to have existed since the Big Bang, making the loophole impossible to eliminate. However it has an intimate relationship with the transactional interpretation and its implicit retro-causality, because the transaction includes the absorbing conditions, so the two are actually compatible.

 

Superdeterminism is distinct from theories which attempt to assert boundary conditions on the cosmic origin combined with time asymmetric laws about the cosmic wave function which would result in strong determinism, the notion that despite quantum uncertainty, the entire future of the universe is predetermined (Chen 2023).

 

Hossenfelder (2020) points out exactly how superdeterminism can violate statistical independence:

 

I here want to explain how the strangeness disappears if one is willing to accept that one of the assumptions we have made about quantum mechanics is not realized in nature: Statistical Independence. Loosely speaking, Statistical Independence means that the degrees of freedom of spatially separated systems can be considered uncorrelated, so in a superdeterministic model they are generically correlated, even in absence of a common past cause. The way that Statistical Independence makes its appearance in superdeterminism is that the probability distribution of the hidden variables given the detector settings ρ(λ|θ) is not independent of the detector settings, i.e. ρ(λ|θ) ≠ ρ(λ). What this means is that if an experimenter prepares a state for a measurement, then the outcome of the measurement will depend on the detector settings. The easiest way to think of this is considering that both the detector settings, θ, and the hidden variables, λ, enter the evolution law of the prepared state. As a consequence, θ and λ will generally be correlated at the time of measurement, even if they were uncorrelated at the time of preparation. Superdeterminism, then, means that the measurement settings are part of what determines the outcome of the time-evolution of the prepared state. What does it mean to violate Statistical Independence? It means that fundamentally everything in the universe is connected with everything else, if subtly so. You may be tempted to ask where these connections come from, but the whole point of superdeterminism is that this is just how nature is. It's one of the fundamental assumptions of the theory, or rather, you could say one drops the usual assumption that such connections are absent. The question for scientists to address is not why nature might choose to violate Statistical Independence, but merely whether the hypothesis that it is violated helps us to better describe observations.
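To make this concrete, here is a toy model of my own (not the Donadi & Hossenfelder model) in which the hidden variable λ is a pair of predetermined outcomes whose distribution is conditioned on the settings, so ρ(λ|a,b) ≠ ρ(λ). Once λ is drawn the model is local and deterministic, yet it reproduces the singlet correlation E(a,b) = −cos(a−b), which no statistically independent local model can:

```python
import numpy as np

# Toy "superdeterministic" model: the hidden-variable distribution
# depends on the detector settings (a, b), violating Statistical
# Independence, and is chosen to match the quantum joint probabilities.
rng = np.random.default_rng(1)

def superdeterministic_run(a, b, n=200_000):
    """lambda = predetermined outcome pair, conditioned on (a, b)."""
    p_same = 0.5 * (1 - np.cos(a - b))  # P(outcomes equal) for a singlet
    same = rng.random(n) < p_same
    A = rng.choice([-1, 1], size=n)      # Alice's outcome, uniform
    B = np.where(same, A, -A)            # Bob's outcome fixed by lambda
    return (A * B).mean()                # empirical correlation E(a, b)

a, b = 0.0, np.pi / 3
E = superdeterministic_run(a, b)
print(E, -np.cos(a - b))                 # both near -0.5
```

The "conspiracy" is entirely in the settings-dependent distribution of λ, exactly the move Hossenfelder describes.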

 

However, note that the "toy" superdeterministic hidden variable theory (Donadi & Hossenfelder 2022) uses “the master equation for one of the most common examples of decoherence, amplitude damping in a two-level system”. But decoherence is a theory in which an additional term is added to model the increasing probability of a quantum getting hit by another quantum, and it literally uses forced damping to suppress the entangled off-diagonal components of the wave function matrix.

 

Schreiber (1995) sums up the case for consciousness collapsing the wave function as follows:

 

“The rules of quantum mechanics are correct but there is only one system which may be treated with quantum mechanics, namely the entire material world. There exist external observers which cannot be treated within quantum mechanics, namely human (and perhaps animal) minds, which perform measurements on the brain causing wave function collapse.”

 

Henry Stapp’s (2001) comment is very pertinent to the cosmology I am propounding, because it implies the place where collapse occurs lies in the brain making quantum measurements of its own internal states:

 

From the point of view of the mathematics of quantum theory it makes no sense to treat a measuring device as intrinsically different from the collection of atomic constituents that make it up. A device is just another part of the physical universe... Moreover, the conscious thoughts of a human observer ought to be causally connected most directly and immediately to what is happening in his brain, not to what is happening out at some measuring device... Our bodies and brains thus become ... parts of the quantum mechanically described physical universe. Treating the entire physical universe in this unified way provides a conceptually simple and logically coherent theoretical foundation...

 

Quantum entanglement is another area where consciousness may have a critical role. Einstein, Podolsky and Rosen (1935) proposed a locally causal limitation on any hidden variable theories describing the situation where two particles are entangled coherently in a single wave function. For example, an excited calcium atom, because of the two electrons in its outer shell, can emit two (yellow and blue) photons of complementary spin in a single transition from zero-spin to zero-spin outer shells. Bell’s (1966) theorem demonstrated a discrepancy between quantum mechanics and locally-causal theories, in which information between hidden sub-quantum variables cannot be transferred faster than light. Multiple experiments using Bell’s theorem have found that the polarisations, or other quantum states of the particles such as spin, are correlated in ways violating local causality, unlimited by the velocity of light (Aspect et al. 1982). This “spooky action at a distance”, which Einstein disliked, shows that the state of either particle remains indeterminate until we measure one of them, when the other’s state is instantaneously determined to be complementary. This cannot, however, be used to send classical information faster than light, or backwards in time, but it indicates that the quantum universe is a highly entangled system in which potentially all particles in existence are involved.
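The violation can be made quantitative with the CHSH form of Bell's theorem. This worked example of my own enumerates all local deterministic strategies to recover the classical bound |S| ≤ 2, then evaluates the singlet correlation E(a,b) = −cos(a−b) at the standard Aspect-type angles to obtain 2√2:

```python
import numpy as np

# Classical bound: a local deterministic strategy assigns fixed outcomes
# A1, A2 (Alice) and B1, B2 (Bob) in {-1, +1}; the CHSH combination
# then never exceeds 2.
classical_max = max(
    A1 * B1 - A1 * B2 + A2 * B1 + A2 * B2
    for A1 in (-1, 1) for A2 in (-1, 1)
    for B1 in (-1, 1) for B2 in (-1, 1)
)
print(classical_max)              # 2: the local-hidden-variable bound

# Quantum singlet correlation at the optimal measurement angles.
E = lambda a, b: -np.cos(a - b)
a, a2 = 0.0, np.pi / 2            # Alice's two settings
b, b2 = np.pi / 4, 3 * np.pi / 4  # Bob's two settings
S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(abs(S))                     # 2*sqrt(2) ~ 2.83 > 2
```

The gap between 2 and 2√2 is exactly what the Aspect experiments measured.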

 

Entanglement, Measurement and Phase Transition

 

A flurry of theoretical and experimental research has uncovered a strange new face of entanglement that shows itself not in pairs, but in constellations of particles (Wood 2023). Entanglement naturally spreads through a group of particles, establishing an intricate web of contingencies. But if you measure the particles frequently enough, destroying entanglement in the process, you can stop the web from forming. In 2018, three groups of theorists (Chan et al. 2019, Li et al. 2018, Skinner et al. 2019) showed that these two states, web or no web, are reminiscent of familiar states of matter such as liquid and solid. But instead of marking a transition between different structures of matter, the shift between web and no web indicates a change in the structure of information.

 

“This is a phase transition in information; it’s where the properties of information, how information is shared between things, undergo a very abrupt change.” (Brian Skinner)

 

 


Fig 74b: Entanglement phase transition and Measurement

 

More recently, a separate trio of teams tried to observe that phase transition in action (Choi et al. 2020). They performed a series of meta-experiments to measure how measurements themselves affect the flow of information. In these experiments, they used quantum computers to confirm that a delicate balance between the competing effects of entanglement and measurement can be reached. The transition’s discovery has launched a wave of research into what might be possible when entanglement and measurement collide. Matthew Fisher, a condensed matter physicist at the University of California, Santa Barbara, started studying the interplay of measurement and entanglement because he suspects that both phenomena could play a role in human cognition.

 

Since a measurement tends to destroy the "quantumness" of a system, it constitutes the link between the quantum and classical worlds. And in a large system of quantum bits of information, or qubits, the effect of measurements can induce the emergence of new phases of quantum information. When the qubits interact with one another, their information becomes shared non-locally in an entangled state, but under measurement the entanglement is destroyed. The battle between measurement and interactions leads to two distinct phases: one where interactions dominate and entanglement is widespread, and one where measurements dominate and entanglement is suppressed. Roushan et al. (2023) have now observed the crossover between these two regimes in a system of up to 70 qubits, and also saw signatures of a novel form of "quantum teleportation", in which an unknown quantum state is transferred from one set of qubits to another, that emerges as a result of these measurements. When measurements dominated over interactions (the "disentangling phase"), the strands of the web remained relatively short: the probe qubit was only sensitive to the noise of its nearest qubits. In contrast, when the measurements were weaker and entanglement was more widespread (the "entangling phase"), the probe was sensitive to noise throughout the entire system. The crossover between these two sharply contrasting behaviours is a signature of the sought-after measurement-induced phase transition. Matsushita & Hofmann (2023) show this also depends on the strength of the measurement interaction's back-action on the system. Eigenvalues emerge when the quantum interferences between different back-actions correspond to a Fourier transform in the back-action parameter.
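The competition between the two phases can be caricatured in a small statevector simulation (a toy of my own, vastly smaller than the 70-qubit experiment): layers of random two-qubit unitaries spread entanglement, randomly interspersed projective measurements with probability p per qubit per layer suppress it, and the half-chain entanglement entropy distinguishes the two regimes:

```python
import numpy as np

rng = np.random.default_rng(42)
N, LAYERS = 8, 16                        # qubits, circuit depth

def rand_u4():
    """A random 4x4 unitary (QR of a complex Gaussian matrix)."""
    a = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
    q, _ = np.linalg.qr(a)
    return q

def apply2(psi, u, i, j):
    """Apply a two-qubit gate u to qubits i, j of statevector psi."""
    psi = np.moveaxis(psi, (i, j), (0, 1))
    rest = psi.shape[2:]
    psi = (u @ psi.reshape(4, -1)).reshape((2, 2) + rest)
    return np.moveaxis(psi, (0, 1), (i, j))

def measure(psi, k):
    """Projective Z measurement of qubit k, Born-rule sampled."""
    psi = np.moveaxis(psi, k, 0).copy()
    p1 = np.linalg.norm(psi[1]) ** 2
    keep = 1 if rng.random() < p1 else 0
    psi[1 - keep] = 0
    psi /= np.linalg.norm(psi)
    return np.moveaxis(psi, 0, k)

def half_chain_entropy(psi):
    s = np.linalg.svd(psi.reshape(2 ** (N // 2), -1), compute_uv=False)
    p = np.clip(s ** 2, 1e-12, None)
    return float(-(p * np.log2(p)).sum())

def run(p_meas):
    psi = np.zeros((2,) * N, dtype=complex)
    psi[(0,) * N] = 1.0
    for layer in range(LAYERS):
        for i in range(layer % 2, N - 1, 2):   # brickwork pattern
            psi = apply2(psi, rand_u4(), i, i + 1)
        for k in range(N):
            if rng.random() < p_meas:
                psi = measure(psi, k)
    return half_chain_entropy(psi)

S_free = np.mean([run(0.0) for _ in range(5)])   # entangling phase
S_meas = np.mean([run(0.5) for _ in range(5)])   # disentangling phase
print(S_free, S_meas)   # frequent measurement suppresses entanglement
```

With these parameters the unmeasured circuit saturates near the Page entropy (about 3.3 bits for eight qubits), while frequent measurement holds the entropy far lower.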

 

The Greenberger–Horne–Zeilinger state (Greenberger, Horne & Zeilinger 1989, Mermin 1990) is one of several three-particle entanglements that have become pivotal in quantum computing (Hussein et al. 2023). There is no standard measure of multi-partite entanglement, because different, not mutually convertible, types of multi-partite entanglement exist. Nonetheless, many measures define the GHZ state to be a maximally entangled state. The GHZ state (|000⟩+|111⟩)/√2 and the W state (|001⟩+|010⟩+|100⟩)/√3 represent two non-biseparable classes of 3-qubit states, which cannot be transformed (not even probabilistically) into each other by local quantum operations. This three-particle entanglement problem is reminiscent of classical gravitation, which has a two-body inverse square law that in the three-body problem becomes intractably complex and chaotic, as Henri Poincaré found out. There is no general closed-form solution to the three-body problem, i.e. no general solution that can be expressed in terms of a finite number of standard mathematical operations.
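The two classes are easy to exhibit directly. This numerical sketch builds GHZ = (|000⟩+|111⟩)/√2 and W = (|001⟩+|010⟩+|100⟩)/√3 (the standard definitions) and distinguishes them by the entanglement entropy of a single qubit with the rest:

```python
import numpy as np

# The two inequivalent classes of 3-qubit entanglement, written out as
# state vectors in the computational basis |000>, |001>, ..., |111>.
ghz = np.zeros(8)
ghz[0b000] = ghz[0b111] = 1 / np.sqrt(2)
w = np.zeros(8)
w[0b001] = w[0b010] = w[0b100] = 1 / np.sqrt(3)

def entropy_first_qubit(psi):
    """Entanglement entropy of qubit 1 with the other two (in bits)."""
    s = np.linalg.svd(psi.reshape(2, 4), compute_uv=False)
    p = np.clip(s ** 2, 1e-12, None)
    return float(-(p * np.log2(p)).sum())

print(entropy_first_qubit(ghz))  # 1.0 bit: maximal for one qubit
print(entropy_first_qubit(w))    # ~0.918 bits: H(1/3)
```

Tracing out one qubit of the GHZ state leaves a purely classical mixture, while the W state retains pairwise entanglement between the remaining qubits, which is one reflection of their inconvertibility.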

 

Measuring Quantumness: Entanglement, Magic and Interactivity

Understanding how quantum entanglement can open new capacities for quantum computing (Wood 2023b), and how it can help us understand the way biological conscious brains anticipate reality, revolves around the question of which kinds of quantum system can or can’t be simulated by classical computers in polynomial time, without suffering exponential runaway, as illustrated in Shor’s (1994) quantum algorithm for cracking encryption problems.

 

Fig 74c: Using symmetry-protected magic to study the complexity of symmetry-protected topological (SPT) phases of matter (Ellison et al. 2021).

The ability of subsequently discovered classical computational algorithms to simulate quantum computing seemed like a bit of a miracle, but the first successful classical algorithm (Gottesman 1998) couldn’t handle all quantum circuits, just those that stuck to Clifford gates. Jozsa & Linden (2002) then proved that, so long as their algorithm simulated a circuit that didn’t entangle qubits, it could handle larger and larger numbers of qubits without taking an exponentially longer time, showing that entanglement itself is a measure of quantumness.
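The Gottesman trick can be seen in miniature. The sketch below (my own illustration, using explicit 4×4 matrices for clarity rather than the efficient binary bookkeeping of the real algorithm) tracks the two Pauli-string generators stabilising |00⟩ through a Clifford circuit of a Hadamard and a CNOT, arriving at the Bell-state stabilisers XX and ZZ:

```python
import numpy as np

# Gottesman-Knill in miniature: a Clifford circuit is simulated by
# conjugating the n Pauli generators stabilising the state, S -> U S U+,
# rather than evolving 2^n amplitudes.
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = (X + Z) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
                 [0, 0, 0, 1], [0, 0, 1, 0]], dtype=complex)

def kron(*ops):
    out = np.array([[1.0 + 0j]])
    for op in ops:
        out = np.kron(out, op)
    return out

# |00> is stabilised by Z(x)I and I(x)Z.
stabilisers = [kron(Z, I2), kron(I2, Z)]

# Circuit: H on qubit 1, then CNOT, producing a Bell state.
for U in (kron(H, I2), CNOT):
    stabilisers = [U @ S @ U.conj().T for S in stabilisers]

# The Bell state (|00>+|11>)/sqrt(2) is stabilised by XX and ZZ.
assert np.allclose(stabilisers[0], kron(X, X))
assert np.allclose(stabilisers[1], kron(Z, Z))
```

Because Clifford gates always map Pauli strings to Pauli strings, the full algorithm needs only O(n²) bits per n-qubit state, which is why these circuits are classically easy despite their entanglement.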

 

Then entered “quantum magic”. If you added a “T gate”, a seemingly innocuous operation that rotates a qubit in a particular way, the algorithm would choke on it. The T gate seemed to manufacture something intrinsically quantum that can’t be simulated on a classical computer. Bravyi and Gosset in 2016 developed a classical algorithm for simulating so-called low-magic circuits, giving the quantum essence produced by the forbidden T-gate rotation the catchy name: magic. Nor is magic confined to arcane quantum computing circuits: Ellison et al. (2021) identified certain phases of quantum matter that are guaranteed to have magic, just as many phases of matter have particular patterns of entanglement.
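What the T gate does can be checked directly (my own sketch): applying T to |+⟩ produces the "magic" state (|0⟩ + e^{iπ/4}|1⟩)/√2, whose overlap with every one of the six single-qubit stabiliser states falls short of 1, so no Clifford circuit can reach it:

```python
import numpy as np

# The T gate rotates |+> out of the six single-qubit stabiliser states,
# manufacturing "magic" that the Gottesman-Knill bookkeeping cannot track.
T = np.diag([1, np.exp(1j * np.pi / 4)])
plus = np.array([1, 1]) / np.sqrt(2)
magic = T @ plus            # the T state (|0> + e^{i pi/4}|1>)/sqrt(2)

# The six stabiliser states: eigenstates of Z, X and Y.
s = 1 / np.sqrt(2)
stabiliser_states = [
    np.array([1, 0]), np.array([0, 1]),            # |0>, |1>
    np.array([s, s]), np.array([s, -s]),           # |+>, |->
    np.array([s, 1j * s]), np.array([s, -1j * s]), # |+i>, |-i>
]

fidelities = [abs(np.vdot(v, magic)) ** 2 for v in stabiliser_states]
print(max(fidelities))   # cos^2(pi/8) ~ 0.854 < 1: not a stabiliser state
```

That maximal overlap, cos²(π/8), is the quantity the Bravyi-Gosset style of simulation effectively has to pay for with each T gate.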

 

Yet another form of quantumness had been discovered earlier. Much as Gottesman-Knill had focused on circuits without entangling gates, and Bravyi and Gosset could cut through circuits without too many T gates, Valiant’s (2001) algorithm was restricted to circuits that lacked the “swap gate”, an operation that takes two qubits and exchanges their positions. As long as you don’t exchange qubits, you can entangle them and infuse them with as much magic as you like, and you’ll still find yourself on yet another distinct classical island. But as soon as you start shuffling qubits around, you can work wonders beyond the ability of any classical computer.

 

In 1984, Valiant’s “probably approximately correct” (PAC) model mathematically defined the conditions under which a mechanistic system could be said to “learn” information. Valiant (2013) generalised his PAC learning framework to encompass biological evolution as well. He broadened the concept of an algorithm into an “ecorithm,” which is a learning algorithm that “runs” on any system capable of interacting with its physical environment. Algorithms apply to computational systems, but ecorithms can apply to biological organisms or entire species. The concept draws a computational equivalence between the way that individuals learn and the way that entire ecosystems evolve. In both cases, ecorithms describe adaptive behaviour in a purely mechanistic way:

 

"An ecorithm is an algorithm, but its performance is evaluated against input it gets from a rather uncontrolled and unpredictable world. And its goal is to perform well in that same complicated world." (Leslie Valiant).

 

Valiant is a devoted mechanist who sees no difference between the conscious brain and AI, reducing everything to computational mathematics. When asked (Pavlus 2016): “What if the ecorithms governing evolution and learning are unlearnable?” He acknowledges:  “It’s a logical possibility, but I don’t think it’s likely at all. I think it’s going to be something pretty tangible and reasonably easy to understand. We can ask the same question about fundamental unsolved problems in mathematics”.

 

Terhal and DiVincenzo (2001) almost immediately uncovered the source of that power. They showed that Valiant’s swap-gate-free “matchgate” circuits were simulating a well-known class of physics problems, similar to how computers simulate growing galaxies or nuclear reactions. Matchgate circuits simulate a group of fermions. When swap gates are not used, the simulated fermions are noninteracting, or “free”. Problems involving free fermions are relatively easy for physicists to solve, but when swap gates are used, the simulated fermions interact, crashing together and doing other complicated things. These problems are extremely hard, if not unsolvable. Conceptually, this resource corresponds to “interactivity”, or how much the simulated fermions can sense each other.
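The classical ease of free fermions is worth seeing in numbers. For a quadratic (non-interacting) Hamiltonian everything follows from the n×n single-particle matrix, never the 2^n-dimensional many-body space. This sketch of my own, a standard tight-binding chain, diagonalises 100 modes instantly and checks the spectrum against the known closed form:

```python
import numpy as np

# Free fermions are classically easy: diagonalise the n x n
# single-particle matrix, not the 2^n-dimensional many-body space.
n = 100        # 2^100 amplitudes would be utterly intractable
hop = -1.0     # nearest-neighbour hopping amplitude

# Tight-binding chain: H = hop * sum_i (c_i^+ c_{i+1} + h.c.)
h1 = np.zeros((n, n))
for i in range(n - 1):
    h1[i, i + 1] = h1[i + 1, i] = hop

eps = np.linalg.eigvalsh(h1)   # single-particle energies

# Known closed form for the open chain: eps_k = 2*hop*cos(k*pi/(n+1)).
k = np.arange(1, n + 1)
exact = 2 * hop * np.cos(k * np.pi / (n + 1))
assert np.allclose(np.sort(eps), np.sort(exact))

# Many-body ground-state energy: fill every negative-energy mode.
E0 = eps[eps < 0].sum()
print(E0)
```

Adding interactions, the analogue of inserting swap gates, destroys this reduction and forces the full exponential many-body problem.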

 

In the 1990s, the physical ingredient that made quantum computers powerful seemed obvious. It had to be entanglement, the “spooky” quantum link between distant particles that Erwin Schrödinger himself identified as “the characteristic trait of quantum mechanics.” When Gottesman developed his method of simulating Clifford circuits, he based it on the "operator" quantum mechanics developed by Werner Heisenberg. Restricting one’s view to free fermions involves viewing quantum mechanics through yet another mathematical lens. Each captures certain aspects of quantum states, but at the price of garbling some other quantum property.

 

If a collection of qubits is largely unentangled, has little magic, or simulates a bunch of nearly free fermions, then researchers know they can reproduce its output on a classical laptop. Any quantum circuit with a low score on one of these three quantum metrics lies in the shallows just off the shores of a classical island in the space of possible phases of matter (Wolchover 2018), and won’t be the next Shor’s algorithm.

  

Fig 75: (1) Quantum erasure shows it is also possible to 'uncollapse' or erase such losses of entangled correlation by re-interfering the wave functions so we can no longer tell the difference. The superposition choices of the delayed choice experiment, fig 74, also do this. Erasure successfully recreates the lost correlations, detecting information about one of the particles and then erasing it again by re-interfering it back into the shared wave function, provided we use none of its information. Pairs of identically polarised correlated photons produced by a 'down-converter' bounce off mirrors, converge again at a beam splitter and pass into two detectors. A coincidence counter observes an interference pattern in the rate of simultaneous detections by the two detectors, indicating that each photon has gone both ways at the beam splitter, as a wave. Adding a polarisation shifter to one path destroys the pattern, by making it possible to distinguish the photons' paths. Placing two polarising filters in front of the detectors makes the photons identical again, erasing the distinction and restoring the interference pattern. (2) Delayed choice quantum eraser configuration. An individual photon goes through one (or both) of the two slits. One of the photons – the "signal" photon (red and blue lines) – continues to the target detector D0, which is scanned in steps along its x-axis. A plot of "signal" photon counts detected by D0 versus x can be examined to discover whether the cumulative signal forms an interference pattern. The other entangled photon – the "idler" photon (red and blue lines going downwards from the prism) – is deflected by prism PS, which sends it along divergent paths depending on whether it came from slit A or slit B. Detection of the idler photon by D3 or D4 provides delayed "which-path information" indicating whether the signal photon with which it is entangled had gone through slit A or B.
On the other hand, detection of the idler photon by D1 or D2 provides a delayed indication that such information is not available for its entangled signal photon. Insofar as which-path information had earlier potentially been available from the idler photon, the information has been subjected to a "delayed erasure". (3) Delayed choice entanglement swapping, in which Victor is able to decide whether Alice's and Bob's photons are entangled or not after they have already been measured (Ma et al. 2002). (4) A photon is entangled with a photon that has already died (been sampled) even though they never coexisted at any point in time (Megidish 2012).

 

Phenomena including delayed choice quantum erasure and entanglement swapping (fig 75) demonstrate that the time of a quantum observation can be ambiguous or possibly stand outside space-time, as the transactional picture suggests. The Wigner’s friend experiment of fig 76c likewise shows that quantum path information can also take the form of a quantum measurement ‘observer’. Narasimhan, Chopra & Kafatos (2019) draw particular attention to Kim et al. (2000) in regard to a “universal observer” integrating individual conscious observers and their observations:

 

While traditional double-slit experiments are usually interpreted as indicating that the collapse of the wave function involves choices by an individual observer in space-time, the extension to quantum eraser experiments brings in some additional subtle aspects relating to the role of observation and what constitutes an observer. Access to, and the interpretation of, information outside space and time may be involved. This directly ties to the question of where the Heisenberg-von Neumann cut is located and what its nature is. … There is a possibility that individual observers making choices in space and time are actually aspects of the universal Observer, a state masked by assumptions about individual human minds that may need further development and re-examination.

 

Consciousness and Measurement

Summing up the position of physicists in a survey of participants in a foundations of quantum mechanics gathering, Schlosshauer et al. (2013) found that, while only 6% of physicists present believed consciousness plays a distinguished physical role, a majority believed it has a fundamental, although not distinguished role in the application of the formalism. They noted in particular that “It is remarkable that more than 60% of respondents appear to believe that the observer is not a complex quantum system.” Indeed on all counts queried there were wide differences of opinion, including which version of quantum mechanics they supported. Since all of the approaches are currently consistent with the predictions of quantum mechanics, these ambiguous figures are not entirely surprising.

 

In an experiment to test the influence of conscious perception on quantum entanglement, Radin, Bancel & Delorme (2021) explored psychophysical (mind-matter) interactions with quantum entangled photons. Entanglement correlation strength, measured in real time, was presented via a graph or dynamic images displayed on a computer monitor or web browser. Participants were tasked with mentally influencing that metric, with particularly strong results observed in the three studies conducted (p < 0.0002). Radin, Michel & Delorme (2016) also reported a 5.72 sigma (p = 1.05×10⁻⁸) deviation from a null effect in which participants focused their attention toward or away from a feedback signal linked in real time to the double-slit component of an interference pattern, suggesting consciousness affecting wave function collapse. For a review, see Milojevic & Elliot (2023). Radin (2023) has also reported deviations of 7.3 sigma beyond chance (p = 1.4×10⁻¹³) in data from a network of electronic random number generators located around the world, which continuously recorded samples to explore a hypothesis predicting the emergence of anomalous structure in randomness correlated with events that attract widespread human attention; the results leave little doubt that, on average, anomalous deviations in the random data emerged during events that attracted widespread attention. Mossbridge et al. (2014), in a meta-analysis, have also cited an organic unconscious anticipatory response to potential existential crises they term predictive anticipatory activity, which is similar to conscious quantum anticipation, citing anticipative entanglement swapping experiments such as Ma et al. (2002).

 

The tendency towards an implicitly classical view of causality is similar to that among neuroscientists, with an added belief in the irreducible nature of randomness, as opposed to a need for hidden variables supporting quantum entanglement, rejecting Einstein’s disclaimer that “God does not play dice with the universe.” Belief in irreducible randomness means that the principal evidence for subjectivity in quanta – the idiosyncratic unpredictable nature of individual particle trajectories – is thrown out with the bath water of irreducible randomness, converging to the wave amplitude on repetition, consistent with the correspondence principle: that the behaviour of systems described by quantum mechanics reproduces classical physics in the limit of large quantum numbers.

 

Non-IID Interactions May Preserve Quantum Reality

In Bohr's (1920) correspondence principle, systems described by quantum mechanics are believed to reproduce classical physics in the limit of large quantum numbers – if measurements performed on macroscopic systems have limited resolution and cannot resolve individual microscopic particles, then the results behave classically – the coarse-graining principle (Kofler & Brukner 2007). Subsequently Navascués & Wunderlich (2010) proved that in situations covered by IID measurements (independent and identically distributed), in which each run of an experiment must be repeated under exactly the same conditions and independently of other runs, we arrive at macroscopic locality. Similarly, temporal quantum correlations reduce to classical correlations and quantum contextuality reduces to macroscopic non-contextuality (Henson & Sainz 2015).
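The coarse-graining intuition is essentially central-limit behaviour. In the toy run below (a sketch with assumed numbers, not the Navascués–Wunderlich proof), an "intensity" built from N independent ±1 outcomes has fluctuations that shrink relative to its mean as N grows, so a detector with resolution of order N^(1/2) registers only the classical average:

```python
import random
import statistics

def intensity(n, p_plus=0.85, seed=0):
    """Sum of n IID +-1 outcomes: an intensity-style, coarse-grained measurement."""
    rng = random.Random(seed)
    return sum(1 if rng.random() < p_plus else -1 for _ in range(n))

def relative_fluctuation(n, runs=200):
    """Run-to-run spread of the intensity, relative to its mean."""
    vals = [intensity(n, seed=s) for s in range(runs)]
    return statistics.pstdev(vals) / abs(statistics.mean(vals))

# fluctuations fall roughly as n**-0.5, washing out microscopic detail
print(relative_fluctuation(100), relative_fluctuation(10000))
```

Dropping the IID assumption is what lets Gallego & Dakić evade exactly this averaging, since correlated microscopic constituents need not concentrate in the same way.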

 

However Gallego & Dakić (2021) have shown that, surprisingly, quantum correlations survive in the macroscopic limit if correlations are not IID distributed at the level of microscopic constituents, and that the entire mathematical structure of quantum theory, including the superposition principle, is preserved in the limit. This macroscopic quantum behaviour allows them to show that Bell nonlocality is visible in the macroscopic limit.

 

“The IID assumption is not natural when dealing with a large number of microscopic systems. Small quantum particles interact strongly and quantum correlations and entanglement are distributed everywhere. Given such a scenario, we revised existing calculations and were able to find complete quantum behavior at the macroscopic scale. This is completely against the correspondence principle, and the transition to classicality does not take place” (Borivoje Dakić).

 

“It is amazing to have quantum rules at the macroscopic scale. We just have to measure fluctuations, deviations from expected values, and we will see quantum phenomena in macroscopic systems. I believe this opens the door to new experiments and applications” (Miguel Gallego).

 

Their approach is described as follows:

 

In this respect, one important consequence of the correspondence principle is the concept of macroscopic locality (ML): Coarse-grained quantum correlations become local (in the sense of Bell) in the macroscopic limit. ML has been challenged in different circumstances, both theoretically and experimentally. However, as far as we know, nonlocality fades away under coarse graining when the number of particles N in the system goes to infinity. Consider a bipartite Bell-type experiment where the parties measure intensities with a resolution of the order of N^(1/2) or, equivalently, with O(N^(1/2)) coarse graining. Then, under the premise that particles are entangled only in independent and identically distributed pairs, Navascués & Wunderlich (2010) prove ML for quantum theory.

 

Fig 76: Macroscopic Bell-Type experiment.

 

We generalize the concept of ML to any level of coarse graining α ∈ [0, 1], meaning that the intensities are measured with a resolution of the order of N^α. We drop the IID assumption, and we investigate the existence of a boundary between quantum (nonlocal) and classical (local) physics, identified by the minimum level of coarse graining α required to restore locality. To do this, we introduce the concept of macroscopic quantum behavior (MQB), demanding that the Hilbert space structure, such as the superposition principle, is preserved in the thermodynamic limit.

 

Conclusion: We have introduced a generalized concept of macroscopic locality at any level of coarse graining α ∈ [0, 1]. We have investigated the existence of a critical value α_c that marks the quantum-to-classical transition. We have introduced the concept of MQB at level α of coarse graining, which implies that the Hilbert space structure of quantum mechanics is preserved in the thermodynamic limit. This facilitates the study of macroscopic quantum correlations. By means of a particular MQB at α = 1/2, we show that α_c ≥ 1/2, as opposed to the IID case, for which α_IID ≤ 1/2. An upper bound on α_c is, however, lacking in the general case. The possibility that no such transition exists remains open, and perhaps there exist systems for which ML is violated at α = 1.

 

This means, for example, that both (a) neural system processing, where the quantum unstable context is continually evolving as a result of edge-of-chaos processing, so that repeated IID measurements are not made, and (b) biological evolution, where a sequence of unique mutations becomes sequentially fixed by natural and sexual selection, which is also consciously mediated in eukaryote organisms, inherit implicit quantum non-locality in their evolution.

 

John Eccles (1986) proposed a quantum theory involving psychon quasi-particles mediating uncertainty of synaptic transmission to complementary dendrons – cylindrical bundles of neurons arranged vertically in the six outer layers or laminae of the cortex. Eccles proposed that each of the 40 million dendrons is linked with a mental unit, or "psychon", representing a unitary conscious experience. In willed actions and thought, psychons act on dendrons and, for a moment, increase the probability of the firing of selected neurons through a quantum tunnelling effect in synaptic exocytosis, while in perception the reverse process takes place. This model has been elaborated by a number of researchers (Eccles 1990, 1994, Beck & Eccles 1992, Georgiev 2002, Hari 2008). The difficulty with the theory is that the psychons are then physical quasi-particles with integrative mental properties, so it is a quasi-physical description that doesn’t manifest subjectivity except by its integrative physical properties. In the last chapter of his book The Neurophysiological Basis of Mind (1953), Eccles not only hypothesized the existence of a "self-conscious mind" relatively independent of the cerebral structures, but also supposed that a very weak influence of will on a few neurons of the cerebral cortex could cause remarkable changes in brain activity, leading to the notion of volition being a form of "psychokinesis" (Giroldini 1991), supported also by Wilder Penfield (1960).

 

The Quantum Measurement Problem May Contradict Objective Reality

 

In quantum theory, before collapse, the system is said to be in a superposition of two states, and this quantum state is described by the wave function, which evolves in time and space. This evolution is both deterministic and reversible: given an initial wave function, one can predict what it’ll be at some future time, and one can in principle run the evolution backward to recover the prior state. Measuring the wave function, however, causes it to collapse, mathematically speaking, such that the system in our example shows up as either heads or tails. The collapse is irreversible and happens only once, and no one knows what defines the process or boundaries of measurement.

 

One model that preserves the absoluteness of the observed event — either heads or tails for all observers—is the GRW theory, where quantum systems exist in a superposition of states until the superposition spontaneously and randomly collapses, independent of an observer. Whatever the outcome—heads or tails in our example—it shall hold for all observers. But GRW, and the broader class of “spontaneous collapse” theories, run foul of a long-cherished physical principle: the preservation of information.  By contrast, the “many worlds” interpretation of quantum mechanics allows for non-absoluteness of observed events, because the wave function branches into multiple contemporaneous realities, in which in one “world,” the system will come up heads, while in another, it’ll be tails.
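The observer-independence of GRW rests on a rate argument: each particle localises spontaneously only at a tiny rate λ, commonly quoted as about 10⁻¹⁶ s⁻¹, but in an entangled N-particle body a single hit collapses the whole, so the effective rate is Nλ. A back-of-envelope sketch (the rate constant is the standard GRW value; the particle counts are illustrative):

```python
LAMBDA_GRW = 1e-16  # spontaneous localisation rate per particle, s^-1

def mean_collapse_time(n_particles):
    # one localisation anywhere collapses the entangled whole: effective rate n * lambda
    return 1.0 / (LAMBDA_GRW * n_particles)

print(mean_collapse_time(1))     # ~1e16 s: a lone microscopic system stays quantum
print(mean_collapse_time(1e23))  # ~1e-7 s: a macroscopic superposition collapses almost at once
```

This is how spontaneous-collapse theories recover definite macroscopic outcomes without invoking an observer, at the cost of the information-preservation principle noted above.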

 

Ormrod, Venkatesh and Barrett (2023; Ananthaswamy 2023) focus on perspectival theories that obey three properties:

 

(1) Bell nonlocality (B). Alice chooses her type of measurement freely and independently of Bob, and vice versa –  of their own free will – an important assumption. Then, when they eventually compare notes, the duo will find that their measurement outcomes are correlated in a manner that implies the states of the two particles are inseparable: knowing the state of one tells you about the state of the other.

(2) The preservation of information (I). Quantum systems that show deterministic and reversible evolution satisfy this condition.  If you are wearing a green sweater today, in an information-preserving theory, it should still be possible, in principle, 10 years hence to retrieve the colour of your sweater even if no one saw you wearing it.

(3) Local dynamics (L). If there exists a frame of reference in which two events appear simultaneous, then the regions of space are said to be “space-like separated.” Local dynamics implies that the transformation of a system that takes a set of input states and produces a set of output states in one of these regions cannot causally affect the transformation of a system in the other region any faster than the speed of light, and vice versa. Each subsystem undergoes its own transformation, and so does the entire system as a whole. If the dynamics are local, the transformation of the full system can be decomposed into transformations of its individual parts: the dynamics are said to be separable.   In contrast, when two particles share a state that’s Bell nonlocal (that is, when two particles are entangled, per quantum theory), the state is said to be inseparable into the individual states of the two particles. If transformations behaved similarly, in that the global transformation could not be described in terms of the transformations of individual subsystems, then the whole system would be dynamically inseparable.
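The correlations behind property (1) are quantitative. For the singlet state the correlation between analyser settings a and b is E(a, b) = −cos(a − b), and the CHSH combination of four setting pairs reaches 2√2, beyond the classical local-hidden-variable bound of 2. A minimal check using the textbook optimal angles:

```python
import math

def E(a, b):
    # quantum correlation for the singlet state with analyser angles a, b
    return -math.cos(a - b)

# optimal CHSH settings for Alice (a, a2) and Bob (b, b2)
a, a2 = 0.0, math.pi / 2
b, b2 = math.pi / 4, 3 * math.pi / 4

S = abs(E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2))
print(S)  # 2*sqrt(2) ~ 2.828, violating the classical CHSH bound of 2
```

It is this violation, observed when Alice and Bob compare notes, that certifies the inseparability of the two particles' states.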

 

Fig 76b: A graphical summary of the theorems. Possibilistic Bell Nonlocality is Bell Nonlocality that arises not only at the level of probabilities, but at the level of possibilities.

 

Their work analyses perspectival quantum theories that are BINSC; since NSC implies L, BINSC theories are BIL theories. Such BIL theories are then required to handle a deceptively simple thought experiment. Imagine that Alice and Bob, each in their own lab, make a measurement on one of a pair of particles. Both Alice and Bob make one measurement each, and both do the exact same measurement. For example, they might both measure the spin of their particle in the up-down direction. Viewing Alice and Bob and their labs from the outside are Charlie and Daniela, respectively. In principle, Charlie and Daniela should be able to measure the spin of the same particles, say, in the left-right direction. In an information-preserving theory, this should be possible. Using this scenario, the team proved that the predictions of any BIL theory for the measurement outcomes of the four observers contradict the absoluteness of observed events. This leaves physicists at an unpalatable impasse: either accept the non-absoluteness of observed events or give up one of the assumptions of a BIL theory.

 

Ormrod says dynamical separability is “kind of an assumption of reductionism – you can explain the big stuff in terms of these little pieces.” Just like a Bell nonlocal state cannot be reduced to some constituent states, it may be that the dynamics of a system are similarly holistic, adding another kind of nonlocality to the universe.  Importantly, giving it up doesn’t cause a theory to fall afoul of Einstein’s theories of relativity, much like physicists have argued that Bell nonlocality doesn’t require superluminal or nonlocal causal influences but merely nonseparable states. Ormrod, Venkatesh and Barrett note: “Perhaps the lesson of Bell is that the states of distant particles are inextricably linked, and the lesson of the new ... theorems is that their dynamics are too.” The assumptions used to prove the theorem don’t explicitly include an assumption about freedom of choice because no one is exercising such a choice. But if a theory is Bell nonlocal, it implicitly acknowledges the free will of the experimenters.

 

Fig 76c: Above: an experimental realisation of the Wigner's friend setup showing there is no such thing as objective reality – quantum mechanics allows two observers to experience different, conflicting realities. Below: the proof-of-principle experiment of Bong et al. (2020) demonstrating mutual inconsistency of 'No-Superdeterminism', 'Locality' and 'Absoluteness of Observed Events’.

 

An experimental realisation of non-absoluteness of observation has been devised (Proietti et al., 2019), as shown in fig 76c, using quantum entanglement. The experiment involves two people observing a single photon that can exist in one of two alignments; until the moment someone actually measures it to determine which, the photon is in a superposition. A scientist analyses the photon and determines its alignment. Another scientist, unaware of the first's measurement, is able to confirm that the photon – and thus the first scientist's measurement – still exists in a quantum superposition of possible outcomes. As a result, each scientist experiences a different reality – both "true" even though they disagree with each other. In a subsequent experiment, Bong et al. (2020) transform the thought experiment into a mathematical theorem that confirms the irreconcilable contradiction at the heart of the Wigner scenario. The team also tests the theorem with an experiment, using photons as proxies for the humans, accompanied by new forms of Bell's inequalities, by building on a scenario with two separated but entangled friends. The researchers prove that if quantum evolution is controllable on the scale of an observer, then one of (1) No-Superdeterminism – the assumption of 'freedom of choice' used in derivations of Bell inequalities, that the experimental settings can be chosen freely, uncorrelated with any relevant variables prior to that choice – (2) Locality, or (3) Absoluteness of Observed Events – that every observed event exists absolutely, not relatively – must be false. Although the violation of Bell-type inequalities in such scenarios is not in general sufficient to demonstrate the contradiction between those three assumptions, new inequalities can be derived, in a theory-independent manner, that are violated by quantum correlations. This is demonstrated in a proof-of-principle experiment where a photon's path is deemed an observer.
This new theorem places strictly stronger constraints on physical reality than Bell's theorem.

 

Self-Simulated Universe

Another theory put forward by gravitational theorists (Irwin, Amaral & Chester 2020) also uses retrocausality to try to explain the ultimate questions: Why is there anything here at all? What primal state of existence could have possibly birthed all that matter, energy, and time – all that everything? And how did consciousness arise – is it some fundamental proto-state of the universe itself, or an emergent phenomenon that's purely neurochemical and material in nature?

 

Fig 77b: Self-Simulated Universe: Humans are near the point of demarcation, where EC or thinking matter emerges into the choice-sphere of the infinite set of possibilities of thought, EC. Beyond the human level, physics allows for larger and more powerful networks that are also conscious. At some stage of the simulation run, a conscious EC system emerges that is capable of acting as the substrate for the primitive spacetime code, its initial conditions, as mathematical thought, and simulation run, as a thought, to self-actualize itself. Linear time would not permit this logic, but non-linear time does.

 

This approach attempts to answer both questions in a way that weds aspects of Nick Bostrom's Simulation Argument with timeless emergentism. Termed the panpsychic self-simulation model, it says the physical universe may be a "strange loop" that self-generates new sub-realities in an almost infinite hierarchy of tiers in-laid with simulated realities of conscious experience. In other words, the universe is creating itself through thought, willing itself into existence on a perpetual loop that efficiently uses all mathematics and fundamental particles at its disposal. The universe, they say, was always here (timeless emergentism) and is like one grand thought that makes mini-thoughts, called code-steps or actions – again, something like a Matryoshka doll.

 

David Chester comments:

 

“While many scientists presume materialism to be true, we believe that quantum physics may provide hints that our reality could be a mental construct. Recent advances in quantum gravity, like seeing spacetime emergent via a hologram, are also a hint that spacetime isn't fundamental. This is also compatible with ancient Hermetic and Indian philosophy. In a sense, the mental construct of reality creates spacetime to efficiently understand itself by creating a network of subconscious entities that may interact and explore the totality of possibilities.”

 

They modify the simulation hypothesis to a self-simulation hypothesis, where the physical universe, as a strange loop, is a mental self-simulation that might exist as one of a broad class of possible code-theoretic quantum gravity models of reality obeying the principle of efficient language axiom, and discuss implications of the self-simulation hypothesis such as an informational arrow of time.

 

The self-simulation hypothesis is built upon the following axioms:

 

1. Reality, as a strange loop, is a code-based self-simulation in the mind of a panpsychic universal consciousness that emerges from itself via the information of code-based mathematical thought or self-referential symbolism plus emergent non-self-referential thought. Accordingly, reality is made of information called thought.

2. Non-local spacetime and particles are secondary or emergent from this code, which is itself a pre-spacetime thought within a self-emergent mind.

3. The panconsciousness has freewill to choose the code and make syntactical choices. Emergent lower levels of consciousness also make choices through observation that influence the code syntax choices of the panconsciousness.

4. Principle of efficient language (Irwin 2019). The desire or decision of the panconscious reality is to generate as much meaning or information as possible for a minimal number of primitive thoughts, i.e., syntactical choices, which are mathematical operations at the pre-spacetime code level.

 

Fig 77c: This emphasis on coding is problematic, as it is trying to assert a consciousness-makes-reality loop through an apparently abstract coded representation based on discrete computation-like processes, assuming an "it-from-bit" notion that reality is made from information, not just described by it.

 

It from bit: Otherwise put, every it — every particle, every field of force, even the space-time continuum itself — derives its function, its meaning, its very existence entirely — even if in some contexts indirectly — from the apparatus-elicited answers to yes-or-no questions, binary choices, bits. It from bit symbolizes the idea that every item of the physical world has at bottom — at a very deep bottom, in most instances — an immaterial source and explanation; that which we call reality arises in the last analysis from the posing of yes-no questions and the registering of equipment-evoked responses; in short, that all things physical are information-theoretic in origin and that this is a participatory universe (Wheeler 1990).

 

 

Schwartz, Stapp & Beauregard (2005) advance a quantum theory of conscious volition, in which attentive will can influence physical brain states using quantum principles, in particular von Neumann's process 1, or collapse of the wave function, complementing process 2, the causal evolution of the Schrödinger wave function responsible for ongoing physical brain states. They cite specific cognitive processes leading to physical changes in the manner of ongoing brain function:

 

There is at least one type of information processing and manipulation that does not readily lend itself to explanations that assume that all final causes are subsumed within brain, or more generally, central nervous system mechanisms. The cases in question are those in which the conscious act of wilfully altering the mode by which experiential information is processed itself changes, in systematic ways, the cerebral mechanisms used. There is a growing recognition of the theoretical importance of applying experimental paradigms that use directed mental effort to produce systematic and predictable changes in brain function. ... Furthermore, an accelerating number of studies in the neuroimaging literature significantly support the thesis that, with appropriate training and effort, people can systematically alter neural circuitry associated with a variety of mental and physical states.

 

They point out that it is necessary in principle to advance to the quantum level to achieve an adequate theory of the neurophysiology of volitionally directed activity. The reason, essentially, is that classic physics is an approximation to the more accurate quantum theory, and that this classic approximation eliminates the causal efficacy of our conscious efforts that these experiments empirically manifest.

 

They explain how structural features of ion conductance channels critical to synaptic function entail that the classical approximation to quantum reality fails in principle to cover the dynamics of a human brain, so that quantum dynamics must be used. The principles of quantum theory must then link the quantum physical description of the subject's brain to their stream of conscious experiences. The conscious choices by human agents thereby become injected non-trivially into the causal interpretation of neuroscience and neuropsychology experiments, through type 1 processes performing quantum measurement operations. This particularly applies to those experimental paradigms in which human subjects are required to perform decision-making or attention-focusing tasks that require conscious effort.

 

Conscious effort itself can, justifiably within science, be taken to be a primary variable whose complete causal origins may be untraceable in principle, but whose causal efficacy in the physical world can be explained on the basis of the laws of physics.

 

The mental act of clear-minded introspection and observation, variously known as mindfulness, mindful awareness, bare attention, the impartial spectator, etc., is a well-described psychological phenomenon with a long and distinguished history in the description of human mental states. ... In the conceived approach, the role played by the mind, when one is observing and modulating one's own emotional states, is an intrinsically active and physically efficacious process in which mental action is affecting brain activity in a way concordant with the laws of physics.

 

They propose a neurobiological interpretation where calcium channels play a pivotal role in type 1 processes at the synaptic level:

 

At their narrowest points, calcium ion channels are less than a nanometre in diameter. This extreme smallness of the opening in the calcium ion channels has profound quantum mechanical implications. The narrowness of the channel restricts the lateral spatial dimension.  Consequently, the lateral velocity is forced by the quantum uncertainty principle to become large. This causes the quantum cloud of possibilities associated with the calcium ion to fan out over an increasing area as it moves away from the tiny channel to the target region where the ion will be absorbed as a whole, or not absorbed at all, on some small triggering site. ... This spreading of this ion wave packet means that the ion may or may not be absorbed on the small triggering site. Accordingly, the contents of the vesicle may or may not be released. Consequently, the quantum state of the brain has a part in which the neurotransmitter is released and a part in which the neurotransmitter is not released. This quantum splitting occurs at every one of the trillions of nerve terminals. ... In fact, because of uncertainties on timings and locations, what is generated by the physical processes in the brain will be not a single discrete set of non-overlapping physical possibilities but rather a huge smear of classically conceived possibilities. Once the physical state of the brain has evolved into this huge smear of possibilities one must appeal to the quantum rules, and in particular to the effects of process 1, in order to connect the physically described world to the streams of consciousness of the observer/participants.
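The quantitative core of this argument is the uncertainty principle: confining a Ca²⁺ ion laterally to Δx ≈ 1 nm forces a minimum lateral velocity spread Δv = ħ/(2mΔx) of order 1 m/s, so the wave packet fans out on its way to the triggering site. A rough numerical check (the ~1 nm aperture is the figure the authors cite; the ~0.1 ns transit time is an assumed order of magnitude for illustration):

```python
HBAR = 1.054571817e-34    # reduced Planck constant, J s
AMU  = 1.66053906660e-27  # atomic mass unit, kg

m_ca = 40 * AMU   # mass of a calcium ion
dx   = 1e-9       # channel aperture ~1 nm at its narrowest point

dv = HBAR / (2 * m_ca * dx)  # minimum lateral velocity uncertainty, ~0.8 m/s
spread = dv * 1e-10          # lateral fan-out over an assumed ~0.1 ns transit
print(dv, spread)
```

The resulting fan-out is on the atomic scale of a triggering site, which is why the ion is absorbed as a whole or not at all, splitting the quantum state of the terminal.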

 

However, they note that this focus on the motions of calcium ions in nerve terminals is not meant to suggest that this particular effect is the only place where quantum effects enter into the brain process, or that the quantum process 1 acts locally at these sites. What is needed here is only the existence of some large quantum of effect.

 

A type 1 process beyond the local deterministic process 2 is required to pick out one experienced course of physical events from the smeared-out mass of possibilities generated by all of the alternative possible combinations of vesicle releases at all of the trillions of nerve terminals. This process brings in a choice that is not determined by any currently known law of nature, yet has a definite effect upon the brain of the chooser.

 

They single out the quantum Zeno effect, in which rapid multiple measurements can act to freeze a quantum state and delay its evolution, and cite James (1892, 417): "The essential achievement of the will, in short, when it is most voluntary, is to attend to a difficult object and hold it fast before the mind. Effort of attention is thus the essential phenomenon of will. ... Consent to the idea's undivided presence, this is effort's sole achievement. Everywhere, then, the function of effort is the same: to keep affirming and adopting the thought which, if left to itself, would slip away." This coincides with the studies of wilful control of the emotions already cited, implying evidence of a causal effect.
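The freezing effect can be illustrated with a standard two-level toy model (an illustrative assumption, not Stapp's own formulation): n equally spaced projective measurements interrupt an evolution that would rotate the state through a total angle θ, each measurement finding the original state with probability cos²(θ/n), so the overall survival probability cos²ⁿ(θ/n) tends to 1 as measurements become more rapid:

```python
import math

def zeno_survival(theta, n):
    """Probability the state is still found in its initial eigenstate
    after n equally spaced projective measurements, while the dynamics
    tries to rotate it through total angle theta (two-level toy model)."""
    return math.cos(theta / n) ** (2 * n)

theta = math.pi / 2   # unobserved evolution would leave the state entirely
for n in (1, 10, 100, 1000):
    print(n, round(zeno_survival(theta, n), 4))
```

As n grows the survival probability climbs toward 1: "holding fast" a state by repeated observation is exactly the attention mechanism Stapp maps onto James's effort of will.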

 

Much of the work on attention since James is summarized and analysed in Pashler (1998). He emphasizes that the empirical findings of attention studies argue for a distinction between perceptual attentional limitations and more central limitations involved in thought and the planning of action. A striking difference that emerges from the experimental analysis is that the perceptual processes proceed essentially in parallel, whereas the post-perceptual processes of planning and executing actions form a single queue. This is in line with the distinction between "passive" and "active" processes: a passive stream of essentially isolated process 1 events versus active processes involving effort-induced rapid sequences of process 1 events that can saturate a given capacity.

 

There is in principle, in the quantum model, an essential dynamic difference between the unconscious processing done by the Schrödinger evolution, which generates by a local process an expanding collection of classically conceivable experiential possibilities and the process associated with the sequence of conscious events that constitute the wilful selection of action. The former are not limited by the queuing effect, because process 2 simply develops all of the possibilities in parallel. Nor is the stream of essentially isolated passive process 1 events thus limited. It is the closely packed active process 1 events that can, in the von Neumann formulation, be limited by the queuing effect.  

 

This quantum model accommodates naturally all of the complex structural features of the empirical data that he describes. Chapter 6 emphasizes a specific finding: strong empirical evidence for what he calls a central processing bottleneck associated with the attentive selection of a motor action. This kind of bottleneck is what the quantum-physics-based theory predicts: the bottleneck is precisely the single linear sequence of mind–brain quantum events that von Neumann quantum theory describes.

 

Hameroff and Penrose (2014) have also proposed a controversial theory that consciousness originates at the quantum level inside neurons, rather than the conventional view that it is a product of connections between neurons, coupling orchestrated objective reduction (OOR) to hypothetical quantum cellular automata in the microtubules of neurons. The theory is regarded as implausible by critics, both physicists and neuroscientists who consider it to be a poor model of brain physiology on multiple grounds.

 

Orchestration refers to the hypothetical process by which microtubule-associated proteins influence or orchestrate qubit state reduction by modifying the space-time separation of their superimposed states. The latter is based on Penrose's objective-collapse theory for interpreting quantum mechanics. Derakhshani et al. (2022) discount gravitational collapse theory experimentally:

 

We perform a critical analysis of the Orch OR consciousness theory at the crossroad with the newest experimental results coming from the search for spontaneous radiation predicted by the simplest version of gravity-related dynamical collapse models. We conclude that Orch OR theory, when based on the simplest version of gravity-related dynamical collapse [Diósi 2019], is highly implausible in all the cases analyzed.

 

The tubulin protein dimers of the microtubules have hydrophobic pockets that may contain delocalised π electrons. Hameroff claims these pockets lie close enough together for the tubulin π electrons to become quantum entangled, though this would leave the quantum computations isolated inside neurons. Hameroff then proposed, although the idea was rejected by Reimers (2009), that coherent Fröhlich condensates in microtubules in one neuron can link with microtubule condensates in other neurons and glial cells via the gap junctions of electrical synapses, claiming these are sufficiently narrow for quantum tunnelling across them, allowing the coherent state to extend across a large area of the brain. He further postulated that this large-scale quantum activity is the source of 40 Hz gamma waves, building upon the theory that gap junctions are related to the gamma oscillation. Craddock et al. (2017) make claims about anaesthetics based on the exclusive action of halothane-type molecules on microtubules, but this focus lacks consistency with the known receptor-based effects of ketamine and N2O on NMDA receptors (effects also shared by halothanes) and of propofol on GABA receptors. The evidence for anaesthetic disruption reviewed by Kelz & Mashour (2019) applies indiscriminately to all anaesthetics, from halothane to ketamine, widely across the tree of life from Paramecium to humans, and includes both synaptic and ion-channel effects. This indicates merely that microtubular integrity is necessary for consciousness; it does not indicate that microtubules have a key role in consciousness itself, other than their essential architectural and transport roles.

 

Because of its dependence on Penrose's idea of gravitational quantum collapse, the theory is confined to objective reduction, at face value crippling the role of free will in conscious experience. However, Hameroff (2012) attempts to skirt this by applying notions of retro-causality, as illustrated in fig 77(2), in which a dual-time approach (King 1989) is used to invoke a quantum of the present, the Conscious NOW. Retrocausality is a process also widely cited in this work. Hameroff justifies such retrocausality from three sources.

 

Firstly, he cites an open-brain experiment of Libet's. A peripheral stimulus, e.g. of the skin of the hand, resulted in an "EP" spike in the somatosensory cortical area for the hand 30 ms after skin contact, consistent with the time required for a neuronal signal to travel from hand to spinal cord, thalamus, and brain. The stimulus also caused several hundred ms of ongoing cortical activity following the EP. Subjects reported conscious experience of the stimulus (using Libet's rapidly moving clock) near-immediately, e.g. at the time of the EP at 30 ms, hinting at retro-causality of the delayed "readiness potential".

 

Secondly, he cites a number of well-controlled studies using electrodermal activity, fMRI and other methods to look for emotional responses, e.g. to viewing images presented at random times on a computer screen. Surprisingly, the changes occurred half a second to two seconds before the images appeared. The researchers termed the effect "pre-sentiment" because the subjects were not consciously aware of the emotional feelings. Non-conscious emotional sentiment (i.e. feelings) appeared to be referred backward in time. Bem (2012, 2016) reported on studies showing statistically significant backward time effects, most involving non-conscious influence of future emotional effects (e.g. erotic or threatening stimuli) on cognitive choices. Studies by others have reported both replication of, and failure to replicate, the controversial results. Thirdly, he cites a number of delayed-choice experiments widely discussed in this work.

 

Fig 77: (1) An axon terminal releases neurotransmitters through a synapse, which are received by microtubules in a neuron's dendritic spine. (2) From left, a superposition develops over time, e.g. a particle separating from itself, shown as simultaneous curvatures in opposite directions. The magnitude of the separation is related to E, the gravitational self-energy. At a particular time t, E reaches threshold by E = ħ/t, and spontaneous OR occurs: one particular curvature is selected. This OR event is accompanied by a moment of conscious experience (NOW), its intensity proportional to E. Each OR event also results in temporal non-locality, referring quantum information backward in classical time (curved arrows). (3,4) Scale-dependent resonances from the pyramidal neuron, through microtubules, to π-orbitals and gravitational effects.

 

Sahu et al. (2013) found that electronic conductance along microtubules, normally extremely good insulators, becomes exceedingly high, approaching quantum conductance, at certain specific resonance frequencies of applied alternating current (AC) stimulation. These resonances occur in the gigahertz, megahertz and kilohertz ranges, and are particularly prominent in the low megahertz range (e.g. 8.9 MHz). Hameroff & Penrose (2014) suggest that EEG rhythms (brain waves) also derive from deeper-level microtubule vibrations.

 

However, none of these processes have been empirically verified, and the complex tunnelling invoked is far from being a plausible neurophysiological process. The model requires that the quantum state of the brain has macroscopic quantum coherence, which needs to be maintained for around a tenth of a second. But, according to calculations made by Max Tegmark (2000), this property ought not to hold for more than about 10⁻¹³ s. Hameroff and co-workers (Hagan et al. 2002) have advanced reasons why this number should actually be of the order of a tenth of a second, but 12 orders of magnitude is a very big difference to explain away, and serious doubts remain about whether the Penrose–Hameroff theory is technically viable. Two experiments (Lewton 2022, Tangerman 2022) presented at the Tucson Science of Consciousness conference merely showed that anaesthetics hastened delayed luminescence, and that under laser excitation, excitation diffused through microtubules further than expected in the absence of anaesthetics. There is no direct evidence for the cellular automata proposed, and microtubules are critically involved in neuronal architecture as well as molecular transport, so functional conflict would result from adding another competing function. Hameroff (2022) cites processes from the pyramidal neuron, down through microtubules, to π-orbital resonances and gravitational space-time effects, but the linkage to microtubules is weak.

 

OOR would force collapse, but it remains unestablished how conscious volition is invoked, because collapse occurs objectively in terms of Penrose's notion of space-time blisters. It remains unclear how these hypothetical objective or "platonic" entities, as Penrose puts it, relate to subjective consciousness or volition. Hameroff (2012), in "How quantum brain biology can rescue conscious free will", attempts an explanation, but this simply comes down to objective OOR control:

 

Orch OR directly addresses conscious causal agency. Each reduction/conscious moment selects particular microtubule states which regulate neuronal firings, and thus control conscious behavior. Regarding consciousness occurring "too late", quantum state reductions seem to involve temporal non-locality, able to refer quantum information both forward and backward in what we perceive as time, enabling real-time conscious causal action. Quantum brain biology and Orch OR can thus rescue free will.

 

For this reason Symbiotic Existential Cosmology remains agnostic about such attempts to invoke unestablished, exotic quantum effects, and instead points to the non-IID nature of brain processes generally, meaning that neurodynamics is a fractal quantum process not required to be adiabatically isolated as decoherence limits of technological quantum computing suggest.

 

QBism and the Conscious Consensus Quantum Reality

 

QBism (von Bayer 2016) is an acronym for "quantum Bayesianism", a founding idea from which it has since moved on. It is a version of quantum physics founded on the conscious expectations of each physicist and their relationships with other physicists. According to QBism, experimental measurements of quantum phenomena do not quantify some feature of an independently existing natural structure. Instead, they are actions that produce experiences in the person or people doing the measurement.

 

“When I take an action on the world, something genuinely new comes out.”

 

This is very similar to the way Symbiotic Existential Cosmology presents consciousness as primary, in the sense that we all experience subjective consciousness and infer the real world through the consensus between conscious observers of our experiences of what we come to call the physical world. So although we know the physical world is necessary for our biological survival – the universe is necessary – we derive our knowledge of it exclusively through our conscious experiences of it.

 

The focus is on how to gain knowledge in a probabilistic universe... In this probabilistic interpretation, collapse of the quantum wave function has little to do with the object observed/measured. Rather, the crux of the matter is change in the knowledge of the observer based on new information acquired through the process of observing. Christopher Fuchs explains: "When a quantum state collapses, it's not because anything is happening physically, it's simply because this little piece of the world called a person has come across some knowledge, and he updates his knowledge… So the quantum state that's being changed is just the person's knowledge of the world, it's not something existent in the world in and of itself."

 

QBism is agnostic about whether there is a world that is structured independently of human thinking. It doesn’t assume we are measuring pre-existing structures, but nor does it pretend that quantum formalism is just a tool. Each measurement is a new event that guides us in formulating more accurate rules for what we will experience in future events. These rules are not just subjective, for they are openly discussed, compared and evaluated by other physicists. QBism therefore sees physicists as permanently connected with the world they are investigating. Physics, to them, is an open-ended exploration that proceeds by generating ever new laboratory experiences that lead to ever more successful, but revisable, expectations of what will be encountered in the future.

 

In QBism the wave function is no longer an aspect of physical reality as such, but a feature of how the observer's expectations will be changed by an act of quantum measurement.

 

The principal thesis of QBism is simply this: quantum probabilities are numerical measures of personal degrees of belief.

 

In the conventional version of quantum theory, the immediate cause of the collapse is left entirely unexplained, or "miraculous", although sometimes assumed to be essentially random. QBism solves the problem as follows. In any experiment the calculated wave function furnishes the prior probabilities for empirical observations that may be made later. Once an observation has been made, new information becomes available to the agent performing the experiment. With this information the agent updates their probability and their wave function, instantaneously and without magic.

 

So in the Wigner's friend experiment, the friend reads the counter while Wigner, with his back turned to the apparatus, waits until he knows that the experiment is over. The friend learns that the wave function has collapsed to the up outcome. Wigner, on the other hand, knows that a measurement has taken place but doesn’t know its result. The wave function he assigns is a superposition of two possible outcomes, as before, but he now associates each with a definite reading of the counter and with his friend’s knowledge of that reading — a knowledge that Wigner does not share. For the QBist there is no problem: Wigner and his friend are both right. Each assigns a wave function reflecting the information available to them, and since their respective compilations of information differ, their wave functions differ too. As soon as Wigner looks at the counter himself or hears the result from his friend, he updates his wave function with the new information, and the two will agree once more—on a collapsed wave function.
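The QBist resolution amounts to nothing more than two agents holding different information and updating on it. A deliberately minimal sketch, in which the "wave function" is just a probability assignment (the assign function and its toy probabilities are illustrative assumptions, not QBist formalism):

```python
# Toy model of the QBist reading of Wigner's friend: each agent's "wave
# function" is a probability assignment reflecting only what that agent
# knows, and agents update it on new information.

def assign(information):
    """Return P(up) given what the agent knows (toy model)."""
    if information is None:          # no knowledge of the outcome yet
        return 0.5                   # superposition: 50/50 expectation
    return 1.0 if information == "up" else 0.0

friend = assign("up")    # the friend has read the counter: collapsed
wigner = assign(None)    # Wigner hasn't looked: still a superposition
print(friend, wigner)    # the two assignments legitimately differ

wigner = assign("up")    # Wigner hears the result and updates
print(friend == wigner)  # they agree once information is shared
```

Both agents were "right" throughout: their assignments differed only because their information differed, which is the whole of the QBist dissolution of the paradox.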

 

According to the conventional interpretation of quantum mechanics, in the Schrödinger's cat experiment, the value of a superimposed wave function is a blend of two states, not one or the other. What is the state of the cat after one half-life of the atom, provided you have not opened the box? The fates of the cat and the atom are intimately entangled. An intact atom implies a living cat; a decayed atom implies a dead cat. It seems to follow that since the atom’s wave function is unquestionably in a superposition so is the cat: it is both alive and dead. As soon as you open the box, the paradox evaporates: the cat is either alive or dead. But while the box is still closed — what are we to make of the weird claim that the cat is dead and alive at the same time? According to QBism, the state of an unobserved atom, or a cat, has no value at all. It merely represents an abstract mathematical formula that gives the odds for a future observation: 0 or 1, intact or decayed, dead or alive. Claiming that the cat is dead and alive is as senseless as claiming that the outcome of a coin toss is both heads and tails while the coin is still tumbling through the air. Probability theory summarises the state of the spinning coin by assigning a probability of 1/2 that it will be heads. So QBism refuses to describe the cat’s condition before the box is opened and rescues it from being described as hovering in a limbo of living death.

 

If the wave function, as QBism maintains, says nothing about an atom or any other quantum mechanical object except the odds for future experimental outcomes, the unperformed experiment of looking in the box before it is opened has no result at all, not even a speculative one. The bottom line: according to the QBist interpretation, the entangled wave function of the atom and the cat does not imply that the cat is alive and dead. Instead, it tells an agent what they can reasonably expect to find when they open the box.

 

This makes QBism compatible with phenomenologists, for whom experience is always “intentional” – i.e. directed towards something – and these intentionalities can be fulfilled or unfulfilled. Phenomenologists ask questions such as: what kind of experience is laboratory experience? How does laboratory experience – in which physicists are trained to see instruments and measurements in a certain way – differ from, say, emotional or social or physical experiences? And how do lab experiences allow us to formulate rules that anticipate future lab experiences?

 

Another overlap between QBism and phenomenology concerns the nature of experiments. Experiments are performances. They’re events that we conceive, arrange, produce, set in motion and witness, yet we can’t make them show us anything we wish. That doesn’t mean there is a deeper reality “out there” – just as, with Shakespeare, there is no “deep Hamlet” of which all other Hamlets we produce are imitations. In physics as in drama, the truth is in performance.

 

However, there is one caveat. We simply don't know whether consciousness itself can be associated only with collapsed probabilities, or whether it is in some way also steeped, even as a complement, in the spooky world of entanglement. Reducing the entirety of physics to collapsed probabilities may therefore not convey the entire picture. Moreover, the degree to which conscious experiences correspond to unstable brain states at the edge of chaos – making phase coherence measurements akin to, or homologous with, quantum measurements – may mean this picture is vastly more complicated than meets the eye.

 

The Born Probability Interpretation and the Notion of Quantum “Randomness”

 

The Born rule provides a link between the mathematical formalism of quantum theory and experiment, and as such is almost single-handedly responsible for practically all predictions of quantum physics (Landsman 2008). The rule projects the superposed vector ψ, expanded in a basis of eigenvectors of an observable in an inner product space, onto the eigenvector corresponding to one of its eigenvalues λi, as a purely algebraic operation.

 

It states that if an observable corresponding to a self-adjoint operator A with discrete spectrum is measured in a system with normalised wave function ψ, then:

 

(1) the measured result will be one of the eigenvalues  λi of  A, and

(2) the probability of measuring a given eigenvalue λi will equal ⟨ψ|Pi|ψ⟩, where Pi is the projection onto the eigenspace of A corresponding to λi.

Equivalently, for a non-degenerate eigenvalue λi with normalised eigenvector |λi⟩, the probability can be written as |⟨λi|ψ⟩|².
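The rule as stated can be illustrated numerically by diagonalising a small Hermitian observable and reading off the probabilities |⟨λi|ψ⟩|² (the particular operator and state below are arbitrary illustrative choices):

```python
import numpy as np

# Born rule sketch: for a Hermitian observable A and a normalised state
# psi, P(lambda_i) = |<lambda_i|psi>|^2 over the eigenvectors of A.

A = np.array([[1.0, 1.0],
              [1.0, -1.0]])           # a Hermitian observable (illustrative)
evals, evecs = np.linalg.eigh(A)      # eigenvalues lambda_i, eigenvectors

psi = np.array([1.0, 0.0])            # normalised state |0>

# probability of each eigenvalue: |<lambda_i|psi>|^2
probs = np.abs(evecs.conj().T @ psi) ** 2
print(dict(zip(np.round(evals, 3), np.round(probs, 3))))
print(probs.sum())                     # probabilities sum to 1
```

The probabilities are a purely algebraic consequence of the state and the eigenbasis, which is all the rule asserts; nothing in the computation says what "selects" the outcome in any one trial.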

 

The rule for calculating probabilities was really just an intuitive guess by the German physicist Max Born. So was Schrödinger's equation itself. Neither was supported by rigorous derivation. It is simply a probability law on the Hilbert space representation (Griffiths 2014) and says nothing about whether quantum uncertainty is purely random or whether there is a hidden variable theory governing it. Broadly speaking, the rule is postulated, as stated above, not proven experimentally, but assumed theoretically in experimental work:

 

It's not clear what exactly is meant by an experimental verification of the Born rule - the Born rule says how the quantum state relates to the probability of measurement, but "the quantum state" itself is a construct of the quantum theory that is rarely, if ever, experimentally accessible other than running repeated tests and inferring which state it was from the results assuming the Born rule is valid.

 

This is because we start initially with a Schrödinger wave equation iħ∂ψ/∂t = Ĥψ, defined by the Hamiltonian energy operator Ĥ, but the wave function is experimentally inaccessible to classical observation, so we have to use the Born probability interpretation to get a particle probability we can sample, e.g. in the pattern of photons on the photographic plate in the two-slit interference experiment in fig 71(f).

 

There are obvious partial demonstrations, but these just lead to averages that statistically approach the probability interpretation, but don’t tell us anything about the underlying process which generates these indeterminacies.

 

Born's rule has been verified experimentally numerous times. However, only the overall averages have been verified. For example, if the prediction is 60% probability, then over a large number of trials the average outcome will approach the predicted value of 60%. This has been verified by measuring particle spin at angle A relative to the angle of its previously known spin, where the prediction is cos²(A/2). These predictions have also been verified with entangled pairs (Bell states), where the corresponding spin prediction is sin²(A/2). What has not been verified is whether the outcomes are due to independent probability, or are guided by some balancing mechanism.
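This statistical character is easy to reproduce: a Monte-Carlo sampler drawing outcomes with probability cos²(A/2) shows the frequencies converging on the Born prediction while revealing nothing about what generates any individual outcome – exactly the limitation described above. A minimal sketch (the trial count and seed are arbitrary choices):

```python
import math
import random

random.seed(42)  # reproducible sampling

def born_frequency(angle, trials=200_000):
    """Monte-Carlo frequency of the 'same spin' outcome when measuring at
    `angle` to the preparation axis; the Born rule predicts cos^2(angle/2)."""
    p = math.cos(angle / 2) ** 2
    hits = sum(random.random() < p for _ in range(trials))
    return hits / trials

a = math.pi / 3
print(born_frequency(a), math.cos(a / 2) ** 2)  # frequencies approach cos^2(A/2)
```

Note that the sampler here simply assumes independent probability; an underlying "balancing mechanism" producing the same long-run averages would be experimentally indistinguishable at this level, which is the point being made.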

 

Landsman (2008) confirms this picture:

 

The pragmatic attitude taken by most physicists is that measurements are what experimentalists perform in the laboratory and that probability is given the frequency interpretation (which is neutral with respect to the issue whether the probabilities are fundamental or due to ignorance). Given that firstly the notion of a quantum measurement is quite subtle and hard to define, and that secondly the frequency interpretation is held in rather low regard in the philosophy of probability, it is amazing how successful this attitude has been!

 

Heisenberg (1958), notes that, in the Copenhagen interpretation, probabilities arise because we look at the quantum world through classical glasses:

 

One may call these uncertainties [i.e. the Born probabilities] objective, in that they are simply a consequence of the fact that we describe the experiment in terms of classical physics; they do not depend in detail on the observer. One may call them subjective, in that they reflect our incomplete knowledge of the world.

 

Landsman (2008) clarifies:

 

In other words, one cannot say that the Born probabilities are either subjective (Bayesian, or due to ignorance) or objective (fundamentally ingrained in nature and independent of the observer). Instead, the situation is more subtle and has no counterpart in classical physics or probability theory: the choice of a particular classical description is subjective, but once it has been made the ensuing probabilities are objective and the particular outcome of an experiment compatible with the chosen classical context is unpredictable. Or so Bohr and Heisenberg say. ... In most interpretations of quantum mechanics, some version of the Born rule is simply postulated.

 

Roger Penrose (foreword vi in Wuppuluri & Doria 2018) notes:

 

Current quantum mechanics, in the way that it is used, is not a deterministic scheme, and probabilistic behaviour is taken to be an essential feature of its workings. Some would contend that such indeterminism is here to stay, whereas others argue that there must be underlying ‘hidden variables’ which may someday restore a fully deterministic underlying ontology. ... Personally, I do not insist on taking a stand on this issue, but I do not think it likely that pure randomness can be the answer. I feel that there must be something more subtle underlying it all.

 

John von Neumann (1951) is highly critical of both physical and algorithmic sources of randomness:

 

We see then that we could build a physical instrument to feed random digits directly into a high-speed computing machine and could have the control call for these numbers as needed. The real objection to this procedure is the practical need for checking computations. If we suspect that a calculation is wrong, almost any reasonable check involves repeating something done before. At that point the introduction of new random numbers would be intolerable. I think that the direct use of a physical supply of random digits is absolutely inacceptable for this reason and for this reason alone. … Anyone who considers arithmetical methods of producing random digits is, of course, in a state of sin. For, as has been pointed out several times, there is no such thing as a random number – there are only methods to produce random numbers, and a strict arithmetic procedure of course is not such a method.

 

Ruth Kastner (2013) claims that the transactional interpretation is unique in giving a physical explanation for the Born rule. Zurek (2005) has made a derivation from entanglement, and Sebens & Carroll (2018) have done so from an Everett perspective, although the latter is not strictly meaningful, since every branch of the multiverse is explored.

 

Because wave interference is measured through particle absorption, experiments have been performed (Sinha et al. 2010) to test for higher-order processes that would violate the pairwise interference implied by the Born interpretation, since Born's rule predicts that quantum interference, as shown in the double-slit diffraction experiment, occurs only between pairs of paths. Using a three-slit apparatus and sampling all combinations of open slits, one can confirm that interference is purely pairwise, so the Born rule applies.
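The pairwise character can be checked algebraically: with amplitudes for each slit, the Sorkin combination P(ABC) − P(AB) − P(AC) − P(BC) + P(A) + P(B) + P(C) vanishes identically under the Born rule, because |a+b+c|² contains only pairwise cross terms. A minimal numerical sketch (random complex amplitudes are an illustrative assumption, not the Sinha et al. apparatus):

```python
import numpy as np

rng = np.random.default_rng(0)
# random complex amplitudes reaching the detector from slits A, B, C
amp = rng.normal(size=3) + 1j * rng.normal(size=3)

def P(*slits):
    """Detection intensity with the given subset of slits open:
    |sum of open-slit amplitudes|^2, i.e. the Born rule."""
    return abs(sum(amp[s] for s in slits)) ** 2

A, B, C = 0, 1, 2
# Sorkin parameter: zero iff interference is pairwise only
epsilon = (P(A, B, C) - P(A, B) - P(A, C) - P(B, C)
           + P(A) + P(B) + P(C))
print(abs(epsilon))   # ~0 up to floating-point error
```

A nonzero Sorkin parameter in the laboratory would falsify the Born rule; the experiments cited bound it near zero.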

 

Other experiments and theories attempt to derive the Born interpretation from more basic quantum properties. Masanes, Galley & Müller (2019) show that Born's rule and the post-measurement state-update can be deduced from the other quantum postulates, referred to as unitary quantum mechanics, together with the assumption that ensembles on finite-dimensional Hilbert spaces are characterised by finitely many parameters. Others, such as Cabello (2018), use graph theory. The movement to regenerate the whole of quantum theory from more basic axioms, e.g. of information, or of probability itself, is called quantum reconstruction, of which QBism is an example (Ball 2017).

 

Zurek (1991, 2003, 2005) has introduced the notions of decoherence, quantum Darwinism and envariance (environment-assisted invariance) to explain the transition from quantum reality to the classical. Decoherence is the way third-party quanta disrupt the off-diagonal wave amplitudes of entanglement, resulting in projection onto the "observed" classical states through exponential damping, as in fig 71c. Quantum Darwinism enriches this picture by developing the notion that some quantum "pointer" states can be more robust to decoherence by replicating their information into the environment. Envariance describes this process in terms of quantum measurement, in which the environment becomes entangled with the apparatus of ideal von Neumann measurement, again promoting the transition to the classical. While these do not deal with the question of hidden variable theories versus randomness of uncertainty, they have been claimed to derive the Born probabilities (Zurek 2005, Harris et al. 2016) through multiple environmental interactions, illustrated by Laplace playing-card probabilities in fig 77d. However, all the approaches to independent derivation of the Born rule, including envariance, have been criticised as logically circular (Schlosshauer & Fine 2005, Landsman 2008).
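The damping of the off-diagonal terms can be sketched with a two-state density matrix whose coherences decay exponentially while the diagonal (classical) probabilities survive – a toy version of decoherence, not Zurek's full derivation (the decay rate is an arbitrary illustrative parameter):

```python
import numpy as np

# Toy decoherence: off-diagonal coherences of the density matrix are
# damped exponentially; the diagonal probabilities are untouched.
rho0 = 0.5 * np.array([[1.0, 1.0],
                       [1.0, 1.0]])    # pure superposition |+><+|

def decohere(rho, gamma_t):
    """Damp the coherences by exp(-gamma * t), leaving the diagonal intact."""
    out = rho.copy()
    out[0, 1] *= np.exp(-gamma_t)
    out[1, 0] *= np.exp(-gamma_t)
    return out

print(np.round(decohere(rho0, 10.0), 4))  # approaches diag(0.5, 0.5)
```

The end state is a classical mixture with the same diagonal probabilities, which is why decoherence explains the disappearance of interference but not, by itself, the selection of a single outcome.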

 

Illustrating the difficulty of the problem, John Wheeler in 1983 proposed that statistical regularities in the physical world might emerge from such a situation, as they sometimes do from unplanned crowd behaviour (Ball 2019):

 

“Everything is built higgledy-piggledy on the unpredictable outcomes of billions upon billions of elementary quantum phenomena", Wheeler wrote. But there might be no fundamental law governing those phenomena — indeed, he argued, that was the only scenario in which we could hope to find a self-contained physical explanation, because otherwise we’re left with an infinite regression in which any fundamental equation governing behavior needs to be accounted for by some even more fundamental principle. “In contrast to the view that the universe is a machine governed by some magic equation, … the world is a self-synthesizing system,” Wheeler argued. He called this emergence of the lawlike behavior of physics “law without law.”

 

However, the probability interpretation leads to the incorrect notion that quantum reality is somehow just a random process. Processes like radioactive decay are commonly treated as random by default, simply because they are indeterminate and do not obey a fixed law. Study smarter, for example, states:

 

Radioactive decay is a random process, meaning it is impossible to predict when an atom will emit radiation. By the random nature of radioactive decay, we mean that for every atom, there are known probabilities that they will emit radiation (and thus decay radioactively) in the next second. Still, the fact that all we have is a probability makes this a random process. We can never determine ahead of time if an atom will decay in the next second or not. This is just like throwing a (fair, cubic) dice every second.

 

In effect, this equates the quantum tunnelling of individual atoms with a dice throw – a chaotic classical process with geometric constraints – thereby conflating quantum uncertainty with classical chaotic butterfly-effect systems, which may themselves also have a quantum sensitivity.

 

Santha & Vazirani (1986) note:

 

Unfortunately, the available physical sources of randomness (including zener diodes and geiger counters) are imperfect. Their output bits are not only biased but also correlated.

 

Fig 77d: (1,2) Exponential decay of erratic radioactivity as the population of radioactive atoms becomes depleted. (3) Zener diode avalanche output. (4) Graph of the chaotic logistic iteration displaying interval-filling ergodicity in the frequency graph (4b) and point plot (4c). One can make a Born interpretation of this as a pseudo-wave function, showing the relative probabilities of finding an iteration point at 0.5 and 0.9 by normalising the function over its integral, yet the process is deterministic. Therefore a probability interpretation does not imply randomness. (5,6) Quasi-random and pseudo-random or random 2-D distributions. (7) Sketch of the Born formula derivation using Laplace probabilities of concealed and revealed playing cards (Zurek 2005).

 

Geiger counters measure quantum tunnelling in individual radioactive nuclei, which reflects quantum uncertainty, but the emissions accumulate very slowly over long periods, so the process is not random but one of erratic exponential decay over time. Zener diodes at high voltage undergo avalanche breakdown, a solid-state effect that lets through a flood of electrons with a fixed relaxation time, so again the diode is not directly measuring uncertainty, but its compounded effects.
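This compounding can be illustrated with a toy simulation – a minimal sketch, assuming a fixed per-step tunnelling probability for each atom (the population size, probability and step count are arbitrary illustrative choices). Individually indeterminate decay events nevertheless compound into the smooth exponential depletion of fig 77d (1,2):

```python
import random

random.seed(42)                      # fixed seed for reproducibility

N0, p, steps = 10_000, 0.05, 60      # population, per-step decay probability, duration

alive = [True] * N0
survivors = [N0]
for _ in range(steps):
    # each surviving atom independently "tunnels" (decays) with probability p
    alive = [a and random.random() >= p for a in alive]
    survivors.append(sum(alive))

# the erratic individual events compound into smooth exponential depletion
expected = [N0 * (1 - p) ** t for t in range(steps + 1)]
```

The simulated survivor curve tracks the smooth law N(t) = N0·(1 − p)^t to within statistical fluctuation, even though no individual decay is predictable.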

 

This is remarkably similar to chaotic molecular systems displaying a butterfly effect. Consider a very simple algorithmic chaotic system such as the logistic iteration x_{n+1} = r·x_n(1 − x_n), modelling rabbit reproduction in a finite pasture (May 1976), where x_1 is chosen on (0,1) as a seed. At r = 4 the chaotic phase is ergodic and asymptotic to an interval-filling stochastic process. This is shown in fig 77d, where the iteration generates an asymptotic frequency distribution, which has been normalised over its integral to produce a probability function playing the same role as the squared wave function. This gives a probability interpretation parallel to the Born rule for an elementary deterministic discrete iteration, confirming that the Born rule does not in any way imply that the basis of quantum uncertainty lies in randomness.
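The normalisation described above can be reproduced in a few lines – a minimal sketch (the seed and bin count are arbitrary choices), iterating x_{n+1} = 4x_n(1 − x_n) and normalising the bin frequencies over the integral, which converges on the known invariant density 1/(π√(x(1 − x))) even though every step is strictly deterministic:

```python
import math

# Deterministic logistic iteration x_{n+1} = 4 x_n (1 - x_n) at r = 4 (May 1976)
x = 0.1234                  # arbitrary seed on (0, 1)
n, bins = 200_000, 50
counts = [0] * bins
for _ in range(n):
    x = 4.0 * x * (1.0 - x)
    counts[min(int(x * bins), bins - 1)] += 1

# normalise the frequency histogram over its integral: a Born-like density
density = [c * bins / n for c in counts]

# known invariant density of the r = 4 logistic map
def rho(x):
    return 1.0 / (math.pi * math.sqrt(x * (1.0 - x)))
```

The empirical density near 0.9 exceeds that near 0.5, matching the analytic curve, so a Born-style probability reading emerges from a fully deterministic rule.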

 

The point distribution (4c) shows in closer detail that this deterministic dynamical process displays pseudo-random features akin to (6), modulated by the overall probability distribution (4b), but there is a subtle anomaly in (4c) in the horizontal strip neighbouring y = 0.75. This does not appear in the probability distribution, which is asymptotically smooth for large n. The reason is that the iteration has two fixed-point solutions to x = f(x): 0 and 0.75, the latter a chaotic repelling point. There are 2^n periodic points of period n, so in a classical chaotic iteration the unstable periodic points are dense, but they have measure zero, as a countable subset of [0, 1], so their probability of occurrence is zero; however their neighbouring points can be seen in (4c) as stationary butterfly-effect exponential trajectories neighbouring the fixed point. None of this can happen in a quantum system, and this detail is not accessible in the quantum situation either, because we have recourse only to the Born rule, so the interference experiment reflects only what we see in (4b) – the macroscopic experimental arrangement, as a statistical particle approximation to the wave power – not the underlying process governing the individual outcomes. In fig 57, we see that the classical chaotic stadium billiard (1) becomes scarred by such repelling periodic orbits in the quantum situation, although open quantum systems like the quantum kicked top (2) become entangled.
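The repelling character of the fixed point at 0.75 can be checked directly: since f′(x) = 4 − 8x gives |f′(0.75)| = 2 > 1, a seed displaced by ε departs with its distance roughly doubling each step – a minimal sketch:

```python
# Fixed points of f(x) = 4x(1 - x) solve x = f(x): x = 0 and x = 0.75.
f = lambda x: 4.0 * x * (1.0 - x)

# f'(x) = 4 - 8x, so |f'(0.75)| = 2 > 1: the 0.75 fixed point repels,
# and a nearby seed departs with its distance roughly doubling each step.
eps = 1e-12
x = 0.75 + eps
dists = []
for _ in range(30):
    x = f(x)
    dists.append(abs(x - 0.75))
```

After ten further steps the distance has grown by a factor close to 2^10 ≈ 1024, the stationary exponential trajectory visible as the strip near y = 0.75 in (4c).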

 

Algorithmic processes approximating randomness for experimental purposes are classified as either pseudo-random, or quasi-random.

 

Pseudo-random numbers more closely simulate randomness, as in the pseudorandom number generator (PRNG), or deterministic random bit generator (DRBG), used in computational random functions. An example is the linear congruential generator A_{n+1} = (Z·A_n + I) mod M, where A_n is the previous pseudo-random number generated, Z is a constant multiplier, I is a constant increment, and M is a constant modulus.
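A minimal sketch of such a linear congruential generator, using the classic constants found in some C-library rand() implementations (the particular constants are purely illustrative):

```python
# Linear congruential generator A_{n+1} = (Z * A_n + I) mod M, scaled to [0, 1).
# Constants are the classic values used in some C-library rand() implementations;
# the choice here is purely illustrative.
Z, I, M = 1103515245, 12345, 2**31

def lcg(seed, n):
    a, out = seed, []
    for _ in range(n):
        a = (Z * a + I) % M
        out.append(a / M)   # scale the integer state onto [0, 1)
    return out

sample = lcg(42, 5)   # an identical seed always reproduces the identical sequence
```

The output passes casual inspection as random, yet the whole sequence is fixed by the seed – the deterministic character that distinguishes pseudo-randomness from quantum indeterminacy.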

 

Quasi-random processes, also called low-discrepancy sequences, approach an even distribution more rapidly, but less faithfully than true randomness, because they lack larger improbable fluctuations. They are generated by a number of algorithms, including Faure, Halton and Sobol, each of which has a short arithmetic computational procedure.
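As an illustration, the Halton sequence (one member of the low-discrepancy family mentioned) needs only a few lines – a minimal sketch pairing bases 2 and 3 for a 2-D point set:

```python
def halton(i, base):
    # Radical-inverse (van der Corput) digit reversal of i in the given base.
    f, r = 1.0, 0.0
    while i > 0:
        f /= base
        r += f * (i % base)
        i //= base
    return r

# A 2-D low-discrepancy point set pairs two coprime bases, conventionally 2 and 3.
points = [(halton(i, 2), halton(i, 3)) for i in range(1, 9)]
```

The points fill the unit square far more evenly than random draws of the same length, but, as noted, without the occasional large clusters and gaps true randomness produces.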

 

This leaves us in a position where the assumption of quantum randomness remains unestablished and in which a complex many-body non-local interaction, given the vast number of particles in the universe, could approximate the Born interpretation to the limits of any experimental error.

 

Summarising the interplay between the notion of “random” probabilistic wave function collapse and hidden variable theories, Symbiotic Existential Cosmology favours the latter on the basis that:

 

(1) The verification of Bell entanglement was a confirmation of the EPR claim and Einstein’s quote:

 

God does not play dice with the universe.

 

(2) The transcausal view of quantum transactions as a complex underlying hidden-variable process, which is also shared by (3) superdeterminism, violating statistical independence and involving additional non-local processes, (4) non-IID processes in biology not converging to the classical, and (5) theories in which quantum measurement may contradict objective reality through process entanglements extending beyond Bell state entanglements.

 

The intrinsic difficulty that all such theories since Bohm’s pilot wave formulation face is that expectations are for a master equation like Schrödinger’s, when any such system is likely to be a vastly complex non-local many-body problem. This criticism of the assumption of randomness applies equally to molecular dynamics, mutational evolution and neurodynamics.

 

The Neuroscience Perspective

 

Complementing this description of the quantum world at large is the actual physics of how the brain processes information. By contrast with a digital computer, the brain uses both pulse-coded action potentials and continuous gradients in an adaptive parallel network. Conscious states tend to be distinguished from subconscious processing by virtue of coherent phase fronts of the brain’s wave excitations. Phase coherence of beats between wave functions (fig 71c) is also the basis of quantum uncertainty.

 

In addition, the brain uses edge-of-chaos dynamics – involving the butterfly effect, arbitrary sensitivity to small fluctuations in boundary conditions – and the creation of strange attractors to modulate wave processing, so that the dynamics doesn’t become locked into a given ordered state and can thus explore the phase space of possibilities before making a transition to a more ordered state representing the perceived solution. Self-organised criticality is also a feature, as is neuronal threshold tuning. Feedback between the phase of brain waves on the cortex and the discrete action potentials of individual pyramidal cells, in which the phase is used to determine the timing of action potentials, creates a feedback between the continuous and discrete aspects of neuronal excitation. These processes, in combination, may effectively invoke a state where the brain is operating as an edge-of-chaos quantum computer, making internal quantum measurements of its own unstable dynamical evolution as cortical wave excitons, complemented by discrete action potentials at the axonal level.

 

Chaotic sensitivity, combined with related phenomena such as stochastic resonance (Liljenström et al. 2005), means that fractal scale-traversing handshaking (Grosu 2023) can occur between critically poised global brain states, neurons at threshold, ion channels and the quantum scale, in which quantum entanglement of excitons can occur (King 2014). At the same time these processes underpin why there is ample room in physical brain processing for quantum uncertainty to become a significant factor in unstable brain dynamics, fulfilling Eccles’ (1986) notion that this can explain a role for consciousness without violating any classically causal processes.
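Stochastic resonance, cited above as one scale-traversing handshaking mechanism, can be illustrated with a toy threshold detector – a minimal sketch, with the signal amplitude, threshold and noise levels as purely illustrative choices: a subthreshold periodic input never fires on its own, but added noise of suitable amplitude lifts it over threshold at the signal peaks:

```python
import math
import random

random.seed(7)                        # fixed seed for reproducibility

def threshold_crossings(noise_sd, threshold=1.0, amp=0.8, steps=2000):
    """Count upward crossings of a neuronal-style threshold by a noisy sine."""
    crossings, prev = 0, 0.0
    for t in range(steps):
        v = amp * math.sin(2 * math.pi * t / 100) + random.gauss(0.0, noise_sd)
        if prev < threshold <= v:
            crossings += 1
        prev = v
    return crossings

silent = threshold_crossings(0.0)     # subthreshold signal alone never fires
tuned = threshold_crossings(0.3)      # moderate noise lifts the peaks over threshold
```

With no noise the 0.8-amplitude signal never reaches the 1.0 threshold, so no events occur; with moderate noise, threshold crossings cluster at the signal peaks, so small-scale fluctuations become readable at the level of discrete firing.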

 

This means that brain function is an edge-of-chaos quantum dynamical system which, unlike a digital computer, is far from being a causally deterministic process that would physically lock out any role for conscious decision-making, but leaves open a wide scope for quantum uncertainty, consistent with a role for consciousness in tipping critical states. The key to the brain is thus its quantum physics, not just its chemistry and biology. This forms a descriptive overview of possible processes involved rather than an empirical proof. In the face of the failure of promissory materialistic neuroscience (Popper & Eccles 1984) to demonstrate physical causal closure of the universe in the context of brain function, Occam’s razor cuts in the direction which avoids conflict with empirical experience of conscious volitional efficacy over the physical universe.

 

 

Fig 78: (1) Edge of chaos transitions model of olfaction (Freeman 1991). (2) Stochastic resonance as a hand-shaking process between the ion channel and whole brain states (Liljenström & Svedin 2005). (3) Hippocampal place maps (erdiklab.technion.ac.il). Hippocampal cells have also been shown to activate in response to desired locations in an animal's anticipated future that they have observed but not visited (Olafsdottir et al. 2015). (4) Illustration of micro-electrode recordings of local field potential (LFP) phase precession enabling correct spatial and temporal encoding via discrete action potentials in the hippocampus (Qasim et al. 2021). Delta, theta, alpha, beta and gamma bands appear to have distinct functions and dynamics, and metastable and spiral waves have been noted as having neurocognitive function (Roberts et al. 2019, Xu et al. 2023). (5) Living systems are dynamical systems. They show ensembles of eigenbehaviors, which can be seen as unstable dynamical tendencies in the trajectory of the system. Francisco Varela’s neurophenomenology (Varela 1996, Rudrauf et al. 2003) is a valid attempt to bridge the hard and easy problems, through a biophysics of being, by developing a complementary subjective account of processes corresponding to objective brain processing. While these efforts help to elucidate the way brain states correspond to subjective experiences, using an understanding of resonant interlocking dynamical systems, they do not of themselves solve the subjective nature of the hard problem. (6) Joachim Keppler's (2018, 2021, James et al. 2022) view of conscious neural processing uses the framework of stochastic electrodynamics (SED), a branch of physics that affords a look behind the uncertainty of quantum field theory (QFT), to derive an explanation of the neural correlates of consciousness, based on the notion that all conceivable shades of phenomenal awareness are woven into the frequency spectrum of a universal background field, called the zero-point field (ZPF), implying that the fundamental mechanism underlying conscious systems rests upon access to information available in the ZPF. This gives an effective interface description of how dynamical brain states correspond to subjective conscious experiences, but like the other dynamical descriptions, does not solve the hard problem itself of why the zero-point field becomes subjective.

 

Diverse Theories of Consciousness

 


Fig 79: Overview of Theories of Consciousness reviewed, with rough comparable positions. Field or field-like theories are in blue/magenta; explicit AI support magenta/red. Horizontal positions guided by specific author statements.

Section links: GNW, ART, DQF, ZPF, AST, CEMI, FEM, IIT, PEM, ORCH, GRT, CFT

 

Descriptions of the actively conscious brain revolve around extremely diverse conceptions. The neural network approach conceives of the brain as a network of neurons connected by axonal-dendritic synapses, with action potentials as discrete impulses travelling down the long pyramidal cell axons through which activity is encoded as a firing rate. In this view the notions of “brain waves” as evidenced in the EEG (electroencephalogram) and MEG (magnetoencephalogram) are just the collective averages of these spikes, having no function in themselves, being just an accessory low intensity electromagnetic cloud associated with neuronal activity, which happens to generate a degree of coupled synchronisation through the averaged excitations of the synaptic web. At the opposite extreme are field theories of the conscious brain in which fields have functional importance in themselves and help to explain the “binding” problem of how conscious experiences emerge from global brain dynamics.

 

Into the mix are also abstract theories of consciousness such as Tononi and Koch’s (2015) IIT or integrated information theory and Graziano’s (2016) AST or attention schema theory, which attempt to formulate an abstract basis for consciousness that might arise in biological brains or synthetic neural networks given the right circumstances.

 

The mushroom experience that triggered Symbiotic Existential Cosmology caused a reversal of world view from my original point of view, King (1996), looking for the neurodynamic and quantum basis of consciousness in the brain, to realising that no such theory is possible because a pure physicalist theory cannot bridge the hard problem explanatory gap in the quantum universe, due to the inability to demonstrate causal closure.

 

No matter how fascinating and counter-intuitive the complexities of the quantum, physical and biological universe are, no purely physicalist description of the neurodynamics of consciousness can possibly succeed, because it is scientifically impossible to establish a theoretical proof, or empirical demonstration, of the causal closure of the physical universe in the context of neurodynamics. The bald facts are that, no matter to what degree we use techniques from optogenetics, through ECoG, to direct cell recording, there is no hope within the indeterminacies of the quantum universe of making an experimental verification of classical causal closure. Causal closure of the physical universe thus amounts to a formally undecidable cosmological proposition from the physical point of view, which is heralded as a presumptive 'religious' affirmative belief without scientific evidence, particularly in neuroscience.

 

The hard problem of consciousness is thus cosmological, not biological, or neurodynamic alone. Symbiotic Existential Cosmology corrects this by a minimal extension of quantum cosmology by adding the axiom of primal subjectivity, as we shall see below.

 

In stark contrast to this, the subjective experiential viewpoint perceives conscious volition over the physical universe as an existential certainty that is necessary for survival. When any two live human agents engage in a frank exchange of experiences and communications, such as my reply to you all now, which evidences my drafting of a consciously considered opinion and intentionally sending it to you in physical form, this can be established beyond reasonable doubt by mutual affirmation of our capacity to consciously and intentionally respond with a physical communication. This is the way living conscious human beings have always viewed the universe throughout history and it is a correct veridical empirical experience and observation of existential reality, consistent with personal responsibility, criminal and civil law on intent, all long-standing cultural traditions and the fact that 100% of our knowledge of the physical world comes through our conscious experience of it. Neuroscientists thus contradict this direct empirical evidence at their peril. Hoffman, Prakash & Prentner (2023) have also produced a decisive critique of all physical theories of consciousness, invoking this point of view.

 

However there is still a practical prospect of refining our empirical understanding of the part played by neurodynamics in generating subjective conscious experience and volition over the physical universe through current and upcoming techniques in neuroscience. What these can do is demonstrate experimentally the nature of the neurodynamics occurring when conscious experiences are evoked – the so-called "neural correlate of consciousness" – forming an interface with conscious experience and our ensuing decision-making actions.

 

To succeed at this scientific quest, we need to understand how quantum cosmology enters into the formation of biological tissues. The standard model of physics is symmetry-broken between the colour, weak and EM forces and gravity, which ensures that there are a hundred positively charged atomic nuclei, with orbital electrons having both the periodic quantum properties of the s, p, d & f orbitals and non-linear EM charge interactions, centred on first-row covalent H-CNO modified by P & S and light ionic and transition elements, as shown in fig 51, to form a fractal cooperative bonding cascade from organic molecules like the amino acids and nucleotides, through globular proteins and nucleic acids, to complexes like the ribosome and membrane, to cell organelles, cells and tissues. These constitute an interactive quantum form of matter – the most exotic form of matter in existence – whose negentropic thermodynamics in living systems is vastly more challenging than the quantum properties of solid-state physics and its various excitons and quasi-particles. Although these are now genetically and enzymatically encoded, the underlying fractal dynamics is a fundamental property of cosmological symmetry-breaking and abiogenesis. It introduces a cascade of quantum effects, in protein folding, allosteric active sites with tunnelling, membrane ionic and electron transport and ultimately neurodynamics. Furthermore biological processes are non-IID – not constituting identical independently distributed quantum measurements – so they do not converge to the classical description and remain collectively quantum in nature throughout, spanning all or most aspects of neuronal excitability and metabolism.

 

This means that current theories of the interface between CNS neurodynamics and subjective conscious volition are all manifestly incomplete and qualitatively and quantitatively inadequate to model or explain the brain-experience interface. Symbiotic Existential Cosmology has thus made a comprehensive review of these, including GNW (Dehaene et al.), ART (Grossberg), DQF (Freeman & Vitiello), ZPF (Keppler), AST (Graziano), CEMI (McFadden), FEM (Solms & Friston), IIT (Tononi & Koch), PEM (Poznanski et al.), as well as outliers like ORCH (Hameroff & Penrose). The situation facing TOEs of consciousness is, despite experimental progress, more parlous than that of physical TOEs – from supersymmetric, superstring and membrane theories to quantum loop gravity – which as yet show no signs of unification over multiple decades. In both fields, this requires a foundational rethink and a paradigm shift. Symbiotic Existential Cosmology provides this to both fields simultaneously.

 

To understand this biologically, we need to understand that the nature of consciousness as we know it, and all its key physical and biological features, arose in a single topological transition in the eucaryote endosymbiosis, when respiration became sequestered in the mitochondria and the cell membrane became freed for edge-of-chaos excitation and receptor-based social signalling, through the same processes that are key to human neurodynamics today. This in turn led to the action potential, via the flagellar escape reaction, and to the graded membrane potentials and neurotransmitter receptor-based synaptic neural networks we see in neuronal excitation. It took a further billion years before these purposive processes enabling sentience at the cellular level – the quantum processes we now witness in vision, audition, olfaction and feeling sensation – became linked in the colonial neural networks illustrated by hydra, and later the more organised brains of arthropods, vertebrates and cephalopods. This means that a purely neural network view of cognition and consciousness is physically inadequate at the foundation. Moreover the brain derives its complexity not just from our genome, which is vastly too small to generate the brain’s complexity, but from interactive processes of cell migration in the developing brain, which form a self-organising system through mutual neuronal recognition by neurotransmitter type and mutual excitation/inhibition.

 

Of these theories, GNW is the closest to a broad-brush, empirically researched account. Neural network theories like Grossberg’s ART generate crude necessary but insufficient conditions for consciousness, because they lack almost all the biological principles involved. Pure abstract theories like IIT do likewise. Specialised quantum theories like Hameroff & Penrose's are untenable both in current biology and fundamentally in evolutionary terms, because they have been contrived as a quantum backpack of oddball quantum processes, such as quantum microtubular CAs, not confluent with evolutionary processes, using increasingly contrived speculation to make up for inadequacies, e.g. in linking cellular processes through condensates. ORCH is also objective reduction, so it cannot address conscious volition.

 

There is good empirical support for two processes in brain dynamics: (1) edge-of-chaos transitions from a higher-energy, more disordered dynamic to a less disordered attractor dynamic, which is also the basis of annealing in neural network models of a potential energy landscape; and (2) phase tuning between action potential timing in individual neurons and continuous local potential gradients, forming an analogue of quantum uncertainty-based measurement of wave beats.

 

These mean that field and field-like theories such as ZPF, DQF and PEM all have a degree of plausibility complementing bare neural network descriptions. However all these theories run into the problem of citing preferred physical mechanisms over the complex quantum system picture manifest in tissue dynamics. ZPF cites the zero-point field, effectively conflating a statistical semi-classical treatment of QED with subjective consciousness as the quantum vacuum. It cites neurotransmitter molecular resonances at the synapse and periodic resonances in the brain as providing the link. DQF is well grounded in Freeman dynamics, but cites water molecule structures, which are plausible but accessory and not easy to test. PEM cites quasi-polaritonic waves involving interaction between charges and dipoles, with an emphasis on delocalised orbitals, which are just one of many quantum-level processes prominently involved in respiration and photosynthesis, and makes a claim to "microfeels" as the foundation of a definition of precognitive information below the level of consciousness. It also restricts itself to multiscale thermodynamic holonomic processes, eliminating the quantum level, self-organised criticality and fractality.

 

Philosopher wins 25-year-long bet with neuroscientist (Lenharo 2023): In 1998, neuroscientist Christof Koch bet philosopher David Chalmers that the mechanism by which the brain’s neurons produce consciousness would be discovered by 2023. Both scientists agreed publicly on 23 June, at the annual meeting of the Association for the Scientific Study of Consciousness, that it is an ongoing quest – and declared Chalmers the winner.

 

What ultimately helped to settle the bet was a study testing two leading hypotheses about the neural basis of consciousness (Cogitate Consortium et al. 2023). Consciousness is everything that a person experiences – what they taste, hear, feel and more. It is what gives meaning and value to our lives, Chalmers says. However, despite a vast effort, researchers still don’t understand how our brains produce it. “It started off as a very big philosophical mystery,” Chalmers adds. “But over the years, it’s gradually been transmuting into, if not a ‘scientific’ mystery, at least one that we can get a partial grip on scientifically.” The study tested two of the leading hypotheses: integrated information theory (IIT) and global neuronal workspace theory (GNWT). IIT proposes that consciousness is a ‘structure’ in the brain formed by a specific type of neuronal connectivity that is active for as long as a certain experience, such as looking at an image, is occurring. This structure is thought to be found in the posterior cortex, at the back of the brain. GNWT, by contrast, suggests that consciousness arises when information is broadcast to areas of the brain through an interconnected network. The transmission, according to the theory, happens at the beginning and end of an experience and involves the prefrontal cortex, at the front of the brain. The results didn’t perfectly match either of the theories. This has since resulted in an open letter treating IIT, in particular, as pseudoscience (Fleming et al. 2023, Lenharo 2023b).

 

The position of Symbiotic Existential Cosmology is that none of these theories, and particularly those that depend on pure physical materialism, have any prospect of solving the hard problem and particularly the hard problem extended to volition. Symbiotic Existential Cosmology therefore adopts a counter strategy to add an additional axiom to quantum cosmology that associates primal subjectivity and free will with an interface in each quantum, where “consciousness” is manifested in the special relativistic space-time extended wave function and "free will" is manifested in the intrinsic uncertainty of quantum collapse to the particle state. This primal subjectivity exists in germinal forms in unstable quantum-sensitive systems such as butterfly effect systems and becomes intentional consciousness as we know it in the eucaryote transition.

 

This transforms the description of conscious dynamics into one in which subjectivity is compliant with determined perceptual and cognitive factors but utilises the brain state as a contextual environmental filter to deal with states of existential uncertainty threatening the survival of the organism. This is similar to AST, but without the utopian artificial intelligence emphasis it shares with others such as ART, IIT and PEM. Key environmental survival questions are both computationally intractable and formally uncomputable, because the tiger that may pounce is also a conscious agent who can adapt their volitional strategy to unravel any computational “solution”. This provides a clean physical cut, in which subjective consciousness remains compliant with the determined boundary conditions realised by the cognitive brain, but has decision-making ability in situations where cellular or brain dynamics becomes unstable and quantum-sensitive. No causal conflict thus arises between conscious intent, restricted to uncertainty, and physical causes related to the environmental constraints. It invokes a model of quantum reality where uncertainty is not merely random, but is a function of unfolding environmental uncertainty as a whole. This is the survival advantage that cellular consciousness fixed in evolution through anticipating existential crises, and has conserved ever since, complementing cerebral cognition in decision-making. This is reflected experientially in how we make intuitive "hunch" overall decisions, and physically in certain super-causal forms of the transactional QM interpretation and super-determinism, both of which can have non-random quasi-ergodic hidden variable interpretations and are compatible with free will.

 

The final and key point is that Symbiotic Existential Cosmology is biospherically symbiotic. Through this, the entire cosmology sees life and consciousness as the ultimate climactic crisis of living complexity, interactively consummating the universe, inherited from cosmological symmetry-breaking, in what I describe as conscious paradise on the cosmic equator in space-time. Without the symbiosis factor, humanity as we stand will not survive a self-induced Fermi extinction, caused by a mass extinction of biodiversity, so the cosmology is both definitively and informatively accurate and redemptive in the long-term survival of the generations of life over evolutionary time scales.

 

Susan Pockett (2013) explains the history of these diverging synaptic and field theoretic views:

 

Köhler (1940) did put forward something he called “field theory”. Köhler only ever referred to electric fields as cortical correlates of percepts. His field theory was a theory of brain function. Lashley’s test was to lay several gold strips across the entire surface of one monkey’s brain, and insert about a dozen gold pins into a rather small area of each hemispheric visual cortex of another monkey. The idea was that these strips or pins should short-circuit the hypothesized figure currents, and thereby (if Köhler’s field theory was correct) disrupt the monkeys’ visual perception. The monkeys performed about as well on this task after insertion of the pins or strips as they had before (although the one with the inserted pins did “occasionally fail to see a small bit of food in the cup”) and Lashley felt justified in concluding from this that “the action of electric currents, as postulated by field theory, is not an important factor in cerebral integration.” Later Roger Sperry did experiments similar to Lashley’s, reporting similarly negative results.

 

Intriguingly, she notes that Libet, whom we shall meet later, despite declaring that the readiness potential preceded consciousness, also proposed a near-supernatural field theory:

 

Libet proposed in 1994 that consciousness is a field which is “not ... in any category of known physical fields, such as electromagnetic, gravitational etc” (Libet 1994). In Libet’s words, his proposed Conscious Mental Field “may be viewed as somewhat analogous to known physical fields ... however ... the CMF cannot be observed directly by known physical means.”

 

Pockett (2014) describes what she calls “process theories”:

 

The oldest classification system has two major categories, dualist and monist. Dualist theories equate consciousness with abstracta. Monist (aka physicalist) theories equate it with concreta. A more recent classification (Atkinson et al., 2000) divides theories of consciousness into process theories and vehicle theories: it says “Process theories assume that consciousness depends on certain functional or relational properties of representational vehicles, namely, the computations in which those vehicles engage.” The relative number of words devoted to process and vehicle theories in this description hints that at present, process theories massively dominate the theoretical landscape. But how sensible are they really?

 

She then discusses both Tononi & Koch’s (2015) IIT integrated information theory and Chalmers' (1996) multi-state “information spaces”, and lists the following objections:

 

First, since information is explicitly defined by everyone except process theorists as an objective entity, it is not clear how process theorists can reasonably claim either that information in general, or that any subset or variety of information in particular, is subjective. No entity can logically be both mind-independent and the very essence of mind. Therefore, when process theorists use the word “information” they must be talking about something quite different from what everyone else means by that word. Exactly what they are talking about needs clarification. Second, since information is specifically defined by everybody (including Chalmers) as an abstract entity, any particular physical realization of information does not count as information at all. Third, it is a problem at least for scientists that process theories are untestable. The hypothesis that a particular brain process correlates with consciousness can certainly be tested empirically. But the only potentially testable prediction of theories that claim identity between consciousness and a particular kind of information or information processing is that this kind of information or information processing will be conscious no matter how it is physically instantiated.

 

These critiques apply to a broad range of the theories of consciousness we have explored, including many in the figure above that do not limit themselves to the neural correlates of consciousness.

 

Theories of consciousness have, in the light of our understanding of brain processes gained from neuroscience, become heavily entwined with the objective physics and biology of brain function. Michel & Doerig (2021), in reviewing local and global theories of consciousness, summarise current thinking, illustrating this dependence on neuroscience for understanding the enigmatic nature of consciousness.

 

Localists hold that, given some background conditions, neural activity within sensory modules can give rise to conscious experiences. For instance, according to the local recurrence theory, reentrant activity within the visual system is necessary and sufficient for conscious visual experiences. Globalists defend that consciousness involves the large-scale coordination of a variety of neuro-cognitive modules, or a set of high-level cognitive functions such as the capacity to form higher-order thoughts about one’s perceptual states. Localists tend to believe that consciousness is rich, that it does not require attention, and that phenomenal consciousness overflows cognitive access. Globalists typically hold that consciousness is sparse, requires attention, and is co-extensive with cognitive access.

 

According to local views, a perceptual feature is consciously experienced when it is appropriately represented in sensory systems, given some background conditions. As localism is a broad family of theories, what “appropriately” means depends on the local theory under consideration. Here, we consider only two of the most popular local theories: the micro-consciousness theory, and the local recurrence theory, focusing on the latter. According to the micro-consciousness theory “processing sites are also perceptual sites”. This theory is extremely local. The simple fact of representing a perceptual feature is sufficient for being conscious of that feature, given some background conditions. One becomes conscious of individual visual features before integrating them into a coherent whole. According to the local recurrence theory, consciousness depends on "recurrent" activity between low- and higher-level sensory areas. Representing a visual feature is necessary, but not sufficient for being conscious of it. The neural vehicle carrying that representation must also be subject to the right kind of recurrent dynamics. For instance, consciously perceiving a face consists in the feedforward activation of face selective neurons, quickly followed by a feedback signal to lower-level neurons encoding shape, color, and other visual features of the face, which in turn modulate their activity as a result.

 

The authors also stress post-dictive effects as a necessary non-local condition for consciousness which may last a third of a second after an event.

 

In postdictive effects, conscious perception of a feature depends on features presented at a later time. For instance, in feature fusion two rapidly successive stimuli are perceived as a single entity. When a red disk is followed by a green disk after 20ms, participants report perceiving a single yellow disk, and no red or green disk at all. This is a postdictive effect. Both the red and green disks are required to form the yellow percept. The visual system must store the representation of the first disk until the second disk appears to integrate both representations into the percept that subjects report having. Many other postdictive effects in the range of 10-150ms have been known for decades and are well documented. Postdictive effects are a challenge for local theories of consciousness. Features are locally represented in the brain but the participants report that they do not see those features.
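The buffering logic implied here (the first disk must be held until the second arrives, before a percept is committed) can be caricatured as a temporal integration window. The Python sketch below is a deliberately naive illustration, not a model from the perception literature: the 50 ms window and the red + green to yellow fusion rule are illustrative assumptions.

```python
def report_percepts(stimuli, window_ms=50):
    """Group stimuli into percepts using a temporal integration window.

    stimuli: list of (onset_ms, colour) pairs, sorted by onset time.
    Stimuli whose onsets fall within window_ms of the first stimulus in
    the current group are fused into a single percept; red + green fuse
    to yellow, mimicking the feature-fusion report described above.
    """
    FUSE = {frozenset({"red", "green"}): "yellow"}
    groups, current = [], [stimuli[0]]
    for onset, colour in stimuli[1:]:
        if onset - current[0][0] <= window_ms:
            current.append((onset, colour))   # buffered: falls in the window
        else:
            groups.append(current)            # window closed: commit percept
            current = [(onset, colour)]
    groups.append(current)
    percepts = []
    for group in groups:
        colours = frozenset(c for _, c in group)
        percepts.append(FUSE.get(colours, "+".join(sorted(colours))))
    return percepts

# A red disk followed 20 ms later by a green disk: one fused yellow percept.
print(report_percepts([(0, "red"), (20, "green")]))   # ['yellow']
# The same disks 300 ms apart: outside the window, two separate percepts.
print(report_percepts([(0, "red"), (300, "green")]))  # ['red', 'green']
```

The point of the toy is only that the report is fixed retrospectively: no percept can be emitted until the window has closed, which is the postdictive character at issue.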

 

This can imply that unconscious brain processes always precede conscious awareness, leading to the conclusion that our conscious awareness is just a post-constructed account of unconscious processes generated by the brain, and that subjective consciousness, along with the experience of volition, has no real basis – a purely materialist account of subjective consciousness as merely an internal model of reality constructed by the brain.

 

Pockett (2014), in support of her own electromagnetic field theory of consciousness, notes structural features that may exclude certain brain regions from being conscious in their own right:

 

It is now well accepted that sensory consciousness is not generated during the first, feed-forward pass of neural activity from the thalamus through the primary sensory cortex. Recurrent activity from other cortical areas back to the primary or secondary sensory cortex is necessary. Because the feedforward activity goes through architectonic Lamina 4 of the primary sensory cortex (which is composed largely of stellate cells and thus does not generate synaptic dipoles) while recurrent activity operates through synapses on pyramidal cells (which do generate dipoles), the conscious em patterns resulting from recurrent activity in the ‘early’ sensory cortex have a neutral area in the middle of their radial pattern. The common feature of brain areas that can not generate conscious experience – which are now seen to include motor cortex as well as hippocampus, cerebellum and any sub-cortical area – is that they all lack an architectonic Lamina 4 [layer 4 of the cortex].

 

By contrast with theories of consciousness based on the brain alone, Symbiotic Existential Cosmology sees subjectivity as a cosmological complement to the physical universe. It thus seeks to explain subjective conscious experience as a cosmological, rather than a purely biological, phenomenon, in a way which gives validation and real meaning to our experience of subjective conscious volition over the physical universe, expressed in all our behavioural activities and in our sense of personal responsibility for our actions, and which leads towards a state of biospheric symbiosis, as climax living diversity across the generations of life as a whole, ensuring our continued survival.

 

Seth & Bayne (2022) provide a detailed review of theories of consciousness (ToCs) from the perspective of neuroscience. They investigate four key types of ToC, listed below, and also provide Table 1 below, listing a diverse range of ToCs.

 

(1) Higher-order theories: The claim uniting all these is that a mental state is conscious in virtue of being the target of a certain kind of meta-representational state. These are not representations that occur higher or deeper in a processing hierarchy but are those that have as their targets other (implicitly subjective) representations.

(2) Global workspace theories originate from architectures in which a “blackboard” is a centralized resource through which specialised processors share and receive information. The first was framed at a cognitive level and proposed that conscious mental states are those that are ‘globally available’ to a wide range of cognitive processes, including attention, evaluation, memory and verbal report. Their core claim is that it is wide accessibility of information to such systems that constitutes conscious experience. This has been developed into ‘global neuronal workspace theory’.

(3) Integrated information theory advances a mathematical approach to characterizing phenomenology. It starts by proposing axioms about the phenomenological character of conscious experiences (that is, properties that are taken to be self-evidently true and general to consciousness), and from these it derives claims about the properties that any physical substrate of consciousness must satisfy, proposing that physical systems that instantiate these properties necessarily also instantiate consciousness.

(4) Re-entry and predictive processing theories: The first associates conscious perception with top-down (recurrent, re-entrant) signalling. The second group are not primarily ToCs but more general accounts of brain (and body) function that can be used to formulate explanations and predictions regarding properties of consciousness.
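The whole-minus-parts intuition behind integrated information in (3) can be illustrated in miniature. The Python sketch below computes a toy measure for a deterministic two-node system: the information the whole system carries about its next state, minus what its parts carry about theirs. This is only in the spirit of early whole-versus-parts measures (and of later “practical” approximations), not the full cause-effect formalism of IIT; treating the four input states as equiprobable is a crude stand-in for IIT’s maximum-entropy perturbation, and the function names are mine.

```python
from collections import Counter
from math import log2

def mutual_info(pairs):
    """I(X;Y) in bits, from an iterable of equiprobable (x, y) samples."""
    pairs = list(pairs)
    n = len(pairs)
    pxy = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    return sum((c / n) * log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

def toy_phi(step):
    """Whole-minus-parts predictive information for a 2-bit system.

    step: function (a, b) -> (a', b') giving the deterministic update.
    The four input states are taken as equiprobable, a crude stand-in
    for a maximum-entropy perturbation of the system.
    """
    states = [(a, b) for a in (0, 1) for b in (0, 1)]
    nxt = [step(a, b) for a, b in states]
    whole = mutual_info(zip(states, nxt))
    part_a = mutual_info((s[0], t[0]) for s, t in zip(states, nxt))
    part_b = mutual_info((s[1], t[1]) for s, t in zip(states, nxt))
    return whole - (part_a + part_b)

# "Swap": each node's next state is fixed by the *other* node.  Neither
# part predicts itself, so all the predictive information is integrated.
print(toy_phi(lambda a, b: (b, a)))  # 2.0
# "Identity": two independent self-copying nodes, nothing is integrated.
print(toy_phi(lambda a, b: (a, b)))  # 0.0
```

The contrast between the two systems is the design point: both are equally deterministic and equally informative as wholes, but only the first has information that exists over and above its parts.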

 

They note a version of the measurement problem: to test a theory of consciousness (ToC), we need to be able to reliably detect both consciousness and its absence. At present, experimenters tend to rely on a subject’s introspective capacities to identify their states of consciousness. However, they claim this approach is problematic. First, they claim the reliability of introspection is questionable. This is a debatable claim, which tends to devalue subjective reports, possibly unfairly, in an emphasis on “objective observations” that leaves subjective consciousness with an orphan status, defeating the very purpose of ToCs in relation to the hard problem. They also note infants, individuals with brain damage and non-human animals, who might be conscious but are unable to produce introspective reports, claiming there is a pressing need to identify non-introspective ‘markers’ or ‘signatures’ of consciousness – such as the perturbational complexity index (PCI) and the optokinetic nystagmus response, or distinctive bifurcations in neural dynamics – as markers of either general or specific kinds of conscious contents. These, however, are purely functional measures, which do not capture what consciousness actually is as experienced phenomena.
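One of these markers can be made concrete: the perturbational complexity index is built around the Lempel-Ziv compressibility of the binarised cortical response to a perturbation. The sketch below implements the LZ76 phrase-counting at its core, applied to toy binary strings rather than real TMS-EEG data; the n/log2(n) normalisation shown is rough and only becomes meaningful for long sequences.

```python
from math import log2
import random

def lz_complexity(s):
    """Number of phrases in the Lempel-Ziv (1976) parsing of string s."""
    i, phrases, n = 0, 0, len(s)
    while i < n:
        l = 1
        # Extend the current phrase while it already occurs earlier
        # (overlap with the phrase itself is allowed, as in LZ76).
        while i + l <= n and s[i:i + l] in s[:i + l - 1]:
            l += 1
        phrases += 1
        i += l
    return phrases

def normalized_lz(s):
    """Rough n/log2(n) normalisation; meaningful only for long strings."""
    n = len(s)
    return lz_complexity(s) * log2(n) / n

flat = "0" * 64                      # a stereotyped, low-complexity "response"
random.seed(1)
varied = "".join(random.choice("01") for _ in range(64))  # a diverse response
# The varied string parses into many more phrases than the flat one, which
# is the sense in which PCI separates high- from low-complexity brain states.
print(lz_complexity(flat), lz_complexity(varied))
print(normalized_lz(flat) < normalized_lz(varied))  # True
```

Note that this captures only the compressibility step; the published index also involves source modelling and statistical thresholding of the evoked response, none of which is attempted here.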

 

Table 1: The full spread of ToCs, as listed in Seth & Bayne (2022).

Higher-order theory (HOT) – Consciousness depends on meta-representations of lower-order mental states
Self-organizing meta-representational theory – Consciousness is the brain’s (meta-representational) theory about itself
Attended intermediate representation theory – Consciousness depends on the attentional amplification of intermediate-level representations
Global workspace theories (GWTs) – Consciousness depends on ignition and broadcast within a neuronal global workspace where fronto-parietal cortical regions play a central, hub-like role
Integrated information theory (IIT) – Consciousness is identical to the cause–effect structure of a physical substrate that specifies a maximum of irreducible integrated information
Information closure theory – Consciousness depends on non-trivial information closure with respect to an environment at particular coarse-grained scales
Dynamic core theory – Consciousness depends on a functional cluster of neural activity combining high levels of dynamical integration and differentiation
Neural Darwinism – Consciousness depends on re-entrant interactions reflecting a history of value-dependent learning events shaped by selectionist principles
Local recurrency – Consciousness depends on local recurrent or re-entrant cortical processing and promotes learning
Predictive processing – Perception depends on predictive inference of the causes of sensory signals; provides a framework for systematically mapping neural mechanisms to aspects of consciousness
Neuro-representationalism – Consciousness depends on multilevel neurally encoded predictive representations
Active inference – Although views vary, in one version consciousness depends on temporally and counterfactually deep inference about self-generated actions
Beast machine theory – Consciousness is grounded in allostatic control-oriented predictive inference
Neural subjective frame – Consciousness depends on neural maps of the bodily state providing a first-person perspective
Self comes to mind theory – Consciousness depends on interactions between homeostatic routines and multilevel interoceptive maps, with affect and feeling at the core
Attention schema theory – Consciousness depends on a neurally encoded model of the control of attention
Multiple drafts model – Consciousness depends on multiple (potentially inconsistent) representations rather than a single, unified representation that is available to a central system
Sensorimotor theory – Consciousness depends on mastery of the laws governing sensorimotor contingencies
Unlimited associative learning – Consciousness depends on a form of learning which enables an organism to link motivational value with stimuli or actions that are novel, compound and non-reflex inducing
Dendritic integration theory – Consciousness depends on integration of top-down and bottom-up signalling at a cellular level
Electromagnetic field theory – Consciousness is identical to physically integrated, and causally active, information encoded in the brain’s global electromagnetic field
Orchestrated objective reduction – Consciousness depends on quantum computations within microtubules inside neurons

 

 

In addressing the ‘hard problem’, they distinguish the easy problems, concerned with the functions and behaviours associated with consciousness, from the hard problem, which concerns the experiential dimensions of consciousness, noting that what makes the hard problem hard is the ‘explanatory gap’ – the intuition that there seems to be no prospect of a fully reductive explanation of experience in physical or functional terms.

 

Integrated information theory and certain versions of higher-order theory address the hard problem directly, while other theories, such as global workspace theories, focus on the functional and behavioural properties normally associated with consciousness rather than on the hard problem. They note that some predictive processing theorists aim to provide a framework in which various questions about the phenomenal properties of consciousness can be addressed without attempting to account for the existence of phenomenology – an approach called the ‘real problem’.

 

They posit that a critical question is whether the hard problem is indeed a genuine challenge that ought to be addressed by a science of consciousness, or whether it ought to be dissolved rather than solved, as the strategy of solving the easy problems first implies. The ‘dissolvers’ argue that the appearance of a distinctively hard problem derives from the peculiar features of the ‘phenomenal concepts’ we employ in representing our own conscious states, citing illusionism, in which we do not have phenomenal states but merely represent ourselves as having such states. They speculate that the grip of the hard problem may loosen as our capacity to explain, predict and control both phenomenological and functional properties of consciousness expands, thus effectively siding with the dissolvers.

 

In conclusion, they note that at present ToCs are generally used as ‘narrative structures’ within the science of consciousness. Although these inform the interpretation of neural and behavioural data, they demur that it is still rare for a study to be designed with questions of theory validation in mind. While there is nothing wrong with employing theories in this manner, they claim future progress will depend on experiments that enable ToCs to be tested and disambiguated. This is the kind of ideal we can expect physicalist neuroscientists to veer into, but it runs the risk of ‘sanitising’ consciousness, just as behaviourism did in psychology, to its nemesis.

 

Two questions are pivotal. One is the physicalist quest to use the easy, functionalist notions of consciousness to explain away the hard problem of consciousness – typified by Levine’s explanatory gap, Nagel’s question of what it is like to be something conscious, and Chalmers’ notion of “how we have phenomenal first-person subjective experiences”. This is really not about the general questions of consciousness, such as “consciousness of” something, which can be viewed as a form of global attention describable functionally, nor about more specific notions like self-consciousness, i.e. awareness of a form of functional agency, both of which could apply equally to artificial intelligence.

 

This becomes clear when we examine the authors’ choice of key theories of consciousness, several of which, as they point out, are not targeted at the hard problem at all – knowing that Seth, for example, favours an ultimate functional explanation which will “dissolve” the hard problem, even if it is a form of identity theory or dual-aspect monism.

 

Really we need to distinguish consciousness from subjective consciousness – the ability to have subjective experiences, and thus subjectivity itself and its cosmological status – rather than the mere functionality of consciousness as a global attentive process. This is why Symbiotic Existential Cosmology deals directly with primal subjectivity as a cosmological complement to the physical universe, to capture the notion of subjectivity squarely and independently of consciousness. This leaves full consciousness an emergent property of the eukaryote endosymbiosis that resulted in the cellular mechanisms of edge-of-chaos excitable membranes and informational membrane signalling using neurotransmitters, both functionally emergent properties, but with non-classical implications in the quantum universe.

 

We can immediately see this is a critically important step when we see the above research being cited as a basis to determine whether future AI developments would be considered “conscious”, as Butlin et al. (2023) cite precisely the functional expressions of the same theories of consciousness as above to provide criteria by which a purely objective physical process could become “conscious”, in view of its functional properties in recurrent processing, global workspace, higher-order processes, attention schemas, predictive processing and functional agency, none of which address the hard problem, let alone the extended hard problem of subjective volition over the physical universe.

 

Butlin et al. note: This report argues for, and exemplifies, a rigorous and empirically grounded approach to AI consciousness: assessing existing AI systems in detail, in light of our best-supported neuroscientific theories of consciousness. We survey several prominent scientific theories of consciousness. From these theories we derive “indicator properties” of consciousness, elucidated in computational terms that allow us to assess AI systems for these properties. We use these indicator properties to assess several recent AI systems, and we discuss how future systems might implement them. Our analysis suggests that no current AI systems are conscious, but also suggests that there are no obvious technical barriers to building AI systems which satisfy these indicators.

 

Recurrent processing theory
RPT-1: Input modules using algorithmic recurrence
RPT-2: Input modules generating organised, integrated perceptual representations

Global workspace theory
GWT-1: Multiple specialised systems capable of operating in parallel (modules)
GWT-2: Limited capacity workspace, entailing a bottleneck in information flow and a selective attention mechanism
GWT-3: Global broadcast: availability of information in the workspace to all modules
GWT-4: State-dependent attention, giving rise to the capacity to use the workspace to query modules in succession to perform complex tasks

Computational higher-order theories
HOT-1: Generative, top-down or noisy perception modules
HOT-2: Metacognitive monitoring distinguishing reliable perceptual representations from noise
HOT-3: Agency guided by a general belief-formation and action selection system, and a strong disposition to update beliefs in accordance with the outputs of metacognitive monitoring
HOT-4: Sparse and smooth coding generating a “quality space”

Attention schema theory
AST-1: A predictive model representing and enabling control over the current state of attention

Predictive processing
PP-1: Input modules using predictive coding

Agency and embodiment
AE-1: Agency: Learning from feedback and selecting outputs so as to pursue goals, especially where this involves flexible responsiveness to competing goals
AE-2: Embodiment: Modeling output-input contingencies, including some systematic effects, and using this model in perception or control

Table 2: Indicator Properties (Butlin et al. 2023).

Polák & Marvan (2019), in a different kind of “dissolving” approach to the hard problem, advance a dual theory, in which scientists study pairs of phenomenal mental states of which one is and the other is not conscious, the presence or absence of consciousness being their sole distinguishing feature, claiming this facilitates unpacking the unitary nature of the hard problem, thus partly decomposing it.

 

They note that Chalmers (2018, 30) accepts unconscious sensory qualities, saying such a move is: “perhaps most promising for deflating the explanatory gap tied to qualities such as redness: if these qualities [...] can occur unconsciously, they pose less of a gap. As before, however, the core of the hard problem is posed not by the qualities themselves but by our experience of these qualities: roughly, the distinctive phenomenal way in which we represent the qualities or are conscious of them.”

 

They cite two examples of separation of the brain processes forming the neural correlates of conscious experiences. The first is hemispheric visual neglect, caused by localised brain damage such as a stroke, where information in the neglected half of the visual field appears to unconsciously influence a person’s choices:

 

Unilateral visual neglect, the inability to see objects in one half of the visual field, might serve as an illustration. In the most famous neglect example (Marshall and Halligan, 1988), a person cannot consciously discriminate between two depicted houses. The houses are identical except that one of them is on fire in that half of the visual field the person, due to neglect, cannot see. Although the person was constantly claiming that both houses look the same to her, she repeatedly said she would prefer to live in the house not consumed by the flames.

 

A second example cites Lamme’s (2006, 2015) theory of brain processes which may constitute separate phases in the generation of a conscious experience, which permit clean separation of the brain mechanisms for the creation of phenomenal content from the mechanism that “pushes this content into consciousness”:

 

The theory revolves around the notion of local recurrent neural activity within the cortex and decomposes the formation of conscious visual content into two phases. The first one is called fast feedforward sweep. It is a gradual activation of different parts of the visual system in the brain. The dual view interprets this process as the formation of the unconscious but phenomenal mental state. A later process, that may or may not occur, is called recurrent activity. It is a neural feedback processing during which higher visual centers send the neural signal back to the lower ones. The time delay between the initiation of the first and the second process might be seen as corresponding to the difference between processing of the phenomenal character (feedforward sweep) and making and maintaining this phenomenal character conscious (recurrent processing).
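Read this way, the dual view amounts to a little two-phase state machine, which can be sketched directly. The flags and function names below are illustrative assumptions, not Lamme’s terminology; the point is only that forming phenomenal content and making it conscious are modelled as separate steps, the second of which is optional (as under masking).

```python
from dataclasses import dataclass

@dataclass
class Representation:
    content: str
    phenomenal: bool = False   # phenomenal character formed?
    conscious: bool = False    # ...and made conscious?

def feedforward_sweep(stimulus):
    # Phase 1: the fast feedforward sweep.  On the dual view this already
    # constitutes an unconscious but phenomenal mental state.
    return Representation(content=stimulus, phenomenal=True)

def recurrent_feedback(rep, occurs=True):
    # Phase 2, which "may or may not occur": higher visual centres signal
    # back to lower ones, making the phenomenal content conscious.
    if occurs:
        rep.conscious = True
    return rep

# Feedback disrupted (e.g. by masking): phenomenal but never conscious.
masked = recurrent_feedback(feedforward_sweep("face"), occurs=False)
# Feedback completes: the same phenomenal content becomes conscious.
seen = recurrent_feedback(feedforward_sweep("face"))
print(masked.phenomenal, masked.conscious)  # True False
print(seen.phenomenal, seen.conscious)      # True True
```

The sketch also makes the critique below easy to state: the two functions only ever yield a reportable conscious state when composed, which is the sense in which the phases are functionally confluent rather than genuinely separable.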

 

They note that in several other theories already listed – including global neural workspace theory, thalamo-cortical circuits, and apical amplification within cortical pyramidal neurons – the phase of phenomenal content creation and the phase of this content becoming conscious are distinguishable. But all these theories describe purely physical brain processes, imbued with subjective aspects only by inference. So we need to look carefully at how the authors treat subjectivity itself. Essentially they are making a direct attack on the unitary nature of subjective conscious experience by attempting to separate consciousness from phenomenal experience, so that subjectivity is held hostage in the division:

 

What constantly fuels this worry, we believe, is taking the conscious subjective phenomenal experience to be something monolithic. The peculiar nature of subjective qualities and their being conscious comes as a package and it is difficult to conceive how science might begin explaining it.  … The conscious subjective experience is being felt as something unitary, we grant that. But that does not mean that if we look behind the subjective level and try to explain how such unitary experience arises, the explanation itself has to have unitary form. … Awareness in this sense is simply the process, describable in neuroscientific terms, of making the sensory qualities conscious for the subject. We could then keep using the term “consciousness” for the subjectively felt unitary experience, while holding that in reality this seemingly unitary thing is the result of an interaction between the neural processes constituting the phenomenal contents and the neural processes constituting awareness.

 

This is effectively a form of physicalist illusionism (Frankish 2017), because the claim is that the subjective experience is falsely represented as integrated when the underlying physical reality is subdivided by the dual interpretation. It illustrates how functionalist theories of consciousness can be misused in an attempt to form a bridgehead decomposing the unitarity of subjective consciousness into interacting divisible physical systems, simply because multiple physical processes are held to be functional, or temporally sequential, components of the associated integrated brain processing state. The trouble with this is that these functional processes can evoke an integrated conscious experience only when they are functionally confluent, so we can’t actually separate the “fast feedforward sweep” from the “recurrent activity” in generating a real conscious experience, and pathological cases like hemispheric visual neglect provide no evidence that healthy integrated conscious brain processes can be so decomposed into dual states.

 

By contrast with theories of consciousness based on the physical brain alone, in Symbiotic Existential Cosmology subjectivity is itself a primal cosmological complement to the physical universe. It thus explains subjective conscious experience as a cosmological, rather than a purely biological or neuroscientific, phenomenon, giving validation and real meaning to our experience of subjective conscious volition over the physical universe, expressed in all our behavioural activities and in our sense of personal responsibility for our actions, and leading towards a state of biospheric symbiosis, as climax living diversity across the generations of life as a whole, ensuring our continued survival.

  

Psychotic Fallacies of the Origin of Consciousness

 

Theories of consciousness that are poles apart from any notion of the subjectivity of conscious experience, the hard problem of consciousness, or the explanatory gap in the physical description arise from treating consciousness merely as a type of culturally derived cognitive process. Such theories fall into the philosophers’ trap of confining the discourse to rational processes and arguments, which fail to capture the raw depths of subjective experience characteristic of mystical, shamanic and animistic cultures.

 

In “The Origin of Consciousness in the Breakdown of the Bicameral Mind”, Julian Jaynes (1976, 1986) claimed that human ancestors, as late as the ancient Greeks, did not consider emotions and desires as stemming from their own minds but as the consequences of actions of gods external to themselves. The theory posits that the human mind once operated in a bicameral state in which cognitive functions were divided between one part of the brain which appears to be “speaking” and a second part which listens and obeys, and that the breakdown of this division gave rise to “consciousness” in humans. He used the term “bicameral” metaphorically to describe a mental state in which the right hemisphere’s experiences were transmitted to the left hemisphere through auditory hallucinations. In the assumed bicameral phase, individuals lacked self-awareness and introspection. Instead of conscious thought, they heard external voices or “gods” guiding their actions and decisions. Jaynes claimed this form of consciousness, devoid of meta-consciousness and autobiographical memory, persisted until about 3,000 years ago, when societal changes led to the emergence of our current conscious mode of thought. Auditory hallucinations experienced by those with schizophrenia, including command hallucinations, paralleled the external guidance experienced by bicameral individuals, implying mental illness was a bicameral remnant.

 

To justify his claim, he highlighted instances in ancient texts like the Iliad and the Old Testament where, he claimed, there is no evidence of introspection or self-awareness, and noted that gods in ancient societies were numerous and anthropomorphic, reflecting the personal nature of the external voices guiding individuals. However the Epic of Gilgamesh, copies of which are many centuries older than even the oldest passages of the Old Testament, describes introspection and other mental processes.

 

According to Jaynes, language is a necessary but not sufficient condition for consciousness: language existed thousands of years earlier, but consciousness could not have emerged without it. Williams (2010) defends the notion of consciousness as a social–linguistic construct learned in childhood, structured in terms of lexical metaphors and narrative practice. Ned Block's (1981) review criticism is direct: it is "ridiculous" to suppose that consciousness is a cultural construction.

 

Jaynes argued that the breakdown of the bicameral mind was marked by societal collapses and environmental challenges. As people lost contact with the external voices, practices like divination and oracles emerged as attempts to reconnect with the guidance they once received. However, this shows an ethnocentric rationalist lack of awareness and understanding of how earlier animistic cultures perceived the natural world, in which both humans and natural processes like storms, rivers and trees were imbued with spirits. These spirits were interacted with, but were by no means voices which humans had to blindly obey; rather, humans were in dynamic interaction with them as sentient beings. There are diverse existing cultures, from the founding San to the highly evolved Maori, who practice animistic beliefs, actually and metaphorically, who were not influenced by political upheavals at the periphery of the founding urban cultures, and who can appreciate their world views in both rational and spiritual terms, while at all times being as fully integrated in their conscious experiences as the modern dominant cultures. We know that doctrinal religions have evolved from mystical and animistic roots as means to hold together larger urban societies, but these are no more rational beliefs. Nor are polytheists more bicameral in their thinking than monotheists, merely less starkly absolute. Nor is it true that intelligent primates display evidence of a bicameral mind; rather they show a fully adapted social intelligence, attuned by social evolution to facilitate their strategic survival as consciously aware intentional agents.

 

McGilchrist (2009) reviews scientific research into the complementary roles of the brain's hemispheres, along with cultural evidence, in his book "The Master and His Emissary", proposing that, since the time of Plato, the left hemisphere of the brain (the "emissary" in the title) has increasingly taken over from the right hemisphere (the "master"), to our detriment. McGilchrist felt that Jaynes's hypothesis was "the precise inverse of what happened": rather than a shift away from bicameral mentality, there evolved a separation of the hemispheres into bicameral mentality. This has far more reality value, given that the dominance of rational discourse over subjective conscious experience has risen to the degree that many people cannot rationally distinguish themselves from computational machines.

 

Field and Wave Theories of Consciousness v Connectome Networks and Action Potentials

 

Brain dynamics are a function of a variety of interacting processes. Major pyramidal neuron axon circuits functionally connect distant regions of the cortex to enable integrated processing, forming the axonal connectome of the networked brain, driven by individual pulse-coded action potentials. Complementing this are waves of continuous potential in the cortical brain tissue, indirectly sampled by electrodes on the scalp in the electroencephalogram (EEG), and the magnetic effects of currents in MEG. While the network view of brain activity is based on individual action potentials and regards the EEG brain waves as just tissue excitation averages, there is increasing evidence of phase coupling between the two, so that the discrete action potentials and the continuous tissue potentials are in mutual feedback. The complex interaction of these can be seen in Qasim et al. (2021), Cariani & Baker (2022) and Pinotsis et al. (2023), as exemplified in the independent dynamics of the various EEG bands and spiral and metastable wave states (Roberts et al. 2019, Xu et al. 2023). This leads to two views of brain dynamics: the networked view based on the connectome, and field theories centered on continuous tissue gradients and the folded tissue anatomy.
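Such spike–field phase coupling is commonly quantified by measures like the phase-locking value (PLV): the resultant length of the field phases at which spikes occur. A minimal numerical sketch in Python, using an idealised sinusoidal "alpha" field rather than real recordings (the signals and parameters are illustrative only):

```python
import numpy as np

# Phase-locking value (PLV): how tightly discrete spikes lock to the phase
# of a continuous field oscillation. For a pure sinusoid the instantaneous
# phase is known analytically; real data would use a Hilbert transform.

fs = 1000.0                        # sampling rate, Hz (illustrative)
f = 8.0                            # "alpha" frequency, Hz
t = np.arange(0, 2.0, 1.0 / fs)
phase = 2 * np.pi * f * t          # instantaneous phase of the field

rng = np.random.default_rng(0)

def plv(spike_idx):
    """Resultant length of spike phases: 1 = perfect locking, ~0 = none."""
    return np.abs(np.mean(np.exp(1j * phase[spike_idx])))

# Spikes locked to the same phase of every alpha cycle vs. random spikes
cycle = int(fs / f)
locked = np.arange(0, len(t), cycle)           # one spike per cycle
random = rng.integers(0, len(t), size=len(locked))

print(f"locked PLV = {plv(locked):.2f}")       # ~1: strong phase coupling
print(f"random PLV = {plv(random):.2f}")       # much smaller
```

The same computation, applied to band-filtered EEG and recorded spike trains, is one standard way the mutual feedback between pulse-coded and continuous dynamics is assessed.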

 

Pang et al. (2023) have compared the influence of these two physical features in the outer folds of the cerebral cortex, where most higher-level brain activity occurs — and the connectome, the web of nerves that links distinct regions of the cerebral cortex. Excited neurons in the cerebral cortex can communicate their state of excitation to their immediate neighbours on the surface. But each neuron also has a long axon that connects it to a far away region within or beyond the cortex, allowing neurons to send excitatory messages to distant brain cells. In the past two decades, neuroscientists have painstakingly mapped this web of connections — the connectome — in a raft of organisms, including humans. The brain’s neuronal excitation can also come in waves, which can spread across the brain and travel back in periodic oscillations.

 

They found that the shape of the outer surface was a better predictor of brainwave data than was the connectome, contrary to the paradigm that the connectome has the dominant role in driving brain activity.  Predictions from neural field theory, an established framework for modelling large-scale brain activity, suggest that the geometry of the brain may represent a more fundamental constraint on dynamics than complex interregional connectivity.

 

Fig 79b: Comparison of the influences of connectome-network-based processing, volumetric wave modes in the cortex, and exponential distance rule (EDR) network connectivity, finding geometric eigenmodes to be predominant.

 

They calculated the modes of brainwave propagation for the cortical surface and for the connectome. As a model of the connectome, they used information gathered from diffusion magnetic resonance imaging (MRI), which images brain anatomy. They then looked at data from more than 10,000 records of functional MRI, which images brain activity based on blood flow. The analysis showed that brainwave modes in the resting brain, as well as during a variety of activities — such as the processing of visual stimuli — were better explained by the surface geometry model than by the connectome one.

 

There are a number of field theories of conscious brain dynamics, each with its own favoured process.

 

 Benjamin Libet (1994), the controversial discoverer of the readiness potential, notes the extreme contrast between the integral nature of conscious experience and the complex localised nature of network-based neurodynamics, leaning towards a field theory as the only plausible explanation:

 

One of the most mysterious and seemingly intractable problems in the mind-brain relationship is that of the unitary and integrated nature of conscious experience. We have a brain with an estimated 100 billion neurons, each of which may have thousands of interconnections with other neurons. It is increasingly evident that many functions of cerebral cortex are localized. This is not merely true of the primary sensory areas for each sensory modality, of the motor areas which command movement, and of the speech and language areas, all of which have been known for some time. Many other functions now find other localized representations, including visual interpretations of colour, shape and velocity of images, recognition of human faces, preparation for motor actions, etc. Localized function appears to extend even to the microscopic level within any given area. The cortex appears to be organized into functional and anatomical vertical columns of cells, with discrete interconnections within the column and with other columns near and far, as well as with selective subcortical structures.

 

In spite of the enormously complex array of localized functions and representations, the conscious experiences related to or elicited by these neuronal features have an integrated and unified nature. Whatever does reach awareness is not experienced as an infinitely detailed array of widely individual events. It may be argued that this amazing discrepancy between particularized neuronal representations and unitary integrated conscious experiences should simply be accepted as part of a general lack of isomorphism between mental and neural events. But that would not exclude the possibility that some unifying process or phenomenon may mediate the profound transformation in question.

 

The general problem had been recognized by many others, going back at least to Sherrington (1940) and probably earlier. Eccles (in Popper and Eccles, 1977, p. 362) specifically proposed that the experienced unity comes not from a neurophysiological synthesis but from the proposed integrating character of the self-conscious mind. This was proposed in conjunction with a dualist-interactionist view in which a separate non-material mind could detect and integrate the neuronal activities. Some more monistically inclined neuroscientists have also been arriving at related views, i.e. that integration seems to be best accountable for in the mental sphere, even if one views subjective experience as an inner quality of the brain "substrate" (as in "identity theory") or as an emergent property of it. There has been a growing consensus that no single cell or group of cells is likely to be the site of a conscious experience, but rather that conscious experience is an attribute of a more global or distributed function of the brain.

 

A second apparently intractable problem in the mind-brain relationship involves the reverse direction. There is no doubt that cerebral events or processes can influence, control and presumably "produce" mental events, including conscious ones. The reverse of this, that mental processes can influence or control neuronal ones, has been generally unacceptable to many scientists on (often unexpressed) philosophical grounds. Yet, our own feelings of conscious control of at least some of our behavioural actions and mental operations would seem to provide prima facie evidence for such a reverse interaction, unless one assumes that these feelings are illusory. Eccles (1990; Popper and Eccles, 1977) proposed a dualistic solution, in which separable mental units (called psychons) can affect the probability of presynaptic release of transmitters. Sperry (1952, 1985, 1980) proposed a monistic solution, in which mental activity is an emergent property of cerebral function; although the mental is restrained within a macro-deterministic framework, it can "supervene", though not "intervene", in neuronal activity. However, both views remain philosophical theories, with explanatory power but without experimentally testable formats. As one possible experimentally testable solution to both features of the mind-brain relationship, I would propose that we may view conscious subjective experience as if it were a field, produced by appropriate though multifarious neuronal activities of the brain.

 

Miller, Brincat & Roy (2024), noting that cognition relies on the flexible organisation of neural activity, explore how many aspects of this organisation can be described as emergent properties, not reducible to their constituent parts. They discuss how electrical fields in the brain can serve as a medium for propagating activity nearly instantaneously, and how population-level patterns of neural activity can organise computations through subspace coding. They note several aspects of brain waves, as opposed to network connections, including those in the alpha, beta, gamma, theta and delta bands, which class them as critical emergent properties.

 

Fig 79c: Subspace coding. (a) Example of patterns of neural population activity for two different visual objects. (b) Population activity patterns can be thought of as points in a high-dimensional ‘state-space’, with one dimension for each neuron (three are shown, corresponding to the three labeled neurons in (a), but actual experiments sample hundreds from an underlying population of millions). (c) Activity for different conditions is typically restricted to a low-dimensional subspace (represented by a plane). (d) Information that must be kept separate is often encoded in orthogonal subspaces, so it can be read out independently. (e) When information must be mapped onto a common response, it is often encoded into aligned subspaces, so it can be driven by a single read-out.

 

Traditional synaptic connectivity is limited by the speed of axonal conduction and synaptic transmission. In contrast, because the brain's electric fields have a direct effect on intracellular potential, they spread at the speed of an electric field in neural tissue, i.e. nearly instantaneously. This seems ideal for rapidly coordinating local neural activity.

 

Spatial computing proposes that mesoscale patterns of alpha/beta activity carry top-down control signals, which reflect information about the current context and goals. These alpha/beta patterns are inhibitory and spatially constrain the bottom-up gamma power associated with content-related spiking at a microscale level. In essence, alpha/beta patterns act as stencils, allowing content (microscale gamma/spiking) to be expressed in areas where alpha/beta is absent. These stencils represent different cognitive states or task operations. This is in line with observations that power and coupling in gamma versus alpha/beta are respectively associated with bottom-up processing versus top-down control.

A key benefit of such subspace coding is the organization of neural processing. When multiple pieces of information must be held simultaneously in memory or compared, they are often stored in approximately orthogonal subspaces. That is, the spiking patterns reflecting one item are independent of those for the other item, minimizing interference between them. Similarly, as incoming sensory information is encoded into working memory, it is rotated from a ‘sensory’ subspace to an orthogonal ‘memory’ subspace, protecting it from interference from further sensory inputs. Thus, subspace coding allows distinct information to be stored and operated on independently.
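The interference-free read-out from orthogonal subspaces can be sketched in a few lines of linear algebra. The population size, subspace dimensions and item encodings below are illustrative inventions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
n_neurons = 50

# Two orthonormal 2-D subspaces of the population state-space
# (QR decomposition of a random matrix yields orthonormal columns)
Q, _ = np.linalg.qr(rng.standard_normal((n_neurons, 4)))
sensory, memory = Q[:, :2], Q[:, 2:]      # columns span each subspace

item_a = np.array([1.0, -0.5])            # 2-D content of each item
item_b = np.array([0.3, 0.8])

# Population activity = superposition of both encodings
activity = sensory @ item_a + memory @ item_b

# Independent read-outs: project back with each subspace's own basis
read_a = sensory.T @ activity
read_b = memory.T @ activity
print(read_a)   # recovers item_a with no interference from item_b
print(read_b)   # recovers item_b
```

Because the two bases are orthogonal, each read-out is blind to the other item — the linear-algebra core of why "orthogonal subspaces" minimise interference between simultaneously held contents.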

 

This separation of content (gamma/spiking) and control (alpha/beta) into different spatial scales endows generalization and flexibility. It enables the brain to perform top-down operations without ‘knowing’ the specifics of the underlying ensembles carrying content. This allows the brain to instantly generalize control to new content. Contrast this with standard neural network models, where all information — content and control — is encoded at the same level, synaptic connectivity. As a result, a standard network model learning a task with one set of objects needs retraining to perform with a new set of objects. Your brain does not need retraining to instantly generalize.

 

 

Fig 79d: Example neural power spectrum with a strong alpha peak in the canonical frequency range (8–12 Hz, blue-shaded region) and secondary beta peak within an overall 1/f pink noise edge of chaos power spectrum profile.

 

Donoghue et al. (2020), in developing an analytical technique, seek to separate the assumed periodic nature of the alpha, beta, gamma, theta and delta bands from a common signal of broad-spectrum chaotic activity with a 1/f-like distribution, with exponentially decreasing power across increasing frequencies, equivalent to the negative slope of the power spectrum when measured in log–log space. Treating the aperiodic component as ‘noise’ ignores its physiological correlates, which in turn relate to cognitive and perceptual states, while trait-like differences in aperiodic activity have been shown to be potential biological markers in development and ageing, as well as in disease, such as ADHD or schizophrenia. Their technique seeks both to enable investigation of this aperiodic component and to clarify the periodic basis of interacting signals and their wave phase.
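The core of the separation can be sketched numerically: the aperiodic component 1/f^χ is a straight line of slope −χ in log–log space, with periodic peaks (such as the alpha bump of fig 79d) riding on top. The toy below fits the aperiodic exponent by linear regression after masking out the peak; the published method additionally fits the peaks themselves as Gaussians iteratively, so this is a deliberate simplification:

```python
import numpy as np

freqs = np.linspace(1, 50, 200)                    # Hz (illustrative grid)
chi = 1.0                                          # pink-noise exponent
aperiodic = 1.0 / freqs ** chi                     # 1/f "background"
alpha_peak = 0.4 * np.exp(-((freqs - 10) ** 2) / 2)  # 10 Hz alpha bump
power = aperiodic + alpha_peak                     # synthetic spectrum

# Fit log10(power) = b - chi * log10(f), excluding the alpha band
mask = (freqs < 7) | (freqs > 14)
slope, intercept = np.polyfit(np.log10(freqs[mask]),
                              np.log10(power[mask]), 1)
print(f"estimated exponent chi = {-slope:.2f}")    # close to the true 1.0

# Periodic residue: what remains after removing the aperiodic fit
residue = power - 10 ** intercept * freqs ** slope
print(f"residual peak near {freqs[np.argmax(residue)]:.1f} Hz")  # ~10 Hz
```

The fitted slope is the "trait-like" aperiodic exponent discussed in the text, and the residue isolates the genuinely periodic band activity.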

 

Joachim Keppler (2018, 2021, 2024) presents an analysis drawing conscious experiences into the orbit of stochastic electrodynamics (SED), a form of quantum field theory, utilising the conception that the universe is imbued with an all-pervasive electromagnetic background field, the zero-point field (ZPF), which, in its original form, is a homogeneous, isotropic, scale-invariant and maximally disordered ocean of energy with completely uncorrelated field modes and a unique power spectral density. This is basically a stochastic treatment of the uncertainty associated with the quantum vacuum in depictions such as the Feynman approach to quantum electrodynamics (fig 71(e)). The ZPF is thus the multiple manifestations of uncertainty in the quantum vacuum, involving virtual photons, electrons and positrons, as well as quarks and gluons, implicit in the muon's anomalous magnetic moment (Borsanyi et al. 2021).

 

In the approach of SED (de la Peña et al. 2020), in which the stochastic aspect corresponds to the effects of the collapse process into the classical limit [28], consciousness is represented by the zero point field (ZPF) (Keppler 2018). This provides a basis to discuss the brain dynamics accompanying conscious states in terms of two hypotheses concerning the zero-point field (ZPF):

 

“The aforementioned characteristics and unique properties of the ZPF make one realize that this field has the potential to provide the universal basis for consciousness from which conscious systems acquire their phenomenal qualities. On this basis, I posit that all conceivable shades of phenomenal awareness are woven into the fabric of the background field. Accordingly, due to its disordered ground state, the ZPF can be looked upon as a formless sea of consciousness that carries an enormous range of potentially available phenomenal nuances. Proceeding from this postulate, the mechanism underlying quantum systems has all the makings of a truly fundamental mechanism behind conscious systems, leading to the assumption that conscious systems extract their phenomenal qualities from the phenomenal color palette immanent in the ZPF. ”

 

Fig 80: In Keppler's model, the phase transitions underlying the formation of coherent activity patterns (attractors) are triggered by modulating the concentrations of neurotransmitters. When the concentration of neurotransmitter molecules lies above a critical threshold and selected ZPF modes are in resonance with the characteristic transition frequencies between molecular energy levels, receptor activations ensue that drive the emergence of neuronal avalanches. The set of selected ZPF modes that is involved in the formation and stabilisation of an attractor determines the phenomenal properties of the conscious state.

 

His description demonstrates the kind of boundary conditions in brain dynamics likely to correspond to subjective states, and thus provides a good insight into the stochastic uncertainties of the brain dynamics of conscious states that would correspond to the subjective aspect. It even claims to envelop all possible modes of qualitative subjectivity in the features of the ZPF underlying uncertainty, but it remains to be established that the ZPF can accommodate all the qualitative variations spanning the senses of sight, sound and smell, which may rather correspond to the external quantum nature of these senses.

 

The ZPF does not of itself solve the hard problem as such, because, at face value, it is a purely physical manifestation of quantum uncertainty with no subjective manifestation. However Keppler claims to make this link clear as well: "A detailed comparison between the findings of SED and the insights of Eastern philosophy reveals not only a striking congruence as far as the basic principles behind matter are concerned. It also gives us the important hint that the ZPF is a promising candidate for the carrier of consciousness, suggesting that consciousness is a fundamental property of the universe, that the ZPF is the substrate of consciousness and that our individual consciousness is the result of a dynamic interaction process that causes the realization of ZPF information states. … In that it is ubiquitous and equipped with unique properties, the ZPF has the potential to define a universally standardized substratum for our conscious minds, giving rise to the conjecture that the brain is a complex instrument that filters the varied shades of sensations and emotions selectively out of the all-pervasive field of consciousness, the ZPF" (Keppler, 2013).

 

In personal communication regarding these concerns, Joachim responds as follows:

 

I understand your reservations about conventional field theories of consciousness. The main problem with these approaches (e.g., McFadden’s approach) is that they cannot draw a dividing line between conscious and unconscious field configurations. This leads to the situation that the formation of certain field configurations in the brain is claimed to be associated with consciousness, while the formation of the same (or similar) field configurations in an electronic device would usually not be brought in relation with consciousness. This is what you call quite rightly a common category error.  Now, the crucial point is that the ZPF, being the primordial basis of the electromagnetic interaction, offers a way to avoid this category error. According to the approach I propose, the ZPF (with all its field modes) is the substrate of consciousness, everywhere and unrestrictedly. The main difference between conscious and unconscious systems (processes) is their ability to enter into a resonant coupling with the ZPF, resulting in an amplification of selected ZPF modes. Only a special type of system has this ability (the conditions are described in my article). If a system meets the conditions, one must assume that it also has the ability to generate conscious states.

 

Keppler and Shani (2020) link this process to a form of cosmopsychism confluent with Symbiotic Existential Cosmology:

 

The strength of the novel cosmopsychist paradigm presented here lies in the bridging of the explanatory gap the conventional materialist doctrine struggles with. This is achieved by proposing a comprehensible causal mechanism for the formation of phenomenal states that is deeply rooted in the foundations of the universe. More specifically, the sort of cosmopsychism we advocate brings a new perspective into play, according to which the structural, functional, and organizational characteristics of the NCC are indicative of the brain's interaction with and modulation of a UFC. In this respect, the key insights from SED suggest that this field can be equated with the ZPF and that the modulation mechanism is identical with the fundamental mechanism underlying quantum systems, resulting in our conclusion that a coherently oscillating neural cell assembly acquires its phenomenal properties by tapping into the universal pool of phenomenal nuances predetermined by the ZPF.

 


Fig 80b (Left): It is postulated that conscious systems must be equipped with a fundamental mechanism by means of which they are able to influence the basic structure of the ubiquitous field of consciousness (UFC). This requires the interaction of a physical system with the UFC in such a way that a transiently stable dynamic equilibrium, a so-called attractor state characterised by long-range coherence, is established in which the involved field modes enter into a phase-locked coupling. (Right) Cortical column coherence.  

Keppler (2023) also proposes a model where long-range coherence is developed in the functioning of cortical microcolumns, based on the interaction of a pool of glutamate molecules, with the vacuum fluctuations of the electromagnetic field, involving a phase transition from an ensemble of initially independent molecules toward a coherent state, resulting in the formation of a coherence domain that extends across the full width of a microcolumn.

 

John Archibald Wheeler put it along these lines: "The physics of the vacuum is the real physics – the rest are trivial aspects" (Menas Kafatos).

 

However, he becomes entrapped in a dual-aspect monism view similar to mind–brain identity theories, where subjective and objective aspects are just dual reflections of one another. In personal communication in 2024, he says:

 

"The crucial point is that the collective behavior of the brain constituents, orchestrated by the ZPF, leads to a new level of ZPF-DAS that goes beyond the level of the individual DAS of the brain constituents." In other words, the formation and integration of complex conscious states takes place in the ZPF, not in the brain! ... The brain-ZPF interaction is always localized in the brain. More precisely, resonant brain-ZPF (neurotransmitter-ZPF) coupling can only occur under special conditions in the cortical microcolumns of a person's brain [all this is explained at length in the paper]. Accordingly, the modification of the ZPF (i.e., the amplification of specific ZPF modes) resulting from the resonant interaction takes place locally in the brain of this particular person. This is the reason why the excited phenomenal qualities (associated with the amplified ZPF modes) are intimately connected with this person, leading to the generation of a private, subjective conscious experience. This privacy is due to the local character of the interaction mechanism that encapsulates the generated conscious state from the environment. So, we are dealing with a global, ubiquitous, undifferentiated (dual-aspect) ZPF that harbors local, individual, differentiated (dual-aspect) subjects, which can be thought of as islands of concrete conscious experiences in a vast ocean of undifferentiated consciousness. And, yes, in this way the ZPF could potentially act as memory reservoir.

 

If we accept the ZPF as the quantum vacuum, it "interacts" with absolutely everything, but not in the usual sense we mean interaction and certainly not through a resonance, which is a property of an active positive-energy electric field. For example, in quantum electrodynamics (QED) the magnetic moment of an electron is caused by it emitting and absorbing the same photon in an infinite number of ways. This is precisely the electron "interacting" with the quantum vacuum. The whole brain thus consists of molecules interacting with the quantum vacuum. There is nothing special about glutamate. You can set up a resonant system but you can't confine the quantum vacuum by saying it only interacts in cortical columns in a specific way. Claiming it is dual aspect adds another confound to the difficulty and doesn't solve or explain anything, and it is epiphenomenalistic because it doesn't explain conscious intentional behaviour. This places a devastating constraint: that subjectivity and physicality have to be stitched together as duals from the bottom up. This simply isn't consistent with nature or experience as it stands. There is nothing about what the ZPF is actually like that suggests it involves a process capable of forming complete subjective experiences. It's simply uncertainty manifest in virtual particle fluctuations.

 

I see this approach as posing duality so that the physical quantum vacuum can be thought to be subjective as well. In a way SEC treats quantum subjectivity as a manifestation of the quantum vacuum, but it's a more complex one. Uncertainty is all about type 1 processes. Otherwise we just have a Hamiltonian process 2. So the quantum vacuum is only one foundation manifestation of uncertainty. By placing reliance on the field, you seem to be forcing yourself into dual aspect because you think the field itself is displaying consciousness, but there is no process in the quantum vacuum to support it. Also it doesn't establish anything new over physicality because the duality is bijective and has no additional information due to the subjective aspect being powerless to affect the physical aspect.

 

Here is what I think the QVF (quantum vacuum field) looks like. Symmetry-breaking means the asymmetric forces of nature we experience are at a lower polarised vacuum energy so the QVF emits those virtual particles we see e.g. in QED, but briefly even QED has to be emitting and absorbing virtual quarks etc. from the other forces but within the extremely small discrepancies we see in Feynman's QED calculations. In the inflationary phase at the higher vacuum energy the universe was in, things would look different and fundamental force unification would take place, so the strong forces would look relatively weaker and more commonplace, ultimately leading to the pure quantum fluctuations that set off the inflationary phase. But it's all the same quantum vacuum governed by uncertainty. So we are looking right back into the cosmic origin, but looking through a telescope backwards, so the primal phenomena have vanishingly small incidence over Planck intervals. This is not synonymous with conscious experience!

 

Karl Pribram (2004) has noted both the similarity of wave coherence interactions as an analogy or manifestation of quantum measurement and the ‘holographic’ nature of wave potential fluctuations, in the dendritic web:

 

The holonomic brain theory of quantum consciousness was developed by neuroscientist Karl Pribram initially in collaboration with physicist David Bohm. Pribram suggests these processes involve electric oscillations in the brain's fine-fibered dendritic webs, which are different from the more commonly known action potentials involving axons and synapses. These wave oscillations create interference patterns in which memory is encoded naturally, and the wave function may be analyzed by a Fourier transform. Gabor, Pribram and others noted the similarities between these and the storage of information in a hologram, which can also be analyzed with a Fourier transform.
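The hologram analogy can be illustrated with a toy Fourier encoding: a pattern stored as its Fourier transform mixed with a reference wave is recovered exactly with the same reference, and damage to the stored samples degrades the whole pattern gracefully rather than deleting one localised chunk. This is a minimal sketch of the mathematical analogy only, not a model of Pribram's dendritic dynamics:

```python
import numpy as np

rng = np.random.default_rng(2)
pattern = rng.standard_normal(256)                 # the "memory" to store

# Reference wave: unit-amplitude random phases in the frequency domain
ref_phase = np.exp(1j * rng.uniform(0, 2 * np.pi, 256))

stored = np.fft.fft(pattern) * ref_phase           # encode: FT + reference
recovered = np.fft.ifft(stored / ref_phase).real   # decode with same reference
print(np.allclose(recovered, pattern))             # True: exact recovery

# Distributed storage: zero out 25% of the stored samples and decode again;
# every frequency-domain sample mixes information from the whole pattern,
# so the reconstruction degrades globally and gracefully.
damaged = stored.copy()
damaged[rng.choice(256, 64, replace=False)] = 0
partial = np.fft.ifft(damaged / ref_phase).real
corr = np.corrcoef(pattern, partial)[0, 1]
print(f"correlation after damage: {corr:.2f}")     # still high
```

The graceful degradation under damage is the property Gabor and Pribram highlighted: in a Fourier (holographic) code, no single storage location holds any single piece of the pattern.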

 

Coherent wave excitation of pyramidal action potentials has been cited as the basis of correlated potentiation of the glymphatic flow through the brain parenchyma during NREM and particularly REM sleep (Jiang-Xie et al. 2024).

 

The dissipative quantum model of brain dynamics (Freeman & Vitiello 2006, 2007, 2016, Capolupo, Freeman & Vitiello 2013, Vitiello 2015, Sabbadini & Vitiello 2019) provides another field theoretic description. I include a shortened extract from Freeman & Vitiello (2015), which highlights what is to me the most outstanding field theoretic description of the neural correlate of consciousness, which also has the support of Freeman's dynamical attractor dynamics as illustrated in fig 78, and likewise has time-dual properties similar to the transactional interpretation discussed above, invoking complementary time-directed roles of emergence and imagination:

 

Fig 81: Molecular biology is a theme and variations on the polar and non-polar properties of organic molecules residing in an aqueous environment. Nucleotide double helices, protein folding and micelle structures, as well as membranes, are all energetically maintained by their surrounding aqueous structures. Water has one of the highest specific heats of any substance, because of the large number of internal dynamic quantum states. Myoglobin (Mb), the oxygen-transporting protein in muscle, containing a heme active site, illustrates this (Ansari et al. 1984), both in its functionally important motions (fim) and in its equilibrium fluctuations, invoking fractal energetics between the high and low energy states of Mb and MbCO. This activity is in turn stabilised both by non-polar side chains maintaining the aqueous structure and by polar side chains interacting with the aqueous environment to form water hydration structures. Top left: the hydration shell of myoglobin (blue surface) with 1911 water molecules (CPK model), the approximate number needed for optimal function (Vajda & Perczel 2014). Lower: molecules taking part in biochemical processes, from small molecules to proteins, are quantum mechanically critical: their electronic Hamiltonians are tuned exactly to the critical point of the metal-insulator transition separating the Anderson localized insulator phase from the conducting disordered metal phase. Left: the HOMO/LUMO orbitals for myoglobin calculated with the Extended Hückel method. Right: generalized fractal dimensions Dq of the wave functions (Vattay et al. 2015).

 

We began by using classical physics to model the dendritic integration of cortical dynamics with differential equations, ranging in complexity from single positive loops in memory through to simulated intentional behavior (Kozma and Freeman 2009). We identified the desired candidate form in a discrete electrochemical wave packet embedded in the electroencephalogram (EEG), often with the form of a vortex like a hurricane, which carried a spatial pattern of amplitude modulation (AM) that qualified as a candidate for thought content.

 

Measurement of scalp EEG in humans showed that the size and speed of formation of wave packets were too great to be attributed to the classical neurophysiology of neural networks, so we explored quantum approaches. In order to use dissipative quantum field theory it is necessary to include the impact of brain and body on the environment. Physicists do this conceptually and formally by doubling the variables (Vitiello 1995, 2001, Freeman and Vitiello 2006) that describe dendritic integration in the action-perception cycle.  By doing so they create a Double, and then integrate the equations in reverse time, so that every source and sink for the brain-body is matched by a sink or source for the Double, together creating a closed system.
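The "doubling of the variables" has a well-known classical analogue, the Bateman dual oscillator: a damped oscillator (losing energy, like the dissipative brain-body) is paired with its time-reversed, amplifying mirror (the "Double"), and only the pair forms a closed system, with a conserved quantity H = x'·y' + W²·x·y. The sketch below, with purely illustrative parameters, checks this numerically: the damped mode decays, its Double grows, and H stays constant.

```python
import numpy as np

# Bateman dual oscillator: classical analogue of Vitiello's doubling.
# Parameters are illustrative only.
gamma, W = 0.3, 2.0   # damping rate and oscillator frequency

def deriv(s):
    x, vx, y, vy = s
    return np.array([vx, -gamma * vx - W**2 * x,   # damped oscillator
                     vy,  gamma * vy - W**2 * y])  # time-mirrored Double

def rk4_step(s, dt):
    # one classical Runge-Kutta 4 integration step
    k1 = deriv(s); k2 = deriv(s + dt / 2 * k1)
    k3 = deriv(s + dt / 2 * k2); k4 = deriv(s + dt * k3)
    return s + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def invariant(s):
    # conserved quantity of the closed pair: H = x'*y' + W^2 * x*y
    x, vx, y, vy = s
    return vx * vy + W**2 * x * y

s = np.array([1.0, 0.0, 1.0, 0.0])   # x(0)=y(0)=1, at rest
H0 = invariant(s)
for _ in range(4000):                # integrate to t = 20
    s = rk4_step(s, 0.005)

# envelope amplitudes of the damped mode and the Double, and H drift
print(np.hypot(s[0], s[1] / W), np.hypot(s[2], s[3] / W),
      abs(invariant(s) - H0))
```

One can verify by differentiating that dH/dt = 0 along solutions of the paired equations, which is the sense in which the sink of the dissipative system is matched by the source of its mirror.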

 

Fig 82: Field theory model of inward projecting electromagnetic fields
overlapping in basal brain centres (MacIver 2022).

 

On convergence to the attractor the neural activity in each sensory cortex condenses from a gas-like regime of sparse, disordered firing of action potentials at random intervals to a liquid-like macroscopic field of collective activity. The microscopic pulses still occur at irregular intervals, but the probability of firing is no longer random. The neural mass oscillates at the group frequencies, to which the pulses conform in a type of time multiplexing.  The EEG or ECoG (electrocorticogram) scalar field during the liquid phase revealed a burst of beta or gamma oscillation we denoted as a wave packet. Its AM patterns provided the neural correlates of perception and action. The fineness of the surface grain implied that the information capacity of wave packets is very high. The intense electrochemical energy of the fields was provided everywhere by the pre-existing trans-membrane ionic concentration gradients.

 

The theory cites water molecules and the cytosol as the basis for the quantum field description, a position supported at the molecular level by the polarisation of the cytoplasmic medium and all its constituents between aqueous polar and hydrophobic non-polar energetics as illustrated in fig 81.

 

Neurons, glia cells and other physiological units are [treated as] classical objects. The quantum degrees of freedom of the model are associated to the dynamics of the electrical dipoles of the molecules of the basic components of the system, i.e. biomolecules and molecules of the water matrix in which they are embedded. The coherence of the long-range correlations is of the kind described by quantum field theory in a large number of physical systems, in the standard model of particle physics as well as in condensed matter physics, ranging from crystals to magnets, from superconductive metals to superfluids. The coherent states characterizing such systems are stable in a wide range of temperatures.

 

In physiological terms the field consists of heightened ephaptic excitability in an interactive region of neuropil, which creates a dominant focus by which every neuron is sensitized, and to which every neuron contributes its remembrance. In physical terms, the dynamical output of the many-body interaction of the vibrational quanta of the electric dipoles of water molecules and other biomolecules energize the neuropil, the densely compartmentalized tissue of axons, dendrites and glia through which neurons force ionic currents. The boson condensation provides the long-range coherence, which in turn allows and facilitates synaptic communication among neuron populations.

 

The stages of activation of the quantum field boson condensation correspond closely to stages of the Freeman attractor dynamics investigated empirically in the EEG and ECoG:

 

We conceive each action-perception cycle as having three stages, each with its neurodynamics and its psychodynamics (Freeman 2015). Each stage has at least one phase transition and may have two or more before the next stage. In the first stage a boson condensation forms a gamma wave packet by a phase transition in each of the primary sensory cortices. Only in stage one does a phase transition occur in a single cortex. In stage two the entorhinal cortex integrates all modalities before making a gestalt.

When the boson condensation carrying its AM pattern invades and recruits the amygdala and hypothalamus, we propose that this correlates with awareness of emotion and value, with incipient awareness of content. In the second stage a more extended boson condensation forms a larger wave packet in the beta range that extends through the entire limbic system including the entorhinal cortex, which is central in an AM pattern. We believe it correlates with a flash memory unifying the multiple primary percepts into a gestalt, for which the time and place of the subject forming the gestalt are provided by the hippocampus. A third phase transition forms a boson condensation that sustains a global AM pattern, the manifestations of which in the EEG extend over the whole scalp. We propose that the global AM pattern is accompanied by comprehension of the stimulus meaning, which constitutes an up-to-date status summary as the basis for the next intended action.

 

The dual time representation of the quantum field and its double invokes the key innovative and anticipatory features of conscious imagination:

 

Open systems require an environment to provide the sink where their waste energy goes, and a source of free energy which feeds them. From the standpoint of the energy flux balance, brains describe the relevant restructured part of the environment using the time-reversed copy of the system, its complement or Double (Vitiello 2001).  Where do the hypotheses come from? The answer is: from imagination. In theory the best sources for hypotheses are not memories as they appear in experience, but images mirrored backward in time. The imaginings are not constrained by thermodynamics. The mirror sinks and sources are imagined, not emergent. From this asymmetry we infer that the mirror copy exists as a dynamical system of nerve energy, by which the Double produces its hypotheses and predictions, which we experience as perception, and which we test by taking action. It is the Double that imagines the world outside, free from the shackles of thermodynamic reality. It is the Double that soars.

 

Johnjoe McFadden (2020) likewise has a theory of consciousness in which the electromagnetic wave properties of the brain’s EM field interact with the matter properties of “unconscious” neuronal processing. In his own words, he summarises his theory as follows:

 

I describe the conscious electromagnetic information (cemi) field theory which has proposed that consciousness is physically integrated, and causally active, information encoded in the brain’s global electromagnetic (EM) field. I here extend the theory to argue that consciousness implements algorithms in space, rather than time, within the brain’s EM field. I describe how the cemi field theory accounts for most observed features of consciousness and describe recent experimental support for the theory.  … The cemi field theory differs from some other field theories of consciousness in that it proposes that consciousness — as the brain’s EM field — has outputs as well as inputs. In the theory, the brain’s endogenous EM field influences brain activity in a feedback loop (note that, despite its ‘free’ adjective, the cemi field’s proposed influence is entirely causal, acting on voltage-gated ion channels in neuronal membranes to trigger neural firing).

 

The lack of correlation between complexity of information integration and conscious thought is also apparent in the commonplace observation that tasks that must surely require a massive degree of information integration, such as the locomotory actions needed to run across a rugged terrain, may be performed without awareness but simple sensory inputs, such as stubbing your toe, will override your conscious thoughts. The cemi field theory proposes that the non-conscious neural processing involves temporal (computational) integration whereas operations, such as natural language comprehension, require the simultaneous spatial integration provided by the cemi field. … Dehaene (2014) has recently described four key signatures of consciousness: (i) a sudden ignition of parietal and prefrontal circuits; (ii) a slow P3 wave in EEG; (iii) a late and sudden burst of high-frequency oscillations; and (iv) exchange of bidirectional and synchronized messages over long distances in the cortex. It is notable that the only feature common to each of these signatures—aspects of what Dehaene calls a ‘global ignition’ or ‘avalanche’—is large endogenous EM field perturbations in the brain, entirely consistent with the cemi field theory.

 

Jones & Hunt (2023) provide a wide-ranging review of field theories of consciousness culminating in their own favoured theory, combining a panpsychist view concordant with Symbiotic Existential Cosmology, although specifically dependent on EM fields as its key interface. They begin with a critical review of neuronal network approaches to conscious brain function:

 

Neuroscientists usually explain how our different sensory qualia arise in terms of specialized labeled lines with their own detector fibers and processing areas for taste, vision, and other sensory modes. Photoreceptors thus produce color qualia regardless of whether they are stimulated by light, pressure, or other stimuli. This method is supplemented by detailed comparisons of the fibers within each labeled line. For example, the three color fibers overlap in their response to short, medium, and long wavelengths of incoming light. So across-fiber comparisons of their firing rates help disambiguate which wavelengths are actually present. This longstanding view has arisen from various historical roots. But the overall problem is that these operations are so similar in the visual, tactile, and other sensory modes that it is unclear how these methods can differ enough to account for all the stark differences between color and taste qualia, for example. Another issue (which will be addressed more below) concerns the ‘hard problem’ of why this biological information processing is accompanied by any conscious experience of colors, pains, et cetera.

 

It might be thought that recently proposed neuron-based neuroscientific theories of consciousness would offer more viable accounts of how different qualia arise. But they rarely do. For example, Global Neuronal Workspace Theory (GNWT) (e.g., Dehaene and Naccache, 2001; Dehaene, 2014) and Higher-Order Theories (e.g., Rosenthal, 2005) focus on ‘access consciousness’, the availability of information for acting, speaking, and reasoning. This access involves attention and thought. But these higher cognitive levels do not do justice to qualia, for qualia appear even at the very lowest levels of conscious cognition in pre-attentive iconic images.

 

They then explore both integrated information theory and quantum approaches such as that of Hameroff and Penrose, illustrating their limitations:

 

Integrated Information Theory represents qualia information abstractly and geometrically in the form of a system’s ‘qualia space’ (Tononi 2008). This is the space where each axis represents a possible state of the system, a single combination of logic-gate interactions (typically involving synapses). ... IIT’s accounts of qualia spaces are far too complex to specify except in the simplest of cases, and no tests for this method of characterizing qualia have yet been proposed, as far as we are aware.

 

Hameroff and Penrose have not yet addressed how different qualia arise from different quantum states. This latter issue applies to many quantum theories of consciousness. They generally omit mention of how quantum states yield the primary sensory qualia (redness, sweetness, etc.) we are familiar with. For example, Beshkar (2020) contains an interesting QBIT theory of consciousness that attributes qualia to quantum information encoded in maximally entangled states. Yet this information ultimately gets its actual blueness, painfulness, etc. from higher cortical mechanisms criticized above. Another example is Lewtas (2017). He also attributes our primary qualia to quantum levels. Each fundamental particle has some of these various qualia. Synchronized firing by neurons at different frequencies selects from the qualia and binds them to form images. ... The general problem with these highly philosophical qualia theories is that they are hard to evaluate. Their uniting of qualia to quanta is not spelt out in testable detail.

 

They then outline the difficulties network based neuroscience has dealing with qualia:

 

Standard neuroscience has not explained well how the brain’s separate, distributed visual circuits bind together to support a unified image. This is an aspect of the so-called ‘binding problem’ of how the mind’s unity arises ... visual processing uses separate, parallel circuits for color and shape, and it is unclear how these circuits combine to form complete images. Ascending color and shape circuits have few if any synapses for linking their neurons to create colored shapes. Nor do they converge on any central visual area.

 

(1) The coding/correlation problem: As argued above, the neuronal and computational accounts above have failed to find different information-processing operations among neurons that encode our different qualia.

(2) The qualia-integration problem: Computational accounts also face the problem of explaining how myriad qualia are integrated together to produce overall unified perceptions such as visual images.

(3) The hard problem: In addition to the two empirical problems above, computational accounts face a hard, metaphysical problem. Why are neural events accompanied by any qualia at all?

 

They then explore how field theories can address these fundamental issues:

 

EM field approaches to minds have offered new theories of qualia and consciousness, some of which are testable. These electromagnetic approaches seat consciousness primarily in the various complex EM fields generated by neurons, glia and the rest of the brain and body. ... These EM field approaches are proliferating because they draw on considerable experimental evidence and withstand past criticisms from standard neuroscience. For example, they have explained the unity of consciousness in terms of the physical unity (by definition) of EM fields, in contrast to the discrete nature of neurons and their synaptic firing. In the last two decades, they have also offered explanations of how neural EM activity creates different qualia.

 

Pockett’s (2000) theory of qualia is an important landmark in EM field theories of mind. It is rooted in extensive experimental evidence, makes testable predictions, and is strongly defended against critics. If Kohler, Libet, Eccles, and Popper helped establish the EM field approach to minds, Susan Pockett has arguably done more to develop it than anyone else, except perhaps Johnjoe McFadden.  … Pockett’s basic claim is that consciousness is “identical with certain spatiotemporal patterns in the electromagnetic field” (ibid., pp. vi, 109, 136–7). Her evidence comes mainly from extensive EEG and MEG studies of neural electromagnetic fields. They show correlations between sensory qualia and field patterns. For example, EEG studies by Freeman (1991) show that various odors (e.g., from bananas or sawdust) correlate with specific spatial patterns distributed across mammalian olfactory areas.

 

McFadden’s (2002b) theory says that information is conscious at all levels, which seems to entail a form of panpsychism (McFadden, 2002b). The ‘discrete’ consciousness of elementary particles is limited and isolated. But as particles join into a field, they form a unified ‘field’ consciousness. As these fields affect motor neurons, the brain’s consciousness is no longer an epiphenomenon, for its volition can communicate with the world. This level of ‘access’ consciousness serves as a global workspace where specialized processors compete for access to volition’s global, conscious processes. McFadden rejects popular views that minds are just ineffectual epiphenomena of brain activity. Instead, field-nerve interactions are the basis of free will. The conscious field is deterministic, yet it is free in that it affects behavior instead of being epiphenomenal (McFadden, 2002a,b). This treats determinism as compatible with free will construed as self-determination.

 

They postpone the hard problem and focus on the first two above:

 

(1) The coding/correlation problem: What different EM-field activities encode or correlate with the various qualia? Both field theories above face difficulties here.

(2) The qualia-integration problem: How do EM fields integrate myriad qualia to form (for example) unified pictorial images? Here field theories seem quite promising in their ability to improve upon standard neuroscience.

 

They then cite three emergent field theories which have sought to address the outstanding problems faced by the field theories already discussed:

 

Ward and Guevara (2022) localize qualia in the fields generated by a particular part of the brain. Their intriguing thesis is that our consciousness and its qualia are based primarily on structures in thalamic EM fields which serve to model environmental and bodily information in ways relevant to controlling action. Ward and Guevara argue that the physical substrate of consciousness is limited to strong neural EM fields where synchronously firing neurons reinforce each other’s information in a manner which is also integrated and complex. Finally, local, nonsynchronous fields can be canceled out in favor of a dominant field that synchronously and coherently represents all the information from our senses, memories, emotions, et cetera. For these reasons, Ward and Guevara believe that fields are better candidates than neurons and synaptic firing for the primary substrate of consciousness. … they cite four reasons for ascribing consciousness to the thalamus. (1) We are not conscious of all sensory computations, just their end result, which involves the thalamic dynamic core. (2) Thalamic dysfunctions (but not necessarily cortical dysfunctions) are deeply involved in nonconsciousness conditions such as anesthesia, unresponsive wakefulness syndrome, and anoxia. (3) The thalamus is a prime source and controller of synchronization (in itself and in cortex), which is also associated with consciousness. (4) The thalamus (especially its DM nucleus; Ouhaz et al. 2018) is ideally suited for the integrative role associated with consciousness, for cortical feedbacks seem to download cortical computations into thalamus. ... These lines of evidence indicate that while cortex computes qualia, thalamus displays qualia.

 

Another author who attributes qualia to fundamental EM activity is Bond (2023). This clear, succinct paper explains that quantum coherence involves the entanglement of quanta within energy fields, including the EM fields generated by neurons. Neural matter typically lacks this coherence because the haphazard orientation of quantum spins in the matter creates destructive interference and decoherence. Bond proposes the novel idea that firing neurons generate EM fields that can flow through nearby molecular structures and entangle with their atoms. This coherence produces our perceptions. The different subjective feelings of these perceptions come from different hybrids or mixtures of the fields’ wavelengths as they vibrate or resonate. ... On a larger scale, this coherence ties into the well-known phase-locking of corticothalamic feedback loops. Together, they produce the holism or unity of consciousness. This combination of coherent, phase-locked feedback loops and coherent, entangled wave-particles in EM fields is called by Bond a ‘coherence field’. It is investigated by his Coherence Field Theory (CFT).

 

Finally, as joint authors, they elucidate their favoured theory General Resonance Theory, or GRT arising from their independent research:

 

Another approach to the Qualia Problem is Hunt and Schooler’s General Resonance Theory (GRT), which is grounded in a panpsychist framework. GRT assumes that all matter is associated with at least some capacity for phenomenal consciousness (this is called the “panpsychism axiom”), but that consciousness is extremely rudimentary in the vast majority of cases due to a lack of physical complexity mirrored by the lack of mental complexity. The EM fields associated with all baryonic matter (i.e., charged particles) are thought to be the primary seat of consciousness simply because EM fields are the primary force at the scale of life (strong and weak nuclear fields are operative at scales far smaller and gravity is operative mostly at scales far larger). Accordingly, GRT is applicable to all physical structures and as a theory is not limited only to neurobiological or even biological structures (Hunt and Schooler, 2019).

 

GRT suggests that resonance (similar but not synonymous with synchronization and coherence) of various types is the key mechanism by which the basic constituents of consciousness, when in sufficient proximity, combine into more complex types of consciousness. This is the case because shared resonance allows for phase transitions in the speed and bandwidth of information exchange to occur at various organizational levels, allowing previously disordered systems to self-organize and thus become coherent by freely sharing information and energy.
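The claim that shared resonance lets previously disordered systems self-organize can be illustrated, loosely, with the standard Kuramoto model of coupled oscillators. This is not part of GRT itself, merely the textbook demonstration that coupling above a threshold carries a population of oscillators through a phase transition from incoherence to collective coherence; all parameters below are illustrative.

```python
import numpy as np

# Toy Kuramoto model: coupling-induced synchronization as an analogue of
# "shared resonance". Parameters and seed are illustrative only.
rng = np.random.default_rng(1)
N = 200
omega = rng.normal(0.0, 0.5, N)        # spread of natural frequencies
theta0 = rng.uniform(0, 2 * np.pi, N)  # random initial phases

def order_parameter(theta):
    # |mean of e^{i*theta}|: 0 = complete disorder, 1 = full coherence
    return abs(np.exp(1j * theta).mean())

def run(K, theta, steps=3000, dt=0.01):
    # integrate the mean-field Kuramoto equations with coupling strength K
    th = theta.copy()
    for _ in range(steps):
        z = np.exp(1j * th).mean()
        r, psi = abs(z), np.angle(z)
        th = th + dt * (omega + K * r * np.sin(psi - th))
    return order_parameter(th)

r_uncoupled, r_coupled = run(0.0, theta0), run(2.0, theta0)
print(r_uncoupled, r_coupled)  # coupling should raise the coherence
```

With zero coupling the phases drift apart and the order parameter stays near zero; above the critical coupling a macroscopic fraction of oscillators phase-lock, which is the kind of "phase transition in the speed and bandwidth of information exchange" the passage gestures at.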

 

Qualia, in GRT, are synonymous with consciousness, which is simply subjective experience:

 

Jones (2017, 2019), a coauthor of the current paper, has developed an EM-field theory of qualia. Like other field theories, it attributes qualia and images to neural EM-field patterns (and probably the EM-charged matter emitting the fields). Yet these are not the coded images of computational field theories that are based on information processing. Instead, in his theory images actually reside in conscious, pictorial form within the EM fields of neural maps. This is a neuroelectrical, pure panpsychist theory of mind (NP). The ‘pure panpsychism’ says that everything (not just EM) is comprised purely of consciousness. NP addresses the hard problem, qualia-integration problem, and qualia coding/correlation problem in the following ways.

 

(1) The hard problem: How are qualia metaphysically related to brains and computations? In NP, consciousness and its qualia are the hidden nature of observable matter and energy. We are directly aware of our inner conscious thoughts and feelings. Yet we are just indirectly aware of the observable, external world through reflected light, instruments, sense organs, et cetera.

 

(2) The qualia coding/correlation problem: How do our various qualia arise? Yet there is now growing evidence that different qualia correlate with different electrically active substances in cellular membranes found in sensory and emotional circuits. These substances are the membranes’ ion-channel proteins and associated G-protein-coupled receptors (GPCRs). For example, the different primary colors correlate with different OPN1 GPCRs ...  oxytocin and vasopressin receptor proteins correlate with feelings of love, estrogen and testosterone receptors correlate with lust, the endorphin receptor correlates with euphoria, and the adrenaline receptor correlates with vigilance.

 

(3) The qualia-integration problem: First, how do various qualia unify together into an overall whole? Second, how specifically do qualia join point by point to form pictorial images? In NP’s field theory, active circuits create a continuous EM field between neurons that pools their separate, atomized consciousness. This creates a unified conscious mind along brain circuits (with the mind itself residing in the field and perhaps in the charged matter creating the field). This unity is strongest around the diffuse ion currents that run along (and even between) neuronal circuits. It is very strong among well-aligned cortical cells that fire together coherently.

 

In conclusion they state: Consciousness is characterized mainly by its privately experienced qualities (qualia). Standard, computation-based and synapse-based neuroscience have serious difficulties explaining them. ... field theories have improved in key ways upon standard neuroscience in explaining qualia. But this progress is sometimes tentative; it awaits further evidence and development. 

 

Earlier, John Eccles (1986) proposed a brain-mind identity theory involving ‘psychon’ quasi-particles mediating the uncertainty of synaptic transmission to complementary ‘dendrons’, cylindrical bundles of neurons arranged vertically in the six outer layers or laminae of the cortex. Eccles proposed that each of the 40 million dendrons is linked with a mental unit, or ‘psychon’, representing a unitary conscious experience. In willed actions and thought, psychons act on dendrons and, for a moment, increase the probability of firing of selected neurons through a quantum tunnelling effect in synaptic exocytosis, while in perception the reverse process takes place. This model has been elaborated by a number of researchers (Eccles 1990, 1994, Beck & Eccles 1992, Georgiev 2002, Hari 2008). The difficulty with the theory is that the psychons are then physical quasi-particles with integrative mental properties, a contradictory description that manifests subjectivity only through its integrative physical properties.

 

Summarising the state of play, we have two manifestations of consciousness at the interface with objective physical description: (a) the hard problem of consciousness and (b) the problem of quantum measurement, both of which remain in continual debate. Together these provide complementary windows on the abyss in the scientific description, and on the complete solution of existential cosmology that we shall explore in this article.

   

Neural Nets versus Biological Brains

 

Stephen Grossberg is recognised for his contributions to nonlinear systems of differential equations in neuroscience, such as laminar computing, in which the layered cortical structures of mammalian brains provide selective advantages, and complementary computing, the idea that pairs of parallel cortical processing streams compute complementary properties in the brain, each stream having complementary computational strengths and weaknesses, analogous to physical complementarity in the uncertainty principle. Each stream can possess multiple processing stages realising a hierarchical resolution of “uncertainty”, which here means that computing one set of properties at a given stage prevents computation of a complementary set of properties at that stage.

 

“Conscious Mind, Resonant Brain” (Grossberg 2021) provides a panoramic model of the brain, from neural networks to network representations of conscious brain states. In so doing, he presents a view based on resonant non-linear systems, which he calls adaptive resonance theory (ART), in which a subset of “resonant” brain states are associated with conscious experiences. While I applaud his use of non-linear dynamics, ART is an abstract structural neural network model, not what I as a mathematical dynamicist conceive of as “resonance”, and it compares unfavourably in realism with the global neuronal workspace (GNW) model.

 

The primary intuition behind the ART model is that object identification and recognition generally occur as a result of the interaction of 'top-down' observer expectations with 'bottom-up' sensory information. The model postulates that 'top-down' expectations take the form of a memory template or prototype that is then compared with the actual features of an object as detected by the senses. This comparison gives rise to a measure of category belongingness. As long as this difference between sensation and expectation does not exceed a set threshold called the 'vigilance parameter', the sensed object will be considered a member of the expected class. The system thus offers a solution to the 'plasticity/stability' problem, i.e. the problem of acquiring new knowledge without disrupting existing knowledge that is also called incremental learning.
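The match-and-vigilance cycle described above can be sketched in a few lines. The class below (`MiniART`, a hypothetical name) is a drastically simplified ART-1-style scheme for binary inputs: an input resonates with the first stored prototype whose overlap exceeds the vigilance threshold and refines it; otherwise a new category is recruited. The full Carpenter-Grossberg model adds choice functions and weighted bottom-up pathways omitted here.

```python
import numpy as np

class MiniART:
    """Toy ART-1-style categorizer for binary inputs: a sketch of the
    match/vigilance cycle, not the full Carpenter-Grossberg model."""

    def __init__(self, vigilance=0.7):
        self.rho = vigilance   # the 'vigilance parameter' threshold
        self.prototypes = []   # top-down memory templates

    def present(self, x):
        x = np.asarray(x, dtype=bool)  # assumes at least one active bit
        for i, proto in enumerate(self.prototypes):
            overlap = np.logical_and(x, proto)
            # match between bottom-up input and top-down expectation
            if overlap.sum() / x.sum() >= self.rho:
                self.prototypes[i] = overlap  # refine template (stability)
                return i                      # resonance: accept category
        # mismatch with every template: recruit a new one (plasticity)
        self.prototypes.append(x.copy())
        return len(self.prototypes) - 1

art = MiniART(vigilance=0.7)
c0 = art.present([1, 1, 1, 0, 0])  # founds category 0
c1 = art.present([1, 1, 0, 0, 0])  # matches and refines category 0
c2 = art.present([0, 0, 1, 1, 1])  # fails vigilance: new category 1
print(c0, c1, c2, len(art.prototypes))
```

The design choice that addresses the plasticity/stability dilemma is visible directly: learning only narrows an existing template (set intersection) when resonance occurs, while genuinely novel inputs create new categories instead of overwriting old knowledge.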

 

The basic ART structure.

 

The work shows in detail how and why multiple processing stages are needed before the brain can construct a complete and stable enough representation of the information in the world with which to predict environmental challenges and thus control effective behaviours. Complementary computing and hierarchical resolution of uncertainty overcome these problems until perceptual representations that are sufficiently complete, context-sensitive, and stable can be formed. The brain regions where these representations are completed are different for seeing, hearing, feeling, and knowing.

 

His proposed answer is that a resonant state is generated that selectively ‘lights up’ these representations and thereby renders them conscious. These conscious representations can then be used to trigger effective behaviours:

 

My proposed answer is: A resonant state is generated that selectively ‘lights up’ these representations and thereby renders them conscious. These conscious representations can then be used to trigger effective behaviors. Consciousness hereby enables our brains to prevent the noisy and ambiguous information that is computed at earlier processing stages from triggering actions that could lead to disastrous consequences. Conscious states thus provide an extra degree of freedom whereby the brain ensures that its interactions with the environment, whether external or internal, are as effective as possible, given the information at hand.

 

He addresses the hard problem of consciousness in its varying aspects:

 

As Chalmers (1995) has noted: "The really hard problem of consciousness is the problem of experience. When we think and perceive, there is a whir of information-processing, but there is also a subjective aspect. As Nagel (1974) has put it, there is something it is like to be a conscious organism. This subjective aspect is experience. When we see, for example, we experience visual sensations: the felt quality of redness, the experience of dark and light, the quality of depth in a visual field. ... Even after we have explained the functional, dynamical, and structural properties of the conscious mind, we can still meaningfully ask the question, Why is it conscious? There seems to be an unbridgeable explanatory gap between the physical world and consciousness. All these factors make the hard problem hard." … Philosophers vary passionately in their views, between the claim that no Hard Problem remains once it is explained how the brain generates experience, as in the writings of Daniel Dennett, and the claim that it cannot in principle be solved by the scientific method, as in the writings of David Chalmers. See the above reference for a good summary of these opinions.

 

Grossberg demonstrates that, over and above information processing, our brains sometimes go into a context-sensitive resonant state that can involve multiple brain regions. He explores experimental evidence that all conscious states are resonant states, but not vice versa, showing that, since not all brain dynamics are resonant, consciousness is not just a whir of information-processing:

 

When does a resonant state embody a conscious experience? Why is it conscious? And how do different resonant states support different kinds of conscious qualia? The other side of the coin is equally important: When does a resonant state fail to embody a conscious experience? Advanced brains have evolved in response to various evolutionary challenges in order to adapt to changing environments in real time. ART explains how consciousness enables such brains to better adapt to the world's changing demands.

 

Grossberg is realistic about the limits on a scientific explanation of the hard problem:

 

It is important to ask: How far can any scientific theory go towards solving the Hard Problem? Let us suppose that a theory exists whose neural mechanisms interact to generate dynamical states with properties that mimic the parametric properties of the individual qualia that we consciously experience, notably the spatio-temporal patterning and dynamics of the resonant neural representations that represent these qualia. Suppose that these resonant dynamical states, in addition to mirroring properties of subjective reports of these qualia, predict properties of these experiences that are confirmed by psychological and noninvasive neurobiological experiments on humans, and are consistent with psychological, multiple-electrode neurophysiological data, and other types of neurobiological data that are collected from monkeys who experience the same stimulus conditions.

 

He then develops a strategy to move beyond the notion of the neural correlate of consciousness (Crick & Koch 1990), claiming these states are actually the physical manifestation of the conscious state:

 

Given such detailed correspondences with experienced qualia and multiple types of data, it can be argued that these dynamical resonant states are not just "neural correlates of consciousness" that various authors have also discussed, notably David Chalmers and Christof Koch and their colleagues. Rather, they are mechanistic representations of the qualia that embody individual conscious experiences on the psychological level. If such a correspondence between detailed brain representations and detailed properties of conscious qualia occurs for a sufficiently large body of psychological data, then it would provide strong evidence that these brain representations create and support these conscious experiences. A theory of this kind would have provided a linking hypothesis between brain dynamics and the conscious mind. Such a linking hypothesis between brain and mind must be demonstrated before one can claim to have a theory of consciousness.

 

However, he then delineates the claim that this is the most complete scientific account of subjective experience possible, while conceding that it may point to a cosmological problem akin to those in relativity and quantum theory:

 

If, despite such a linking hypothesis, a philosopher or scientist claims that, unless one can "see red" or "feel fear" in a theory of the Hard Problem, it does not contribute to solving that problem, then no scientific theory can ever hope to solve the Hard Problem. This is true because science as we know it cannot do more than to provide a mechanistic theoretical description of the dynamical events that occur when individual conscious qualia are experienced. However, as such a principled, albeit incrementally developing, theory of consciousness becomes available, including increasingly detailed psychological, neurobiological, and even biochemical processes in its explanations, it can dramatically shift the focus of discussions about consciousness, just as relativity theory transformed discussions of space and time, and quantum theory of how matter works. As in quantum theory, there are measurement limitations in understanding our brains.

 

Although he conceives of brain dynamics as being poised just above the level of quantum effects in vision and hearing, Grossberg sees brains as a new frontier of scientific discovery subject to the same principles of complementarity and uncertainty as arise in quantum physics:

 

Complementarity and uncertainty principles also arise in physics, notably in quantum mechanics. Since brains form part of the physical world, and interact ceaselessly with it to adapt to environmental challenges, it is perhaps not surprising that brains also obey principles of complementarity and uncertainty. Indeed, each brain is a measurement device for recording and analyzing events in the physical world. In fact, the human brain can detect even small numbers of the photons that give rise to percepts of light, and is tuned just above the noise level of phonons that give rise to percepts of sound.

 

The Uncertainty Principle identified complementary variables, such as the position and momentum of a particle, that could not both be measured with perfect precision. In all of these theories, however, the measurer who was initiating and recording measurements remained outside the measurement process. When we try to understand the brain, this is no longer possible. The brain is the measurement device, and the process of understanding mind and brain is the study of how brains measure the world. The measurement process is hereby brought into physical theory to an unprecedented degree.

 

Fig 83: Brain centres involved in intentional behaviour and subjectively conscious physical volition: (a) The cortex overlaying the basal ganglia, thalamus, amygdala and substantia nigra involved in planned action, motivation and volition. (b) The interactive circuits in the cortex, striatum and thalamus facilitating intentional motor behaviour. (c) The Motivator model clarifies how the basal ganglia and amygdala coordinate their complementary functions in the learning and performance of motivated acts. Brain areas can be divided into four regions that process information about conditioned stimuli (CSs) and unconditioned stimuli (USs): (a) Object Categories represent visual or gustatory inputs, in anterior inferotemporal (ITA) and rhinal (RHIN) cortices; (b) Value Categories represent the value of anticipated outcomes on the basis of hunger and satiety inputs, in amygdala (AMYG) and lateral hypothalamus (LH); (c) Object-Value Categories resolve the value of competing perceptual stimuli in medial (MORB) and lateral (ORB) orbitofrontal cortex; and (d) the Reward Expectation Filter in the basal ganglia detects the omission or delivery of rewards using a circuit that spans ventral striatum (VS), ventral pallidum (VP), striosomes of the striatum, the pedunculopontine nucleus (PPTN) and midbrain dopaminergic neurons of the SNc/VTA (substantia nigra pars compacta/ventral tegmental area). The network model connecting brain regions is consistent with both quantum and classical approaches and in no way eliminates subjective conscious volition from having an autonomous role. All it implies is that conscious volition arises from an evolved basis in these circuit relationships in mammals.

 

Grossberg sees brains as presenting new issues for science: as measurement devices, they confound the usual separation between the measured effect and the observer making a quantum measurement:

 

Since brains are also universal measurement devices, how do they differ from these more classical physical ideas? I believe that it is the brain's ability to rapidly self-organize, through development and life-long learning, that sets it apart from previous physical theories. The brain thus represents a new frontier in measurement theory for the physical sciences, no less than the biological sciences. It remains to be seen how physical theories will develop to increasingly incorporate concepts about the self-organization of matter, and how these theories will be related to the special case of brain self-organization.

 

Experimental and theoretical evidence will be summarized in several chapters in support of the hypothesis that principles of complementarity and uncertainty that are realized within processing streams better explain the brain's functional organization than concepts about independent modules. Given this conclusion, we need to ask: If the brain and the physical world are both organized according to such principles, then in what way is the brain different from the types of physical theories that are already well-known? Why haven't good theoretical physicists already "solved" the brain using known physical theories?

 

The brain's universal measurement process can be expected to have a comparable impact on future science, once its implications are more broadly understood. Brain dynamics operate, however, above the quantum level, although they do so with remarkable efficiency, responding to just a few photons of light in the dark, and to faint sounds whose amplitude is just above the level of thermal noise in otherwise quiet spaces. Knowing more about how this exquisite tuning arose during evolution could provide important new information about the design of perceptual systems, no less than about how quantum processes interface with processes whose main interactions seem to be macroscopic.

 

In discussing the hierarchical feedback of the cortex and basal ganglia and the limbic system, Grossberg (2015) fluently cites both consciousness and volition as adaptive features of the brain as a self-organising system:

 

The basal ganglia control the gating of all phasic movements, including both eye movements and arm movements. Arm movements, unlike eye movements, can be made at variable speeds that are under volitional basal ganglia control. Arm movements realize the Three Ss of Movement Control; namely, Synergy, Synchrony, and Speed. … Many other brain processes can also be gated by the basal ganglia, whether automatically or through conscious volition. Several of these gating processes seem to regulate whether a top-down process subliminally primes or fully activates its target cells. As noted in Section 5.1, the ART Matching Rule enables the brain to dynamically stabilize learned memories using top-down attentional matching.

 

Such a volitionally-mediated shift enables top-down expectations, even in the absence of supportive bottom-up inputs, to cause conscious experiences of imagery and inner speech, and thereby to enable visual imagery, thinking, and planning activities to occur. Thus, the ability of volitional signals to convert the modulatory top-down priming signals into suprathreshold activations provides a great evolutionary advantage to those who possess it.

 

Such neurosystem models provide key insights into how processes associated with intentional acts, and the reinforcement of sensory experiences through complementary adaptive networks, model the neural correlate of conscious volitional acts and their smooth motor execution in the world at large. As they stand, these are still classical objective models that do not actually invoke conscious volition as experienced, but they do provide deep insight into the brain's adaptive processes accompanying subjective conscious volition.

 

My critique, which is clear and simple, is that these designs remove such a high proportion of the key physical principles involved in biological brain function that they can have no hope of modelling subjective consciousness or volition, despite the liberal use of these terms in the network designs, such as the basal ganglia as gateways. Any pure abstract neural net model, however much it adapts to "resonate" with biological systems, is missing major fundamental formative physical principles of how brains actually work.

 

These include: 

(A)  The fact that biological neural networks are both biochemical and electrochemical in two ways (1) all electrochemical linkages, apart from gap junctions, work through the mediation of biochemical neurotransmitters and (2) the internal dynamics of individual neurons and glia are biochemical, not electrochemical. 

(B) The fact that the electrochemical signals are dynamic and involve sophisticated properties including both (1) unstable dynamics at the edge of chaos and (2) phase coherence tuning between continuous potential gradients and action potentials. 

(C) They involve both neurons and neuroglia working in complementary relationship. 

(D) They involve developmental processes of cell migration determining the global architecture of the brain including both differentiation by the influence of neurotransmitter type and chaotic excitation in early development.

(E) The fact that evolution of biological brains as neural networks is built on the excitatory neurotransmitter-driven social signalling and quantum sentience of single-celled eucaryotes, forming an intimately coupled society of amoebo-flagellate cells communicating by the same neurotransmitters as single-celled eucaryotes, so these underlying dynamics are fundamental and essential to biological neural net functionality.

 

Everything from simple molecules, such as ATP acting as the energy currency of the cell, through protein folding, to enzymes involves quantum effects, such as tunnelling at active sites, and ion channels operate at the same level.

 

It is only a step from there to recognising that such biological processes are actually fractal non-IID (not independently and identically distributed) quantum processes, not converging to the classical, in the light of Gallego & Dakić (2021), because their defining contexts are continually evolving. This provides a causally open view of brain dynamics, in which the extra degree of freedom provided by consciousness, complementing objective physical computation, arises partly through quantum uncertainty itself, as conscious volition becomes subjectively manifest, ensuring survival under uncertain environmental threats.

 

However, this is not just a rational or mechanistically causal process. We evolved from generation upon generation of organisms surviving existential threats in the wild, which were predominantly solved by lightning fast hunch and intuition, and never by rational thought alone, except recently and all too briefly in our cultural epoch.

 

The great existential crises have always been about surviving environmental threats which are not only computationally intractable due to exponentiating degrees of freedom, but computationally insoluble because they involve the interaction of live volitional agents, each consciously violating the rules of the game.

 

Conscious volition evolved to enable subjective living agents to make hunch-like predictions of their own survival in contexts where no algorithmic or deterministic process, including the nascent parallelism of the cortex, limbic system and basal ganglia that Steve Grossberg has drawn attention to, could suffice, other than to define boundary conditions on conscious choices of volitional action. Conscious intentional will, given these constraints, remained the critical factor, complementing computational predictivity generated through non-linear dynamics, best predicting survival of a living organism in the quantum universe, which is why we still possess it.

 

When we come to the enigma of subjective conscious anticipation and volition under survival threats, these are clearly, at the physiological level, the most ancient and most strongly conserved. Although the brains of vertebrates, arthropods and cephalopods show vast network differences, the underlying processes generating consciousness remain strongly conserved to the extent that baby spiders display clear REM features during sleep despite having no obvious neural net correspondence. While graded membrane excitation is universal to all eucaryotes and shared by human phagocytes and amoeba, including the genes for the poisons used to kill bacteria, the action potential appears to have evolved only in flagellate eucaryotes, as part of the flagellar escape response to existential threat, later exemplified by the group flagellation of our choano-flagellate ancestor colonies.

 

All brains are thus intimate societies of dynamically-coupled excitable cells (neurons and glia) communicating through these same molecular social signalling pathways that social single celled eucaryotes use. Both strategic intelligence and conscious volition as edge-of-chaos membrane excitation in global feedback thus arose long before brains and network designs emerged.

 

Just as circuit design models can have predictive value, so does subjective conscious volition of the excitable eucaryote cell have clear survival value in evolution and hence predictive power of survival under existential threat, both in terms of arbitrary sensitivity to external stimuli at the quantum level and neurotransmitter generated social decision-making of the collective organism. Thus the basis of what we conceive of as subjective conscious volition is much more ancient and longer and more strongly conserved than any individual network model of the vertebrate brain and underlies all attempts to form realistic network models.

 

Since our cultural emergence, Homo sapiens has been locked in a state of competitive survival against its own individuals, via Machiavellian intelligence, but broadly speaking, rationality – dependence on rational thought processes as a basis for adaptation – just brings us closer to the machine learning of robots, rather than conscious volition. Steve's representation of the mechanical aspects in the basal ganglia in Grossberg (2015) gives a good account of how living neurosystems adaptively evolve to make the mechanical aspect of the neural correlate of conscious volition possible, but it says little about how we actually survive the tiger's pounce, let alone the ultimate subtleties of human political intrigue, when the computational factors are ambiguous. Likewise decision theory, or prospect theory as noted in Wikipedia, tells us only a relatively obvious asymmetric sigmoidal function describing how risk aversion helps us survive, essentially because being eaten rates more decisively in the cost stakes than any single square meal as a benefit.
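The asymmetric sigmoidal function alluded to here is usually written in the Kahneman–Tversky prospect-theory form, in which losses are weighted more steeply than equivalent gains. A minimal sketch, using their commonly cited parameter estimates (α = β = 0.88, λ = 2.25) purely for illustration:

```python
def prospect_value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Kahneman-Tversky value function over gains and losses.

    Concave for gains (diminishing returns on a square meal),
    convex and steeper for losses (loss aversion, lam > 1):
    being eaten costs more than any meal benefits.
    """
    return x ** alpha if x >= 0 else -lam * (-x) ** beta
```

The loss-aversion coefficient λ > 1 is what makes the curve asymmetric: a loss of a given size weighs roughly twice as heavily as a gain of the same size.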

 

Because proving physical causal closure of the universe in the context of brain dynamics is impossible to achieve in practice in the quantum universe, physical materialism is itself not a scientific concept, so all attempts to model and understand conscious volition remain open, and will continue to do so. The hard problem of consciousness is not a division between science and philosophy as Steve suggests in his (2021) book, but our very oracle of cosmological existence.

 

Epiphenomenalism, Conscious Volition and Free Will

 

Thomas Kuhn (1922–1996) is perhaps the most influential philosopher of science of the twentieth century. His book "The Structure of Scientific Revolutions" (Kuhn 1962) is one of the most cited academic books of all time. A particularly important part of Kuhn's thesis focuses upon the consensus on exemplary instances of scientific research. These exemplars of good science are what Kuhn refers to when he uses the term "paradigm" in a narrower sense. He cites Aristotle's analysis of motion, Ptolemy's computations of planetary positions, Lavoisier's application of the balance, and Maxwell's mathematization of the electromagnetic field as paradigms (ibid, 23). According to Kuhn, the development of a science is not uniform but has alternating "normal" and "revolutionary" (or "extraordinary") phases in which paradigm shifts occur.

 

Rejecting a teleological view of science progressing towards the truth, Kuhn favours an evolutionary view of scientific progress (1962/1970a, 170–3). The evolutionary development of an organism might be seen as its response to a challenge set by its environment. But that does not imply that there is some ideal form of the organism that it is evolving towards. Analogously, science improves by allowing its theories to evolve in response to puzzles, and progress is measured by its success in solving those puzzles; it is not measured by its progress towards an ideal true theory. While evolution does not lead towards ideal organisms, it does lead to greater diversity of kinds of organism. This is the basis of a Kuhnian account of specialisation in science, in which the revolutionary new theory that succeeds in replacing another that is subject to crisis may fail to satisfy all the needs of those working with the earlier theory. One response to this might be for the field to develop two theories, with domains restricted relative to the original theory (one might be the old theory or a version of it).

 

Free will is the notion that we can make real choices which are partially or completely independent of antecedent conditions – "the power of acting without the constraint of necessity or fate; the ability to act at one's own discretion", in the context of the given circumstances. Determinism denies this and maintains that causation is operative in all human affairs. Increasingly, despite the discovery of quantum uncertainty, scientists argue that their discoveries challenge the existence of free will. Studies indicate that informing people about such discoveries can change the degree to which they believe in free will and subtly alter their behaviour, leading to a social erosion of human agency, personal and ethical responsibility.

 

Philosophical analysis of free will divides into two opposing responses. Incompatibilists claim that free will and determinism cannot coexist. Among incompatibilists, metaphysical libertarians, who number among them Descartes, Bishop Berkeley and Kant, argue that humans have free will, and hence deny the truth of determinism. Libertarianism holds onto a concept of free will that requires the agent to be able to take more than one possible course of action under a given set of circumstances, some arguing that indeterminism helps secure free will, others arguing that free will requires a special causal power, agent-causation. Instead, compatibilists argue that free and responsible agency requires the capacities involved in self-reflection and practical deliberation; free will is the ability to make choices based on reasons, along with the opportunity to exercise this ability without undue constraints (Nadelhoffer et al. 2014). This can make rational acts or decisions compatible with determinism.

 

Our concern here is thus not with responsible agency, which may or may not be compatible with determinism, but with affirming the existence of agency not causally determined by physical processes in the brain. Epiphenomenalists accept that subjective consciousness exists, as an internal model of reality constructed by the brain to give a global description of the coherent brain processes involved in perception, attention and cognition, but deny the volitional will over our actions that is central to both reasoned and creative physical actions. This invokes a serious doubt that materialistic neuroscience can be in any way consistent with any form of consciously conceived ethics, because moral or ethical reasoning is reduced to forms of aversive conditioning, consistent with behaviourism and Pavlov's dogs, subjectively rationalised by the subject as a reason. This places volition as being a delusion driven by evolutionary compensation to mask the futility of any subjective belief in organismic agency over the world.

 

Defending subjective volitional agency thus depends centrally on the innovative ability of the subjective conscious agent to generate actions which lie outside the constraints of determined antecedents, placing a key emphasis on creativity and idiosyncrasy, amid physical uncertainty, rather than cognitive rationality, as reasons are themselves subject to antecedents.

 

Bob Doyle notes that in the first two-stage model of free will, William James (1884) proposed that indeterminism is the source for what James calls "alternative possibilities" and "ambiguous futures". The chance generation of such alternative possibilities for action does not in any way limit his choice to one of them. For James, chance is not the direct cause of actions. James makes it clear that it is his choice that grants "consent" to one of them. In 1884, James asked some Harvard Divinity School students to consider his choice for walking home after his talk:

 

What is meant by saying that my choice of which way to walk home after the lecture is ambiguous and matter of chance?... It means that both Divinity Avenue and Oxford Street are called, but that only one, and that one either one, shall be chosen.

 

James was thus the first thinker to enunciate clearly a two-stage decision process, with chance in a present time of random alternatives, leading to a choice which grants consent to one possibility and transforms an equivocal ambiguous future into an unalterable and simple past. There is a temporal sequence of undetermined alternative possibilities followed by an adequately determined choice where chance is no longer a factor. James also asked the students to imagine his actions repeated in exactly the same circumstances, a condition which is regarded today as one of the great challenges to libertarian free will. James anticipates much of modern physical theories of multiple universes:

 

Imagine that I first walk through Divinity Avenue, and then imagine that the powers governing the universe annihilate ten minutes of time with all that it contained, and set me back at the door of this hall just as I was before the choice was made. Imagine then that, everything else being the same, I now make a different choice and traverse Oxford Street. You, as passive spectators, look on and see the two alternative universes,--one of them with me walking through Divinity Avenue in it, the other with the same me walking through Oxford Street. Now, if you are determinists you believe one of these universes to have been from eternity impossible: you believe it to have been impossible because of the intrinsic irrationality or accidentality somewhere involved in it. But looking outwardly at these universes, can you say which is the impossible and accidental one, and which the rational and necessary one? I doubt if the most ironclad determinist among you could have the slightest glimmer of light on this point.

 

Henri Poincaré speculated on how his mind worked when solving mathematical problems. He had the critical insight that random combinations and possibilities are generated, some unconsciously; they are then selected among, perhaps at first also by an unconscious process, but finally by a definite conscious process of validation:

 

It is certain that the combinations which present themselves to the mind in a kind of sudden illumination after a somewhat prolonged period of unconscious work are generally useful and fruitful combinations… all the combinations are formed as a result of the automatic action of the subliminal ego, but those only which are interesting find their way into the field of consciousness… A few only are harmonious, and consequently at once useful and beautiful, and they will be capable of affecting the geometrician's special sensibility I have been speaking of; which, once aroused, will direct our attention upon them, and will thus give them the opportunity of becoming conscious… In the subliminal ego, on the contrary, there reigns what I would call liberty, if one could give this name to the mere absence of discipline and to disorder born of chance.

 

Even the reductionist Daniel Dennett, who is not a libertarian, has his version of two-stage decision-making:

 

The model of decision making I am proposing has the following feature: when we are faced with an important decision, a consideration-generator whose output is to some degree undetermined produces a series of considerations, some of which may of course be immediately rejected as irrelevant by the agent (consciously or unconsciously). Those considerations that are selected by the agent as having a more than negligible bearing on the decision then figure in a reasoning process, and if the agent is in the main reasonable, those considerations ultimately serve as predictors and explicators of the agent's final decision.

 

In his own two-stage model, Arthur Compton championed the idea of human freedom based on quantum uncertainty and invented the notion of the amplification of microscopic quantum events to bring chance into the macroscopic world. Years later, he clarified the two-stage nature of his idea in an Atlantic Monthly article in 1955:

 

A set of known physical conditions is not adequate to specify precisely what a forthcoming event will be. These conditions, insofar as they can be known, define instead a range of possible events from among which some particular event will occur. When one exercises freedom, by his act of choice he is himself adding a factor not supplied by the physical conditions and is thus himself determining what will occur. That he does so is known only to the person himself. From the outside one can see in his act only the working of physical law. It is the inner knowledge that he is in fact doing what he intends to do that tells the actor himself that he is free.

 

At first Karl Popper dismissed quantum mechanics as being no help with free will, but he later described a two-stage model paralleling Darwinian evolution, with genetic mutations being probabilistic and involving quantum uncertainty.

 

In 1977 he gave the first Darwin Lecture, "Natural Selection and the Emergence of Mind". In it he said he had changed his mind (a rare admission by a philosopher) about two things. First, he now thought that natural selection was not a "tautology" that made it an unfalsifiable theory. Second, he had come to accept the random variation and selection of ideas as a model of free will: "The selection of a kind of behavior out of a randomly offered repertoire may be an act of choice, even an act of free will. I am an indeterminist; and in discussing indeterminism I have often regretfully pointed out that quantum indeterminacy does not seem to help us; for the amplification of something like, say, radioactive disintegration processes would not lead to human action or even animal action, but only to random movements. I have changed my mind on this issue. A choice process may be a selection process, and the selection may be from some repertoire of random events, without being random in its turn. This seems to me to offer a promising solution to one of our most vexing problems, and one by downward causation." This is now the leading two-stage model of free will.

 

These accounts span diverse thinkers, from James, through Dennett, to Compton who applied quantum uncertainty, so whether you are a materialist or a mentalist you can adapt two-process volition to your taste. In itself, therefore, it says nothing about the nature of conscious decision-making, or the hard problem of volition. The key is that (1) something generates a set of possibilities, either randomly or otherwise, and (2) the mind/brain chooses one to enact, computationally, rationally or intuitively. Computationalists can say (1) is random and (2) is computational. Quantum mechanics provides for both: (1) is the indeterminacy of collapse in von Neumann process 1, and (2) is the collapsed particle dynamics of the Schrödinger equation, aka von Neumann process 2.
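The two-process scheme above can be sketched in a few lines of Python. This is only a toy illustration: the pseudo-random generator stands in for whatever produces the repertoire in process (1), and the hypothetical utility function stands in for whatever valuation the mind/brain actually applies in process (2).

```python
import random

def generate_options(n_options, seed=None):
    """Process (1): an indeterministic generator proposing candidate actions.
    Ordinary pseudo-randomness stands in here for quantum indeterminacy."""
    rng = random.Random(seed)
    return [rng.uniform(-1.0, 1.0) for _ in range(n_options)]

def evaluate(option):
    """A hypothetical utility function standing in for rational appraisal."""
    return -(option - 0.5) ** 2  # prefers candidate actions near 0.5

def choose(options):
    """Process (2): a deterministic selection over the random repertoire."""
    return max(options, key=evaluate)

candidates = generate_options(8, seed=42)  # repertoire of possibilities
action = choose(candidates)                # the enacted choice
```

The selection is not itself random, which is Popper's point: a choice process may select from a repertoire of random events without being random in its turn.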

 

Symbiotic Existential Cosmology affirms two empirical modes – objectively verified empirical observation and subjectively affirmed empirical experience – both of which are amenable to the same statistical methods. This ties in to the conclusion that subjective conscious volition has efficacy over the physical universe, and to the refutation of pure physicalism: causal closure of the physical universe is unprovable, while the empirical experience of our subjectively conscious actions towards our own physical survival clearly affirms that we have voluntary conscious volition with physical effect.

 

Benjamin Libet has become notorious for his readiness-potential experiments, taken to suggest that consciousness has no physical effect, but his statement on free will precisely echoes Symbiotic Existential Cosmology, with exactly the same ethical emphasis:

 

Given the speculative nature of both determinist and non-determinist theories, why not adopt the view that we do have free will (until some real contradictory evidence may appear, if it ever does). Such a view would at least allow us to proceed in a way that accepts and accommodates our own deep feeling that we do have free will. We would not need to view ourselves as machines that act in a manner completely controlled by the known physical laws.

 

In Symbiotic Existential Cosmology the transactional interpretation is envisaged as allowing a form of prescience, because the collapse has implicit information about the future state of the universe in which the absorbers exist. This may appear logically paradoxical, but no classical information is transferred, so there is no inconsistency. In the model the collapse appears to happen outside space-time, but it is actually instantaneous, so dual-time is just a core part of the heuristic for understanding the non-linear process.

 

Subjective conscious physical volition must be efficacious over and above mere computation, or it would fail to confer an evolutionary advantage and would be eliminated over time by neutral and deleterious mutations in favour of purely computational brains. The fact that this hasn't happened in the 2 billion years since the eucaryote emergence tells us it DOES have an advantage in terms of intuitive anticipation, shared by all animals, who unlike us lack rational thought, and by single-celled eucaryotes, who have nothing more than social neurotransmitters and excitable membranes to do the same uncanny trick. Therefore we have to look to physics and the nature of uncertainty to solve this, because environmental uncertainty has its root in quantum uncertainty, just as throwing a die does by setting off a butterfly-effect process.

 

This evolutionary advantage depends on a transformation of Doyle's (1): transactional collapse is a form of non-random hidden-variable theory, in which non-local correlations of the universal wave function manifest as a complex system during collapse, in a way that looks deceptively like randomness because it is a complex chaotic ergodic process. This completely transforms part (1) of the two-process model of volition, because the intuitive choices are anticipatory, like integral transforms of the future, which we can't put into a logical causality without paradox, but which can coexist before collapse occurs.

 

There is thus a clear biological requirement for subjective conscious physical volition: to ensure survival of existential threats in the wild. We can imagine a computer attempting the two-process model, throwing up heuristic options on a weighted probabilistic basis in process (1) and then optimising in a choice process (2). In a sense this is also what we do when we approach a problem rationally. But that's not what survival in the wild is about. It's about computationally intractable environmental many-body problems that also involve other conscious agents – snakes, tigers and other humans – and so are formally and computationally undecidable. Hence the role of intuition.

 

The transactional interpretation, as in fig 73, becomes the key to avoiding the mechanistic pseudo-deterministic random (1) plus computational (2) scheme of two-process decision-making, and that is why we are able to exist and evolve as conscious anticipating sentient beings. You can imagine that an advanced AI package like ChatGPT can get to the water hole, but there is no evidence this is possible when it is covered in smelly food attractants, with unpredictable predators on the prowl. There is not even any good evidence that rational cognition can save our bacon. It all comes down to salience, sensory acuity, paranoia and intuition.

 

One may think one can depend on randomness alone to provide hypothetical heuristics and avoid getting "stuck in a rut" – as a Hopfield network does via thermodynamic annealing, and as is also key to why the brain uses edge-of-chaos instability – but such randomness is arbitrary and artificial. A computer uses the time and date to seed a non-random, highly ergodic process that merely simulates randomness. And all molecular billiards arises from a wave-particle process of spreading wave functions involving quantum uncertainty, just as photons do. The same applies to decoherence models of collapse.
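The point about simulated randomness can be made concrete: a pseudo-random generator is a fully deterministic iteration whose apparent unpredictability comes entirely from the seed, in practice often taken from the clock. A minimal sketch, with arbitrary illustrative seed values:

```python
import random

def pseudo_stream(seed, n=5):
    """A deterministic, 'highly ergodic' iteration: the same seed always
    reproduces the same sequence, so nothing here is genuinely
    indeterminate in the quantum sense."""
    rng = random.Random(seed)
    return [rng.random() for _ in range(n)]

# In practice the seed is commonly taken from the date and time
# (e.g. time.time_ns()), which is why two runs started at different
# moments look unrelated despite being fully determined.
a = pseudo_stream(12345)
b = pseudo_stream(12345)
assert a == b  # identical sequences: simulated, not physical, randomness
```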

 

This is the ultimate flaw in relying on Doyle's two-process approach, but avoiding it comes at the cost of a speculative leap about what is happening in von Neumann process 1. Quantum transactional collapse can occur instantaneously across space-time in a manner that may well be rationally contradictory about what time is, but is perfectly consistent with conscious intuition. If the universe is in a dynamical state between a multiverse and collapse to classicality, and conscious organisms, among other entities, participate in collapse, we have a link between surviving environmental uncertainty and quantum indeterminacy. If this is just randomness, no anticipatory advantage results; but if it is part of a delocalised complex-system hidden-variable theory, an anticipatory advantage can arise.

 

Any attempt to think about it as a causal sequence, or to reason intuition out rationally, would lead to paradox, so rational thought can't capture it, though intuition does reveal it. But it does not reveal it in a way we can prove with high-sigma causality statistics, because to do that we have to invoke an IID process (an independent, identically-distributed set of measurements), which sends the whole process down the drain of the Born probability interpretation to randomness. The biological reality in ever-changing brain states is that each step changes the measurement context – a non-IID process – so it amounts to Schrödinger turtles all the way down.

 

I am prepared to make this quantum leap into retro-causal special-relativistic transactions because it is consistent with quantum mechanics, and because it urgently needs to be stated and explored more than anything else: it holds the key to why we are here as conscious sentient beings in this universe, in which life rises to climax conscious complexity.

 

Descartes originally claimed that consciousness requires an immaterial soul, which interacts with the body via the pineal gland of the brain.  Gert-Jan Lokhorst (2021) describes the details of how Descartes conceived the action of the pineal on a mechanical body, noting its antiquated basis:

 

The pineal gland played an important role in Descartes' account because it was involved in sensation, imagination, memory and the causation of bodily movements. Unfortunately, however, some of Descartes' basic anatomical and physiological assumptions were totally mistaken, not only by our standards, but also in light of what was already known in his time. … The bodies of Descartes' hypothetical men are nothing but machines: "I suppose the body to be nothing but a statue or machine made of earth, which God forms with the explicit intention of making it as much as possible like us." The working of these bodies can be explained in purely mechanical terms.

 

Fig 88: Diagram from Descartes' Treatise of Man (1664), showing the formation of inverted retinal images in the eyes, and the transmission of these images, via the nerves so as to form a single, re-inverted image (an idea) on the surface of the pineal gland.

 

As a young man, Descartes had a mystical experience in a sauna on the Danube: three dreams, which he interpreted as a message telling him to come up with a theory of everything. On the strength of this he dedicated his life to philosophy, leading to his iconic quote – Cogito ergo sum, "I think therefore I am" – and to Cartesian dualism, immortalised in the homunculus. This means that, in a sense, the Cartesian heritage of dualism is a genuine visionary attempt on Descartes' part to come to terms with his own conscious experience in terms of his cognition, in distinction from the world around him. Once the separation invoked by the term dualism is replaced by complementarity, we arrive at Darwinian panpsychism.

 

Experior, ergo sum, experimur, ergo sumus.

I experience therefore I am, we experience therefore we are!

  

Key to Descartes' interpretation, the active processes that correspond to neuronal action potentials are "animal spirits" running in the nerves from sense organ to the pineal and back out to the muscles:

 

In Descartes' description of the role of the pineal gland, the pattern in which the animal spirits flow from the pineal gland was the crucial notion. He explained perception as follows. The nerves are hollow tubes filled with animal spirits. They also contain certain small fibers or threads which stretch from one end to the other. These fibers connect the sense organs with certain small valves in the walls of the ventricles of the brain. When the sensory organs are stimulated, parts of them are set in motion. These parts then begin to pull on the small fibers in the nerves, with the result that the valves with which these fibers are connected are pulled open, some of the animal spirits in the pressurized ventricles of the brain escape, and (because nature abhors a vacuum) a low-pressure image of the sensory stimulus appears on the surface of the pineal gland. It is this image which then causes sensory perception of whiteness, tickling, pain, and so on. "It is not [the figures] imprinted on the external sense organs, or on the internal surface of the brain, which should be taken to be ideas—but only those which are traced in the spirits on the surface of the gland H (where the seat of the imagination and the common sense is located)."

 

This account is an attempt to explain in one model both subjective consciousness and volition over the world:

 

Finally, Descartes presented an account of the origin of bodily movements. He thought that there are two types of bodily movement. First, there are movements which are caused by movements of the pineal gland. The pineal gland may be moved in three ways: (1) by "the force of the soul," provided that there is a soul in the machine; (2) by the spirits randomly swirling about in the ventricles; and (3) as a result of stimulation of the sense organs. The role of the pineal gland is similar in all three cases: as a result of its movement, it may come close to some of the valves in the walls of the ventricles. The spirits which continuously flow from it may then push these valves open, with the result that some of the animal spirits in the pressurized ventricles can escape through these valves, flow to the muscles by means of the hollow, spirit-filled nerves, open or close certain valves in the muscles which control the tension in those muscles, and thus bring about contraction or relaxation of the muscles.

 

It also embraces higher functioning including imagination:

 

Imagination arises in the same way as perception, except that it is not caused by external objects. Continuing the just-quoted passage, Descartes wrote: "And note that I say 'imagines or perceives by the senses'. For I wish to apply the term 'idea' generally to all the impressions which the spirits can receive as they leave gland H. These are to be attributed to the 'common sense' when they depend on the presence of objects; but they may also proceed from many other causes (as I shall explain later), and they should then be attributed to the imagination."

 

Westphal (2016) notes:  According to Descartes, matter is essentially spatial, and it has the characteristic properties of linear dimensionality. Things in space have a position, at least, and a height, a depth, and a length, or one or more of these. Mental entities, on the other hand, do not have these characteristics. We cannot say that a mind is a two-by-two-by-two-inch cube or a sphere with a two-inch radius, for example, located in a position in space inside the skull. This is not because it has some other shape in space, but because it is not characterized by space at all.

 

 

The whole problem contained in such questions arises simply from a supposition that is false and cannot in any way be proved, namely that,
if the soul and the body are two substances whose nature is different, this prevents them from being able to act on each other
– René Descartes

 

Descartes is surely right about this. The "nature" of a baked Alaska pudding, for instance, is very different from that of a human being, since one is a pudding and the other is a human being — but the two can "act on each other" without difficulty, for example when the human being consumes the baked Alaska pudding and the baked Alaska in return gives the human being a stomachache.

 

In a letter dated May 1643, Princess Elisabeth wrote to Descartes: "I beg you to tell me how the human soul can determine the movement of the animal spirits in the body so as to perform voluntary acts — being as it is merely a conscious substance. For the determination of the movement seems always to come about from the moving body's being propelled — to depend on the kind of impulse it gets from what sets it in motion, or again, on the nature and shape of this latter thing's surface. Now the first two conditions involve contact, and the third involves that the impelling [thing] has extension; but you utterly exclude extension from your notion of soul, and contact seems to me incompatible with a thing's being immaterial."

 

When Elisabeth and Descartes mention "animal spirits" (from the Greek physician and philosopher Galen) they are writing about something that plays roughly the role of signals in the nerve fibres of modern physiology. For Descartes, the animal spirits were not ghostly apparitions, but part of a theory that claimed that muscles were moved by inflation with air, the so-called balloonist theory. The animal spirits were fine streams of air that inflated the muscles, and "animal" does not mean the beasts, but derives from "anima", the soul.

 

Secondly, when Elisabeth writes that "you utterly exclude extension from your notion of soul," she is referring to the fact that Descartes defines mind and matter in such a way that the two are mutually exclusive. Mind is consciousness, which has no extension or spatial dimension, and matter is not conscious, since it is completely defined by its spatial dimensions and location. Since mind lacks a location and spatial dimensions, Elisabeth is arguing, it cannot make contact with matter, which requires physical impinging upon an object and ensuing motion.

 

Westphal (2016) notes: We have inherited the sharp distinction between mind and body, though not exactly in Descartes's form, but we have not inherited Descartes's solution to the mind-body problem. So we are left with the problem, minus a solution. We see that the experiences we have, such as experiences of color, are indeed very different from the electromagnetic radiation that ultimately produces them, or from the activity of the neurons in the brain. We are bound to wonder how the uncolored radiation can produce the color, even if its effects can be followed as far as the neurons in the visual cortex. In other words, we make a sharp distinction between physics and physiology on the one hand, and psychology on the other, without a principled way to connect them.

 

Skirry (2023) notes: Human beings are supposed to be combinations of mind and body such that the mind's choices can cause modes of motion in the body, and motions in certain bodily organs, such as the eye, cause modes of sensation in the mind. ... Take for example a voluntary choice, or willing, to raise one's hand in class to ask a question. The arm moving upward is the effect while the choice to raise it is the cause. But willing is a mode of the non-extended mind alone, whereas the arm's motion is a mode of the extended body alone: how can the non-extended mind bring about this extended effect? It is this problem of voluntary bodily motion, or the so-called problem of "mind to body causation", that so troubled Gassendi and Elizabeth. The crux of their concern was that in order for one thing to cause motion in another, they must come into contact with one another as, for example, in the game of pool the cue ball must be in motion and come into contact with the eight-ball in order for the latter to be set in motion. The problem is that, in the case of voluntary bodily movements, contact between mind and body would be impossible given the mind's non-extended nature. This is because contact must be between two surfaces, but surface is a mode of body, as stated at Principles of Philosophy part II, section 15. Accordingly, the mind does not have a surface that can come into contact with the body and cause it to move. So, it seems that if mind and body are completely different, there is no intelligible explanation of voluntary bodily movement.

 

Lokhorst notes that only a few people accepted Descartes' pineal neurophysiology when he was still alive, and that it was almost universally rejected after his death, partly because the pineal is essentially the same in all mammals and so cannot reflect a uniquely human soul. In philosophy, Cartesian interactionism immediately provoked "either ridicule or disgust" (Spinoza 1677), usually because it was seen as raising more problems than it solved, and it continues to do so to this day.

 

I do not forget, that the illustrious Descartes, though he believed, that the mind has absolute power over its actions, strove to explain human emotions by their primary causes, and, at the same time, to point out a way, by which the mind might attain to absolute dominion over them. However, in my opinion, he accomplishes nothing beyond a display of the acuteness of his own great intellect. … For he maintained, that the soul or mind is specially united to a particular part of the brain, namely, to that part called the pineal gland, by the aid of which the mind is enabled to feel all the movements which are set going in the body, and also external objects, and which the mind by a simple act of volition can put in motion in various ways. He asserted, that this gland is so suspended in the midst of the brain, that it could be moved by the slightest motion of the animal spirits: further, that this gland is suspended in the midst of the brain in as many different manners, as the animal spirits can impinge thereon; and, again, that as many different marks are impressed on the said gland, as there are different external objects which impel the animal spirits towards it; whence it follows, that if the will of the soul suspends the gland in a position, wherein it has already been suspended once before by the animal spirits driven in one way or another, the gland in its turn reacts on the said spirits, driving and determining them to the condition wherein they were, when repulsed before by a similar position of the gland. He further asserted, that every act of mental volition is united in nature to a certain given motion of the gland. 
For instance, whenever anyone desires to look at a remote object, the act of volition causes the pupil of the eye to dilate, whereas, if the person in question had only thought of the dilatation of the pupil, the mere wish to dilate it would not have brought about the result, inasmuch as the motion of the gland, which serves to impel the animal spirits towards the optic nerve in a way which would dilate or contract the pupil, is not associated in nature with the wish to dilate or contract the pupil, but with the wish to look at remote or very near objects. Lastly, he maintained that, although every motion of the aforesaid gland seems to have been united by nature to one particular thought out of the whole number of our thoughts from the very beginning of our life, yet it can nevertheless become through habituation associated with other thoughts. … He thence concludes, that there is no soul so weak, that it cannot, under proper direction, acquire absolute power over its passions. For passions as defined by him are "perceptions, or feelings, or disturbances of the soul, which are referred to the soul as species, and which (mark the expression) are produced, preserved, and strengthened through some movement of the spirits” (Spinoza 1677).

 

This reveals an important principle at the core of unravelling the mind-brain, or consciousness-universe, relationship. Elisabeth's critique is incorrect because she assumed any interaction with a physical process has to be classically causal, but Descartes made the same error from the other side, assuming the interaction portal between subjective and objective is a classical physical entity, the pineal, rather than an unstable process. One of the key postulates of Symbiotic Existential Cosmology is that the subjective and objective aspects of cosmological reality are linked through quantum uncertainty, as are several theories of consciousness that cite quantum physics. Symbiotic Existential Cosmology is also an interactionist cosmology that posits an active relationship between subjective consciousness and the physical universe through the efficacy of subjective conscious volition, but it does so without "nailing its knickers to the petard" of some exceptional quantum process that will likely come to stick its head out for the empirical "chopping block" – as Hameroff-Penrose does, by combining specific unestablished microtubular quantum computations with gravitational-collapse theories of quantum uncertainty. Instead it asserts that the normal biological processes of brain dynamics achieve these capacities as they stand, thus retaining concordance between empirical neuroscience and physics, unscathed by contingent exceptional assumptions.

 

The traditional view of subjective consciousness stemming from Thomas Huxley is that of epiphenomenalism –  the view that mental events are caused by physical events in the brain, but have no effects upon any physical events.

 

The way paradigm shifts occur could be no more starkly illustrated than by the way in which epiphenomenalism, behaviourism and pure materialism, including reductionism, came to dominate the scientific view of reality and the conscious mind.

 

Fig 89: A decapitated frog uses its right foot to try to remove burning acid,
but when that foot is cut off it uses its left, despite having no brain.

 

Huxley (1874) held the view, comparing mental events to a steam whistle that contributes nothing to the work of a locomotive. William James (1879) rejected this view, characterising the epiphenomenalists' mental events as not affecting the brain activity that produces them "any more than a shadow reacts upon the steps of the traveller whom it accompanies" – thus turning subjective consciousness from active agency into a mere passenger. Huxley's essay likewise compares consciousness to the sound of the bell of a clock that has no role in keeping the time, and treats volition simply as a symbol in consciousness of the brain-state cause of an action. Non-efficacious mental events are referred to in this essay as "collateral products" of their physical causes.

 

Klein (2021), in continuing paragraphs, notes that the story begins with Eduard Pflüger's 1853 experiments showing that some decapitated vertebrates exhibit behaviour it is tempting to call purposive. The results were controversial because purposive behaviour had long been regarded as a mark of consciousness. Those who continued to think it was such a mark had to count a pithed frog, and presumably a chicken running around with its head cut off, as conscious. You can see such ideas echoing today in theories such as Solms and Friston's (2018) brain-stem based model of consciousness.

 

But this view opened the way for epiphenomenalism: just as pithed frogs seem to act with purpose even though their behaviour is not really guided by phenomenal consciousness, so intact human behaviours may seem purposive without really being guided by phenomenal consciousness.

 

Fig 90: Representation of consciousness from the seventeenth
century by Robert Fludd, an English Paracelsian physician.

 

Descartes had famously contended that living animals might be like machines in the sense of being non-conscious organisms all of whose behaviours are produced strictly mechanistically. Those in the seventeenth and eighteenth century who adopted a broadly Cartesian approach to animal physiology are often called mechanists, and their approach is typically contrasted with so-called animists. What separated the two groups was the issue of whether and to what extent the mechanical principles of Newton and Boyle could account for the functioning of living organisms.

 

Even for those more inclined towards mechanism, though, animistic tendencies still underlay much physiological thinking throughout the early modern period. For instance, Giovanni Borelli (1608–1679) had developed a mechanistic account of how the heart pumps blood. But even Borelli gave the soul a small but important role in this motion. Borelli contended that the unpleasant accumulation of blood in the heart of the preformed embryo would be perceived by the "sentient faculty" (facultas sensitiva) of the soul through the nerves, which would then prompt the ventricle to contract. Only after the process was thus initiated would the circulation continue mechanistically, as a kind of physical, acquired habit. But the ultimate cause of this motion was the soul.

 

Now, suppose one accepts purposive behaviour as a mark of consciousness (or sensation, or volition, or all of these). Then one arrives at a surprising result indeed: that the brainless frog, properly prepared, remains a conscious agent. Of course, there is a lot riding on just what is meant by 'consciousness', 'sensation', and 'volition'. Pflüger himself often wrote about the decapitated frog's supposed 'consciousness' (Bewusstsein), but was rather loose and poetic in spelling out what that term was to mean. Still, his general thesis was clear enough: that in addition to the brain, the spinal cord is also an organ that independently produces consciousness. One controversial implication is that consciousness itself may be divisible (and so literally extended; see Huxley 1870, 5–6) – it may exist in various parts of the nervous system, even in a part of the spinal cord that has been divided from the brain (Fearing 1930, 162–3).

 

Lotze's thought was that these behaviours seem purposive only because they are complex. If we allow that the nervous system can acquire complex, reflexive actions through bodily learning, then we can maintain that these behaviours are mechanically determined, and not guided or accompanied by any phenomenal consciousness. The difficulty with this response is that pithed frogs find ways to solve physical challenges they cannot be supposed to have faced before being pithed. For instance, suppose one places a pithed frog on its back, holds one leg straight up, perpendicular to the body, and irritates the leg with acid. The pithed frog will then raise the other leg to the same, odd position so as to be able to wipe away the irritant (Huxley 1870 3). Huxley also reports that a frog that is pithed above the medulla oblongata (but below the cerebellum) loses the ability to jump, even though the frog with the brain stem and cerebellum both intact is able to perform this action, at least in response to irritation. A frog pithed just below the cerebrum can see, swallow, jump, and swim, though still will typically move only if prompted by an outer stimulus (Huxley 1870 3–4).

 

Now what does Lewes mean by 'sensation' and 'volition'?

 

Do what we will, we cannot altogether divest Sensibility of its psychological connotations, cannot help interpreting it in terms of Consciousness; so that even when treating of sensitive phenomena observed in molluscs and insects, we always imagine these more or less suffused with Feeling, as this is known in our own conscious states.  (Lewes 1877 188–9)

 

He saw that one must first settle an important issue before it is possible to interpret these experiments. He wrote, “we have no proof, rigorously speaking, that any animal feels; none that any human being feels; we conclude that men feel, from certain external manifestations, which resemble our own, under feeling; and we conclude that animals feel on similar grounds.”

 

Now, inasmuch as the actions of animals furnish us with our sole evidence for the belief in their feeling, and this evidence is universally considered as scientifically valid, it is clear that similar actions in decapitated animals will be equally valid; and when I speak of proof, it is in this sense. Spontaneity and choice are two signs which we all accept as conclusive of sensation and volition. (Lewes 1859 237–8).

  

Does Pflüger’s experiment prove that there is sensation or volition in the pithed frog? We cannot tell, Lewes suggests, until we first settle on some third-person-accessible mark of sensation and volition. And the marks Lewes proposes are spontaneity and choice.

 

For Lewes, every physiological change is in some sense sensory, and every physiological change thereby influences the stream of Consciousness, however slightly.

 

Thomas Huxley (1874) offered the most influential and provocative version of the conscious automaton theory in an address in Belfast. According to this view, consciousness, synonymous with Lewes' 'sensation', accompanies the body without acting on it, just as "the steam-whistle which accompanies the work of a locomotive engine is without influence upon its machinery". Conscious states are continually being caused by brain states from moment to moment, on this view, but are themselves causally inert. In other words, although Huxley accepted the existence of sensation, he rejected the existence of 'volition' (as Lewes had used that word). This is an early form of epiphenomenalism.

 

Pflüger and Lewes had indeed established the existence of purposive behaviour in pithed frogs, Huxley readily conceded (Huxley 1874 223). But since it is absurd (according to Huxley) to think the behaviour of brainless frogs is under conscious control, the correct lesson to draw from Pflüger and Lewes' results was that purposive actions are not sufficient to establish volition. In fact, Huxley evidently was unwilling to accept the existence of any behavioural mark of either sensation or volition.

 

It must indeed be admitted, that, if any one think fit to maintain that the spinal cord below the injury is conscious, but that it is cut off from any means of making its consciousness known to the other consciousness in the brain, there is no means of driving him from his position by logic. But assuredly there is no way of proving it, and in the matter of consciousness, if in anything, we may hold by the rule, ‘De non apparentibus et de non existentibus eadem est ratio’ [‘the reasoning is the same for things that do not appear and things that do not exist’].

(Huxley, 1874, 220)

 

The mechanist’s dilemma is the following ‘paradox’:

 

A: If one accepts any behavioural mark of sensation and volition, then the experimental data will force us to attribute sensation and volition to both decapitated and intact vertebrates alike.

B: If one rejects the existence of a behavioural mark, then one has no grounds for ascribing sensation or volition to either decapitated or intact vertebrates.

 

Huxley’s pronouncement piggybacks on the position he took in the mechanist’s dilemma. His claim that spinal consciousness cannot be observed amounts to the claim that such a consciousness cannot be observed first-personally. But that is the crux of the mechanist’s dilemma.

 

Huxley was nevertheless reverential toward the contribution made by René Descartes to understanding the physiology of the brain and body:

 

The first proposition culled from the works of Descartes which I have to lay before you, is one which will sound very familiar. It is the view, which he was the first, so far as I know, to state, not only definitely, but upon sufficient grounds, that the brain is the organ of sensation, of thought, and of emotion – using the word "organ" in this sense, that certain changes which take place in the matter of the brain are the essential antecedents of those states of consciousness which we term sensation, thought and emotion. ... It remained down to the time of Bichat [150 years later] a question whether the passions were or were not located in the abdominal viscera. In the second place, Descartes lays down the proposition that all movements of animal bodies are effected by a change of form of a certain part of the matter of their bodies, to which he applies the general term of muscle.

 

The process of reasoning by which Descartes arrived at this startling conclusion is well shown in the following passage of the “Réponses:”– But as regards the souls of beasts, although this is not the place for considering them, and though, without a general exposition of physics, I can say no more on this subject than I have already said in the fifth part of my Treatise on Method; yet, I will further state, here, that it appears to me to be a very remarkable circumstance that no movement can take place, either in the bodies of beasts, or even in our own, if these bodies have not in themselves all the organs and instruments by means of which the very same movements would be accomplished in a machine. So that, even in us, the spirit, or the soul, does not directly move the limbs, but only determines the course of that very subtle liquid which is called the animal spirits, which, running continually from the heart by the brain into the muscles, is the cause of all the movements of our limbs, and often may cause many different motions, one as easily as the other.

 

Descartes’ line of argument is perfectly clear. He starts from reflex action in man, from the unquestionable fact that, in ourselves, co-ordinate, purposive, actions may take place, without the intervention of consciousness or volition, or even contrary to the latter. As actions of a certain degree of complexity are brought about by mere mechanism, why may not actions of still greater complexity be the result of a more refined mechanism? What proof is there that brutes are other than a superior race of marionettes, which eat without pleasure, cry without pain, desire nothing, know nothing, and only simulate intelligence as a bee simulates a mathematician? ... Suppose that only the anterior division of the brain – “so much of it as lies in front of the optic lobes” – is removed. If that operation is performed quickly and skilfully, the frog may be kept in a state of full bodily vigour for months, or it may be for years; but it will sit unmoved. It sees nothing: it hears nothing. It will starve sooner than feed itself, although food put into its mouth is swallowed. On irritation, it jumps or walks; if thrown into the water it swims.

 

Klein (2018) notes that the crux of the paradigm shift was the competing research of the opposing groups and the way in which their respective experimental successes at the time determined which approach prevailed:

 

But by the time of the Lewes contribution from 1877, the question was no longer whether this one subset of muscular action could be accounted for purely mechanistically. Now, the question had become whether the mechanistic approach to reflex action might be expanded to cover all muscular action. Lewes wrote that the ‘Reflex Theory’ had become a strategy where one attempted to specify the ‘elementary parts involved’ in every physiological function without ever appealing to ‘Sensation and Volition’ (Lewes, Problems of Life and Mind, 354).

 

‘That the majority of physiological opinion by the close of the century was in favor of the position of Pflüger’s opponents seems certain’, Fearing writes. ‘Mechanistic physiology and psychology was firmly seated in the saddle’ (Fearing, 1930, 185).

 

The concept of a mechanistic reflex arc came to dominate not just physiology, but psychology too. The behaviourist B. F. Skinner, for example, wrote his 1930 doctoral dissertation on how to expand the account of reflex action to cover all behaviour, even the behaviour of healthy organisms. Through the innovations of people like Skinner and, before him, Pavlov, behaviourism would establish itself as the dominant research paradigm.

 

Cannon (1911, 38) gave no real argument for why students should not regard purposive movement as a mark of genuine volition (beyond a quick gesture at Lotze’s long-discredited retort to Pflüger). Without citing any actual experiments, Cannon simply reported, as settled scientific fact, that purposiveness does not entail intended action:

 

Purposive movements are not necessarily intended movements. It is probable that reaction directed with apparent purposefulness is in reality an automatic repetition of movements developed for certain effects in the previous experience of the intact animal. (ibid)

 

Schwartz et al. (2005) highlight the key role William James played in establishing the status of volitional will:

 

William James (1890 138) argued against epiphenomenal consciousness, claiming that ‘The particulars of the distribution of consciousness, so far as we know them, point to its being efficacious’. James (136) stated that ‘consciousness is at all times primarily a selecting agency’. It is present when choices must be made between different possible courses of action. ‘It is to my mind quite inconceivable that consciousness should have nothing to do with a business to which it so faithfully attends’.

 

These liabilities of the notion of epiphenomenal mind and consciousness lead many thinkers to turn to the alternative possibility that a person’s mind and stream of consciousness is the very same thing as some activity in their brain: mind and consciousness are ‘emergent properties’ of brains. A huge philosophical literature has developed arguing for and against this idea.

 

They cite Sperry, who adopted an identity theory approach which he claimed was monist, in invoking a top-down systems theoretic notion of the mind as an abstraction of certain higher-level brain processes:

 

The core ideas of the arguments in favour of an identity-emergent theory of mind and consciousness are illustrated by Roger Sperry’s (1992) example of a wheel. A wheel obviously does something: it is causally efficacious; it carries the cart. It is also an emergent property: there is no mention of ‘wheelness’ in the formulation of the laws of physics and ‘wheelness’ did not exist in the early universe; ‘wheelness’ emerges only under certain special conditions. And the macroscopic wheel exercises ‘top-down’ control of its tiny parts. ... The reason that mind and consciousness are not analogous to ‘wheelness’, within the context of classic physics, is that the properties that characterize ‘wheelness’ are properties that are entailed, within the conceptual framework of classic physics, by properties specified in classic physics, whereas the properties that characterize conscious mental processes, namely the various ways these processes feel, are not entailed, within the conceptual structure provided by classic physics, by the properties specified by classic physics.

 

They quote James again in their theory of volition, based on the repeated application of attention to the issue at hand:

 

In the chapter on will, in the section entitled ‘Volitional effort is effort of attention’, James (1892 417) writes: “Thus we find that we reach the heart of our inquiry into volition when we ask by what process is it that the thought of any given action comes to prevail stably in the mind. ... The essential achievement of the will, in short, when it is most ‘voluntary’, is to attend to a difficult object and hold it fast before the mind. Effort of attention is thus the essential phenomenon of will. ... Consent to the idea’s undivided presence, this is effort’s sole achievement. Everywhere, then, the function of effort is the same: to keep affirming and adopting the thought which, if left to itself, would slip away”.

 

Enshrining the concept of pure behaviourism (Stout 2003, Tanney 2022), and reductionism more generally, Gilbert Ryle (1949) claimed in “The Concept of Mind” that "mind" is "a philosophical illusion hailing from René Descartes, and sustained by logical errors and 'category mistakes' which have become habitual". Ryle rejected Descartes' theory of the relation between mind and body, on the grounds that it approaches the investigation of mental processes as if they could be isolated from physical processes. According to Ryle, the classical theory of mind, or "Cartesian rationalism", makes a basic category mistake (a logical fallacy for which Ryle himself coined the term), as it attempts to analyze the relation between "mind" and "body" as if they were terms of the same logical category. The rationalist theory that there is a transformation into physical acts of some purely mental faculty of "Will" or "Volition" is therefore a misconception, because it mistakenly assumes that a mental act could be and is distinct from a physical act, or even that a mental world could be and is distinct from the physical world. This theory of the separability of mind and body is described by Ryle as "the dogma of the ghost in the machine". However, Ryle did not regard himself as a philosophical behaviourist, though he conceded that the "general trend of this book will undoubtedly, and harmlessly, be stigmatised as ‘behaviourist’."

 

Symbiotic Existential Cosmology classes itself as ICAM – interactively complementary aspect monism – rather than dualism. The Stanford Encyclopaedia of Philosophy definitions for dualism (Robinson 2023) are:

 

Genuine property dualism occurs when, even at the individual level, the ontology of physics is not sufficient to constitute what is there. The irreducible language is not just another way of describing what there is, it requires that there be something more there than was allowed for in the initial ontology. Until the early part of the twentieth century, it was common to think that biological phenomena (‘life’) required property dualism (an irreducible ‘vital force’), but nowadays the special physical sciences other than psychology are generally thought to involve only predicate dualism (that psychological or mentalistic predicates are (a) essential for a full description of the world and (b) are not reducible to physicalistic predicates). In the case of mind, property dualism is defended by those who argue that the qualitative nature of consciousness is not merely another way of categorizing states of the brain or of behaviour, but a genuinely emergent phenomenon.

 

Substance dualism: There are two important concepts deployed in this notion. One is that of substance, the other is the dualism of these substances. A substance is characterized by its properties, but, according to those who believe in substances, it is more than the collection of the properties it possesses, it is the thing which possesses them. So the mind is not just a collection of thoughts, but is that which thinks, an immaterial substance over and above its immaterial states.

 

In the Stanford Encyclopaedia, Tanney (2022) notes that Ryle’s category-error critique was centrally about the assumed distinctness or separability of mind and body as “substances”, in the context of the absurdity of certain verbal sentence constructions:

 

When a sentence is (not true or false but) nonsensical or absurd, though its vocabulary is conventional and its grammatical construction is regular, we say that it is absurd because at least one ingredient expression in it is not of the right type to be coupled or to be coupled in that way with the other ingredient expression or expressions in it. Such sentences, we may say, commit type-trespasses or break type-rules. (1938, 178)

 

The category mistake Ryle identifies in “There is a mind and a body” or “there is a mind or a body” is less obvious. For it takes a fair bit of untangling to show that “mind” and “body” are different logical or grammatical types; a fact which renders the assertion of either the conjunction or the disjunction nonsensical.

 

Robinson (2023) further notes both the veridical affirmation of interactivity in everyday life and the unverifiability of physical causal closure:

 

Interactionism is the view that mind and body – or mental events and physical events – causally influence each other. That this is so is one of our common-sense beliefs, because it appears to be a feature of everyday experience. The physical world influences my experience through my senses, and I often react behaviourally to those experiences. My thinking, too, influences my speech and my actions. There is, therefore, a massive natural prejudice in favour of interactionism. 

 

Causal Closure: Most discussion of interactionism takes place in the context of the assumption that it is incompatible with the world's being 'closed under physics'. This is a very natural assumption, but it is not justified if causal overdetermination of behaviour is possible. There could then be a complete physical cause of behaviour, and a mental one. The problem with closure of physics may be radically altered if physical laws are indeterministic, as quantum theory seems to assert. If physical laws are deterministic, then any interference from outside would lead to a breach of those laws. But if they are indeterministic, might not interference produce a result that has a probability greater than zero, and so be consistent with the laws? This way, one might have interaction yet preserve a kind of nomological closure, in the sense that no laws are infringed. … Some argue that indeterminacy manifests itself only on the subatomic level, being cancelled out by the time one reaches even very tiny macroscopic objects: and human behaviour is a macroscopic phenomenon. Others argue that the structure of the brain is so finely tuned that minute variations could have macroscopic effects, rather in the way that, according to 'chaos theory', the flapping of a butterfly's wings in China might affect the weather in New York. (For discussion of this, see Eccles (1980), (1987), and Popper and Eccles (1977).) Still others argue that quantum indeterminacy manifests itself directly at a high level, when acts of observation collapse the wave function, suggesting that the mind may play a direct role in affecting the state of the world (Hodgson 1988; Stapp 1993).

 

Symbiotic Existential Cosmology does not assert “substance” dualism, since subjective conscious volition is not treated as a “substance” in the manner of objective physical entities, as mind was in the Cartesian dualism that Ryle complained of. SEC invokes a unified Cosmos in which primal subjectivity and the objective universe are complementary, mutually interactive principles in a universe which is not causally closed, and in which volitional will can act without causal conflict, through quantum uncertainty. Life is also subject to overdetermination through teleological influences such as autopoiesis – e.g. in the negentropic nature of life and evolution as self-organising, far-from-equilibrium thermodynamic systems – but this is insufficient to eliminate the need for conscious–physical interactivity to ensure survival. The subjective aspect is fully compliant with the determined physical boundary conditions of brain states, except in so far as subjective volition interacts with environmental quantum-derived uncertainty through quantum-sensitive unstable brain dynamics, forming a contextual filter theory of brain function on conscious experience, rather than a causally-closed universe determining ongoing brain states. Thus no pure-subjective interactivity is required, as occurs in traditional forms of panpsychism, such as pan-proto-psychism or cosmopsychism.

 

The key counter to Ryle's complaint is this: if, in response to a received e-mail, I tell the author – who has demonstrated, by consciously intending to compose and send their reply in physical form – that "you have demonstrated that your subjective conscious volition has efficacy over the physical universe", this is not grammatically, semantically, or categorically absurd, but a direct empirical observation from experience. It raises no physical or philosophical inconsistencies, and fully confirms the empirical experience of subjective physical conscious agency, consistent with the civil and criminal law of conscious intentional responsibility. Ryle's strategy is linguistic. He attacks both the ontological commitment (the view that mind and body are somehow fundamentally different or distinct, but nonetheless interact) and the epistemological commitment (the inability to confirm other people are conscious because subjectivity is private) of what he calls the "official doctrine" (Tanney 2022). The problem is that, by dealing with it in a purely linguistic analysis, we are dealing only with objective semantic and grammatical connotations, so the argument is intrinsically objective. We know that subjectivity is private and objectivity is public. That's just the way it is! We also know that in all our discourses subjective–objective interactivity occurs. A hundred percent of our experience is subjective, and the world around us is inferred from our subjectively conscious experiences of it.

 

The way out is not to deny mind, or consciousness itself which we are all trying to fathom, or we are back to the hard problem of the objectively unfathomable explanatory gap. The way out is that the above statement "you have demonstrated that your subjective conscious volition has efficacy over the physical universe" is something that also involves conscious physical volition we can mutually agree on because it's evidenced in our behaviour in consciously responding to one another. Ryle is sitting by himself in his office dreaming up linguistic contradictions, but these evaporate through mutual affirmation of subjective volition. That's the transactional principle manifest. Then the category error vanishes in the subjective empirical method. This is why extending the hard problem to volition has been essential, because it's the product of conscious volition in behaviour that is verifiable.

 

In the Stanford Encyclopaedia, Tanney (2022) notes that Cartesianism is at worst "dead" in only one of its ontological aspects. Substance dualism may have been repudiated, but property dualism still claims a number of contemporary defenders. Furthermore, although Descartes embraced a form of substance dualism, in the sense that the pineal acted in response to the soul by making small movements that initiated wider responses in the brain, the pineal is still a biological entity, so the category error is misconceived. His description is remarkably similar to instabilities in brain dynamics potentially inducing global changes in brain state. Compounded with the inability of materialism to solve the hard problem, science is thus coming full circle. It is not just a question of sentence construction but of Cosmology.

 

But Ryle's rejection of Cartesian dualism fed a second paradigm shift, in which molecular biology, following Watson and Crick's discovery of the structure of DNA, led to ever more effective 'laying bare' of all biological processes including the brain, accompanied by the new technologies of multi-electrode EEG, MEG and functional magnetic resonance imaging (fMRI). Subjective consciousness thus became effectively ignored in the cascade of purely functionalist results about how human brain dynamics occurs.

 

Anil Seth (2018) notes:

 

The relationship between subjective conscious experience and its biophysical basis has always been a defining question for the mind and brain sciences. But, at various times since the beginnings of neuroscience as a discipline, the explicit study of consciousness has been either treated as fringe or excluded altogether. Looking back over the past 50 years, these extremes of attitude are well represented. Roger Sperry (1969, 532), pioneer of split-brain operations and of what can now be called ‘consciousness science’, lamented in 1969 that ‘most behavioral scientists today, and brain researchers in particular, have little use for consciousness’. Presciently, in the same article he highlighted the need for new technologies able to record ‘the pattern dynamics of brain activity’ in elucidating the neural basis of consciousness. Indeed, modern neuroimaging methods have had a transformative impact on consciousness science, as they have on cognitive neuroscience generally.

 

Informally, consciousness science over the last 50 years can be divided into two epochs. From the mid-1960s until around 1990 the fringe view held sway, though with several notable exceptions. Then, from the late 1980s and early 1990s, first a trickle and more recently a deluge of research into the brain basis of consciousness, a transition catalysed by, among other things, the activities of certain high-profile scientists (e.g. the Nobel laureates Francis Crick and Gerald Edelman) and by the maturation of modern neuroimaging methods, as anticipated by Sperry.

 

Symbiotic cosmology, based on complementarity, unlike a strictly dualist description, is coherent. This coherence – forming a complete whole without discrete distinction – is manifestly true in that we can engage either a subjective discourse on our experiences or an objective account of their material circumstances in every situation in waking life, just as the wave and particle aspects of quanta are coherent and cannot be separated, as complementary manifestations. We thus find that the human discourse on our existential condition has two complementary modes: one fixed in the objective physical description of the world around us using logical and causal operations, and the other describing our subjective conscious experiences, as intelligent sensual beings, which are throughout our lives our sole source of personal knowledge of the physical world around us, without which we would have no access to the universe at large, let alone to our dreams, memories and reflections (Jung 1963), all of which are conscious in nature, and often ascribed to be veridical, rather than imaginary, in the case of dreams and visionary states.

 

In Erwin Schrödinger’s words (1944):  The world is a construction of our sensations, perceptions, memories. It is convenient to regard it as existing objectively on its own. But it certainly does not become manifest by its mere existence” … “The reason why our sentient, percipient and thinking ego is met nowhere within our scientific world picture can easily be indicated in seven words: Because it is itself that world picture”.

 

A central problem faced by detractors of the role of consciousness, in both the contexts of the brain and the quantum universe, is that many of the materialist arguments depend on an incorrectly classical view of causality, or causal closure, in the context of brain dynamics, which is fundamentally inconsistent with quantum reality. In the brain context, this is purported to eliminate an adaptive role for consciousness in human and animal survival, reducing it to a form of epiphenomenalism, in which volitional will would be a self-serving delusion. This follows lines of thinking derived from computational ideas that interfering with a computational process would hinder its efficiency.

 

In relation to volitional will, Chalmers & McQueen (2021) note: There are many aspects to the problem of consciousness, including the core problem of why physical processes should give rise to consciousness at all.  One central aspect of the problem is the consciousness-causation problem: It seems obvious that consciousness plays a causal role, but it is surprisingly hard to make sense of what this role is and how it can be played.

 

The problem with the idea of objective brain processing being causally closed is fivefold. Firstly, the key challenges to organismic survival are computationally intractable, open-environment problems, which may be better served by edge-of-chaos dynamics than classical computation. Secondly, many problems of survival are not causally closed at all, because both evolution and organismic behaviour are creative processes in which there are many viable outcomes, not just a single logically defined, or optimal, one. Thirdly, quantum uncertainty and its deeper manifestations in entanglement are universal, both in the brain and in the environment, so there are copious ways for consciousness to intervene without disrupting causally deterministic processes, and this appears to be its central cosmological role. Fourthly, the notion runs headlong into contradiction with our everyday experience of volition, in which we are consciously aware of our volitional intent and of its effects, both in our purposive decision-making and in acts affecting the world around us. For causal closure to be true, all the purposive decisions on which we depend for our survival would have to be a perceptual delusion, contradicting the manifest nature of veridical perception generally. Fifthly, the work of Libet through to Schurger et al. demonstrates that causal closure is unproven, and it is likely to remain so, given the edge-of-chaos instability of critical brain processes in decision-making in the quantum universe.

 

The Readiness Potential and its Critics

 

Challenging the decision-making role of consciousness, Libet (1983, 1989) asked volunteers to flex a finger or wrist. When they did, the movements were preceded by a dip in the brain signals being recorded, called the "readiness potential" (RP). He interpreted this RP, appearing a few tenths of a second before the volunteers said they had decided to move, as the brain preparing for movement. Libet concluded that unconscious neural processes determine our actions before we are ever aware of making a decision. Since then, others have quoted the experiment as evidence that free will is an illusion.

 

However Libet (1999), in "Do we have free-will?", himself makes the most convincing case possible for subjective consciousness having the capacity for free-will:

 

I have taken an experimental approach to this question. Freely voluntary acts are preceded by a specific electrical change in the brain (the ‘readiness potential’, RP) that begins 550 ms before the act. Human subjects became aware of intention to act 350–400 ms after RP starts, but 200 ms before the motor act. The volitional process is therefore initiated unconsciously. But the conscious function could still control the outcome; it can veto the act. Free will is therefore not excluded. These findings put constraints on views of how free will may operate; it would not initiate a voluntary act but it could control performance of the act. The findings also affect views of guilt and responsibility.

 

But the deeper question still remains: Are freely voluntary acts subject to macro-deterministic laws or can they appear without such constraints, non-determined by natural laws and ‘truly free’? I shall present an experimentalist view about these fundamental philosophical opposites. ... The question of free will goes to the root of our views about human nature and how we relate to the universe and to natural laws. Are we completely defined by the deterministic nature of physical laws? Theologically imposed fateful destiny ironically produces a similar end-effect. In either case, we would be essentially sophisticated automatons, with our conscious feelings and intentions tacked on as epiphenomena with no causal power. Or, do we have some independence in making choices and actions, not completely determined by the known physical laws? The initiation of the freely voluntary act appears to begin in the brain unconsciously, well before the person consciously knows he wants to act! Is there, then, any role for conscious will in the performance of a voluntary act? (see Libet, 1985). To answer this it must be recognized that conscious will (W) does appear about 150 msec before the muscle is activated, even though it follows onset of the RP.

 

Potentially available to the conscious function is the possibility of stopping or vetoing the final progress of the volitional process, so that no actual muscle action ensues. Conscious-will could thus affect the outcome of the volitional process even though the latter was initiated by unconscious cerebral processes. Conscious-will might block or veto the process, so that no act occurs. The existence of a veto possibility is not in doubt. The subjects in our experiments at times reported that a conscious wish or urge to act appeared but that they suppressed or vetoed that. … My conclusion about free will, one genuinely free in the non-determined sense, is then that its existence is at least as good, if not a better, scientific option than is its denial by determinist theory. Given the speculative nature of both determinist and non-determinist theories, why not adopt the view that we do have free will (until some real contradictory evidence may appear, if it ever does). Such a view would at least allow us to proceed in a way that accepts and accommodates our own deep feeling that we do have free will. We would not need to view ourselves as machines that act in a manner completely controlled by the known physical laws.
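The timing figures Libet quotes above fit together arithmetically on a single timeline. A minimal sketch (in Python, using only the figures quoted; the function name is illustrative):

```python
# Libet's reported timings, in milliseconds relative to the motor act
# at t = 0 (negative values = before the act).
RP_ONSET = -550  # the readiness potential begins ~550 ms before the act


def awareness_time(delay_after_rp_ms):
    """Time of conscious awareness of intention (Libet's W), given the
    reported 350-400 ms delay after RP onset."""
    return RP_ONSET + delay_after_rp_ms


# A 350 ms delay places W at 200 ms before the act, matching
# 'aware of intention to act ... 200 ms before the motor act';
# a 400 ms delay places W about 150 ms before the act, matching
# 'conscious will (W) does appear about 150 msec before the muscle'.
print(awareness_time(350), awareness_time(400))
```

The arithmetic shows why Libet could hold both that the volitional process is initiated unconsciously (RP onset precedes W by 350–400 ms) and that a conscious veto remains possible (W still precedes the act by 150–200 ms).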

 

Nevertheless, articulating a theory heavily dependent on the readiness potential, Budson et al. (2022) claim that all the brain's decision-making procedures are unconscious, followed half a second later by conscious experience that is just a memory-based constructive representation of future outcomes. According to the researchers, this theory is important because it explains that all our decisions and actions are actually made unconsciously, although we fool ourselves into believing that we consciously made them:

 

In a nutshell, our theory is that consciousness developed as a memory system that is used by our unconscious brain to help us flexibly and creatively imagine the future and plan accordingly. What is completely new about this theory is that it suggests we don’t perceive the world, make decisions, or perform actions directly. Instead, we do all these things unconsciously and then—about half a second later—consciously remember doing them. We knew that conscious processes were simply too slow to be actively involved in music, sports, and other activities where split-second reflexes are required. But if consciousness is not involved in such processes, then a better explanation of what consciousness does was needed.

 

But this notion is itself a delusion. The conscious brain has evolved to be able to co-opt very fast subconscious processes to orchestrate, in real time, highly accurate, innovative conscious responses, which the agent is fully aware of exercising in real time. The evidence is that conscious control of subconscious fast processing, e.g. via insular von Economo neurons and the basal ganglia, occurs in parallel in real time. Tennis could not be played if the players’ conscious reactions were half a second behind the ball. They could not represent, or accurately respond to, the actual dynamics.

 

Likewise Earl (2014) cites the notion that consciousness is solely information in various forms, associated with a flexible response mechanism (FRM) for decision-making, planning, and generally responding in nonautomatic ways. Both claims are tautologous, because information is both subjective and objective, and non-conscious responses ARE physically automatic. Earl attempts to discount the validity of our subjective experience of volition by claiming it is a false assumption that fails to include all the mechanical details of how an act is generated:

 

When I decide to pick up a cup and do so, I may believe that my thought initiates my action, but what I observe is I have the thought of picking up the cup and then reach out and take the cup. I do not experience the information manipulations that must occur to initiate my action, and I have no evidence that my action is consciously initiated. One tends to assume one’s intentional actions are consciously initiated, but as Wegner and Wheatley (1999) reported, we may perceive our actions as consciously caused if the thought occurs before the act and is consistent with the act, and there are no other likely causes.

 

While this is not going so far as to claim the conscious experience of volition is a delusion that evolved to give the epiphenomenal organism confidence in its ability to act, it is incorrectly claiming our experience of willed intentional decision making behaviour, key to our survival, is a false assumption, associating unconnected causes and effects:

 

In any intentional action, one never experiences the complete sequence of events from the starting conditions to completing the action. Bowers (1984, p. 249) wrote that “one can introspectively notice and/or recall antecedents of one’s behavior but the causal connection linking the determining antecedents and the behavior to be explained is simply not directly accessible to introspection. Rather, the causal link between antecedents and their consequences is provided by an inference, however implicit and invisible.” There are gaps in every experience of intentional choice, intentional initiation of responses, intentional control of attention or behavior, and in thinking, speaking, problem solving, creativity, and every other action with which consciousness is associated; and in each of these activities the executive mental process is missing from consciousness. 

 

These arguments do not constitute a valid critique, given the ability of non-conscious processes to complement and prepare the experiential context for a comprehensive conscious decision. Having to experience every mechanical aspect of an intentional action would subject the flow of subjective consciousness to strategic overload and obliterate the efficiency of the FRM model. Conscious experience gives us the effective overview to act decisively in real time.

 

Libet’s claim has been undermined by more recent studies. Bredikhin et al. (2023) have discovered confounding faults in Libet's procedure. Instead of letting volunteers decide when to move, Trevena and Miller (2010) asked them to wait for an audio tone before deciding whether to tap a key. If Libet's interpretation were correct, the RP should be greater after the tone when a person chose to tap the key. While there was an RP before volunteers made their decision to move, the signal was the same whether or not they elected to tap. Miller concludes that the RP may merely be a sign that the brain is paying attention and does not indicate that a decision has been made. They also failed to find evidence of subconscious decision-making in a second experiment. This time they asked volunteers to press a key after the tone, but to decide on the spot whether to use their left or right hand. As movement in the right limb is related to brain signals in the left hemisphere and vice versa, they reasoned that if an unconscious process were driving this decision, where it occurs in the brain should depend on which hand is chosen, but they found no such correlation.

 

Schurger and colleagues (2012) offer a key explanation. Previous studies have shown that, when we have to make a decision based on sensory input, assemblies of neurons start accumulating evidence in favour of the various possible outcomes. The team reasoned that a decision is triggered when the evidence favouring one particular outcome becomes strong enough to tip the dynamics – i.e. when the neural noise generated by random or chaotic activity accumulates sufficiently so that its associated assembly of neurons crosses a threshold tipping point. The team repeated Libet's experiment, but this time if, while waiting to act spontaneously, the volunteers heard a click they had to act immediately. The researchers predicted that the fastest response to the click would be seen in those in whom the accumulation of neural noise had neared the threshold – something that would show up in their EEG as a readiness potential. In those with slower responses to the click, the readiness potential was indeed absent in the EEG recordings. "We argue that what looks like a pre-conscious decision process may not in fact reflect a decision at all. It only looks that way because of the nature of spontaneous brain activity.” Schurger and Uithol (2015) specifically note the evidence of a sensitively dependent butterfly effect (London et al. 2010), which explains why nervous systems vary their responses to identical stimuli, as a reason why it could be impossible to set out a deterministic decision-making path from contributory systems to a conscious decision, supporting their stochastic accumulator model. Hans Liljenström (2021), using stochastic modelling, concludes that if decisions have to be made fast, emotional processes and aspects dominate, while rational processes are more time-consuming and may result in a delayed decision.
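Schurger's leaky stochastic accumulator lends itself to a minimal simulation. The sketch below (with illustrative parameters of our own choosing, not the paper's fitted values, and helper names that are ours) integrates the model dx = (I - k x)dt + c dW over many trials and back-averages the trajectories time-locked to their first threshold crossing:

```python
import numpy as np

rng = np.random.default_rng(42)

# Leaky stochastic accumulator dx = (I - k*x)dt + c*dW
# (illustrative parameters, not Schurger et al.'s fitted values)
I, k, c, beta = 0.1, 0.5, 0.1, 0.15
dt, n_steps, n_trials, pre = 0.001, 10000, 500, 1000  # pre = 1 s pre-crossing window

x = np.zeros(n_trials)
traces = np.empty((n_trials, n_steps))
for t in range(n_steps):
    # Euler-Maruyama step: leaky drift toward I/k plus Gaussian noise
    x = x + (I - k * x) * dt + c * np.sqrt(dt) * rng.standard_normal(n_trials)
    traces[:, t] = x

crossed = traces >= beta
first = crossed.argmax(axis=1)                 # first threshold crossing per trial
valid = crossed.any(axis=1) & (first >= pre)   # keep trials with a full pre-window
epochs = np.stack([traces[i, f - pre:f]
                   for i, f in zip(np.flatnonzero(valid), first[valid])])

rp = epochs.mean(axis=0)  # back-averaged signal, time-locked to "movement onset"
```

Each single trial is just drifting noise, yet the average aligned to the crossing ramps up toward the threshold like a readiness potential, which is exactly Schurger's point: the apparent pre-decision build-up is an artefact of back-averaging spontaneous fluctuations.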

 

Alexander et al. (2016) establish the lack of linkage of the RP to motor activity:

 

“The results reveal that robust RPs occurred in the absence of movement and that motor-related processes did not significantly modulate the RP. This suggests that the RP measured here is unlikely to reflect preconscious motor planning or preparation of an ensuing movement, and instead may reflect decision-related or anticipatory processes that are non-motoric in nature.”

 

More recently, the actual basis coordinating a decision to act has been found to reside in slowly evolving dopamine modulation. When you reach out to perform an action, seconds before you voluntarily extend your arm, thousands of neurons in the motor regions of your brain erupt in a pattern of electrical activity that travels to the spinal cord and then to the muscles that power the reach. But just prior to this massively synchronised activity, the motor regions in your brain are relatively quiet. For such self-driven movements, a key piece of the “go” signal that tells the neurons precisely when to act has been revealed in the form of a slow ramping up of dopamine in a region deep below the cortex, which closely predicted the moment that mice would begin a movement, seconds into the future (Hamilos et al. 2021).

 

The authors imaged mesostriatal dopamine signals as mice decided when, after a cue, to retrieve water from a spout. Ramps in dopamine activity predicted the timing of licks: fast ramps preceded early retrievals, slow ones preceded late ones. Surprisingly, dopaminergic signals ramped up over seconds between the start-timing cue and the self-timed movement, with variable dynamics that predicted the movement/reward time on single trials. Steeply rising signals preceded early lick-initiation, whereas slowly rising signals preceded later initiation. Higher baseline signals also predicted earlier self-timed movements. Consistent with this view, the dynamics of the slowly evolving endogenous dopaminergic signals quantitatively predicted the moment-by-moment probability of movement initiation on single trials. The authors propose that ramping dopaminergic signals, likely encoding dynamic reward expectation, can modulate the decision of when to move.
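The ramp-to-threshold reading of these findings can be illustrated with a deliberately simple linear model (the function name and parameters are our own toy construction, not the authors' fit): steeper ramps and higher baselines reach a fixed movement threshold sooner.

```python
import numpy as np

def time_to_threshold(baseline, slope, threshold=1.0, dt=0.01, t_max=20.0):
    """Time at which a linear dopamine-like ramp, signal = baseline + slope*t,
    first reaches the movement threshold (toy model, not Hamilos et al.'s fit)."""
    t = np.arange(0.0, t_max, dt)
    signal = baseline + slope * t
    crossed = np.nonzero(signal >= threshold)[0]
    return t[crossed[0]] if crossed.size else None

# Steeper ramps predict earlier movement initiation...
early = time_to_threshold(baseline=0.2, slope=0.4)   # fast ramp
late  = time_to_threshold(baseline=0.2, slope=0.1)   # slow ramp
# ...as do higher baseline signals at a fixed ramp rate.
high_base = time_to_threshold(baseline=0.5, slope=0.2)
low_base  = time_to_threshold(baseline=0.1, slope=0.2)
```

Because the crossing time only fixes the moment of movement probabilistically in the real data, this sketch captures the ordering of initiation times, not their trial-to-trial variability.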

 

Slowly varying neuromodulatory signals could allow the brain to adapt to its environment. Such flexibility wouldn’t be afforded by a signal that always led to movement at the exact same time. Allison Hamilos notes: “The animal is always uncertain, to some extent, about what the true state of the world is. You don’t want to do things the same way every single time — that could be potentially disadvantageous.”

 

This introduces further complexity into the entire pursuit of Libet's readiness potential, which is clearly not itself the defining event; that event is instead concealed in a slowly varying dopamine modulation, which determines the timing of the act only on a probabilistic basis. Furthermore, the striatum is a gatekeeper in the basal ganglia coordinating the underlying conscious decision to act, not the conscious decision itself.

 

Celia Green and Grant Gillett (1995) have also cited three grounds on which the readiness potential is unreliable:

 

First, there is a dual assumption that an intention is the kind of thing that causes an action and that can be accurately introspected. Second, there is a real problem with the method of timing the mental events concerned given that Libet himself has found the reports of subjects to be unreliable in this regard. Third, there is a suspect assumption that there are such things as timable and locatable mental and brain events accompanying and causing human behaviour.

 

Catherine Reason (2016), drawing on Caplain (1996, 2000) and Luna (2016), presents an intriguing logical proof that computing machines, and by extension physical systems, can never be certain that they possess conscious awareness, undermining the principle of computational equivalence (Wolfram 2002, 2021):

 

An omega function is any phi-type function which can be performed, to within a quantified level of accuracy, by some conscious system. A phi-type function is any mapping which associates the state of some system with the truth value of some proposition. The significance of this is that it can be shown that no purely physical system can perform any phi-type function to within any quantified level of accuracy, if that physical system is required to be capable of humanlike reasoning.

 

The proof is as follows:  Let us define a physical process as some process whose existence is not dependent on some observation of that process.  Now let X be the set of all physical processes necessary to perform any phi-type function.   Since the existence of X is not dependent on any given observation of X,  it is impossible to be sure empirically of the existence of X.  If it is impossible to be sure of the existence of X, then it is impossible to be sure of the accuracy of X.  If it is impossible to be sure of the accuracy of X, then it is impossible to be sure that X correctly performs the phi-type function it is supposed to perform.  Since any system capable of humanlike reasoning can deduce this, it follows that no physical system capable of humanlike reasoning can perform any phi-type function without becoming inconsistent.

 

Counterintuitively, this implies that human consciousness is associated with a violation of energy conservation. It also provides another objection to Libet:

 

“even if the readiness potential can be regarded as a predictor of the subject's decision in a classical system, it cannot necessarily be regarded as such in a quantum system. The reason is that the neurological properties underlying the readiness potential may not actually have determinate values until the subject becomes consciously aware of their decision”.

 

In subsequent papers (Reason 2019, Reason & Shah 2021) she expands this argument:

 

I identify a specific operation which is a necessary property of all healthy human conscious individuals — specifically the operation of self-certainty, or the capacity of healthy conscious humans to “know” with certainty that they are conscious. This operation is shown to be inconsistent with the properties possible in any meaningful definition of a physical system.

 

In an earlier paper, using a no-go theorem, it was shown that conscious states cannot be comprised of processes that are physical in nature (Reason, 2019). Combining this result with another unrelated work on causal emergence in physical systems (Hoel, Albantakis and Tononi, 2013), we show in this paper that conscious macrostates are not emergent from physical systems and they also do not supervene on physical microstates.

 

Pivotally, in a forthcoming formalisation of the argument, Reason (2023) cites Descartes' "cogito ergo sum" as a counterexample requiring human consciousness, so the success of her theorem would also free Cartesian duality from Ryle's deathly grip.
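The causal-emergence result of Hoel, Albantakis and Tononi (2013) that Reason combines with her no-go theorem rests on effective information (EI): the mutual information between a uniform (maximum-entropy) intervention distribution over states and its effects. A minimal sketch, using a toy system in the spirit of the paper's examples (the function name and matrices are ours), shows a coarse-grained macro description carrying more EI than its micro substrate:

```python
import numpy as np

def effective_information(tpm):
    """EI of a transition-probability matrix under a uniform intervention
    distribution: mean KL divergence of each row from the average row."""
    tpm = np.asarray(tpm, dtype=float)
    row_avg = tpm.mean(axis=0)
    ei = 0.0
    for row in tpm:
        nz = row > 0  # skip 0*log(0) terms
        ei += np.sum(row[nz] * np.log2(row[nz] / row_avg[nz]))
    return ei / len(tpm)

# Micro system: states 0-2 transition uniformly among {0,1,2}; state 3 is absorbing.
micro = [[1/3, 1/3, 1/3, 0],
         [1/3, 1/3, 1/3, 0],
         [1/3, 1/3, 1/3, 0],
         [0,   0,   0,   1]]

# Macro coarse-graining A = {0,1,2}, B = {3}: A stays in A, B stays in B.
macro = [[1, 0],
         [0, 1]]

ei_micro = effective_information(micro)  # ≈ 0.81 bits
ei_macro = effective_information(macro)  # 1.0 bit
```

The three mixing micro states are causally degenerate, so the micro EI is about 0.81 bits, while the two-state macro partition is fully deterministic at 1 bit: the macro level is causally stronger, which is the signature of causal emergence.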

 

In a counterpoint to this, Travers et al. (2020) suggest the RP is associated with learning and thus reflects motor planning or temporal expectation, though neither planning nor expectation informs about the timing of a decision to act:

 

“Participants learned through trial and error when to make a simple action. As participants grew more certain about when to act, and became less variable and stochastic in the timing of their actions, the readiness potential prior to their actions became larger in amplitude. This is consistent with the proposal that the RP reflects motor planning or temporal expectation. … If the RP reflects freedom from external constraint, its amplitude should be greater early in learning, when participants do not yet know the best time to act. Conversely, if the RP reflects planning, it should be greater later on, when participants have learned, and know in advance, the time of action. We found that RP amplitudes grew with learning, suggesting that this neural activity reflects planning and anticipation for the forthcoming action, rather than freedom from external constraint.”

 

Fifel (2018) reviewing the state of the current research described the following picture:

 

Results from Emmons et al. (2017) suggest that such ramping activity encodes self-monitored time intervals. This hypothesis is particularly pertinent given that self-monitoring of the passing of time by the experimental subjects is intrinsic to the Libet et al. (1983) experiment. Alternatively, although not mutually exclusive, RP might reflect general anticipation (i.e., the conscious experience that an event will soon occur) (Alexander et al., 2016) or simply background neuronal noise (Schurger et al., 2016). Future studies are needed to test these alternatives. … Consequently, we might conclude that: Neuroscience may in no way interfere with our first-person experience of the will, it can in the end only describe it ... it leaves everything as it is.

 

The difficulty of the hard problem, which remains unresolved 26 years later, is also tied to the likewise unresolved problem of the assumed causal closure of the universe in the context of the brain, at the basis of purely materialistic neuroscience. Until it is empirically confirmed, causal closure remains simply a matter of opinion that has grown into a belief system academically prejudiced against hypotheses not compliant with the physical materialistic Weltanschauung.

 

While some neuroscientists (Johnson 2020) imply the hard problem is not even a scientific question, the neuroscience concept of causal closure (Chalmers 2015), based on classical causality or quantum correspondence to it, not only remains empirically unverified in the light of Libet, Schurger and others, but it is unclear that a convincing empirical demonstration is even possible, given that neuronal feedback processes span all scales from the organism to the quantum-uncertain realm and the self-organised criticality of brain dynamics. Finally, it is in manifest conflict with the empirical experience of subjective conscious volitional intent universal to sane human beings.

 

As Bernard Baars commented in conversation:

 

I don't think science needs to, or CAN prove causal closure, because what kind of evidence will prove that? We don't know if physics is "causally closed," and at various times distinguished physicists think they know the answer, but then it breaks down. The Bertrand Russell story broke down, and the Hilbert program in math, and ODEs, and the record is not hopeful on final solutions showing a metaphysically closed system.

 

The status of the neuroscience perspective of causal closure has led to an ongoing debate about the efficacy of human volition and the status of free will (Nahmias 2008, Mele 2014). However, Joshua Shepherd (2017) points out that the neuroscientific threat to free will has not been causally established, particularly in the light of Schurger et al. (2015).

 

For this reason, in treating the hard problem and volitional intent, I will place the onus of proof on materialism to demonstrate itself, and in defence of volition have simply outlined notable features of central nervous processing consistent with an in-principle capacity to operate in a quantum-open state of seamless partial causal closure, involving subjectively conscious efficacy of volitional will physically in decision-making (in the brain) and behaviour (in the world). From this point of view, efficacy of volition is itself a validated empirical experience, near universal to sane conscious humans, thus negating causal closure by veridical affirmation in the framework of symbiotic existential cosmology, where empirical experience has equally valid cosmological status to empirical observation.

 

Libet’s experiment purported to demonstrate an inconsistency, by implying the brain had already made a decision before the conscious experience of it, but Trevena and Miller and Schurger’s team have discredited this imputation.

 

Emergence, Weak, Edge-of-chaos and Strong

 

Key to the question of conscious volition is the profound difference between the notions of strong and weak emergence.  Turkheimer et al. (2019) spell out the difference between these two:

 

Modern Emergence can be divided into two epistemological types: strong and weak. A system is said to exhibit strong emergence when its behaviour, or the consequence of its behaviour, exceeds the limits of its constituent parts. Thus the resulting behavioural properties of the system are caused by the interaction of the different layers of that system, but they cannot be derived simply by analysing the rules and individual parts that make up the system. Weak emergence on the other hand, differs in the sense that whilst the emergent behaviour of the system is the product of interactions between its various layers, that behaviour is entirely encapsulated by the confines of the system itself, and as such, can be fully explained simply through an analysis of interactions between its elemental units.

 

They note that the kind of emergence that surfaced first in the neurosciences was greatly shaped by the earlier work of Roger Sperry (1980), who proposed a view of the brain characterised by a strong top-down organisational component. Sperry was adamant that his model did not imply any form of mind-brain dualism nor a parallel existence of neurobiological and mental processes, but that, after emergence, mental processes would take over and exert control down to the cellular level:

 

It is the idea, in brief, that conscious phenomena as emergent functional properties of brain processing exert an active control role as causal detents in shaping the flow patterns of cerebral excitation. Once generated from neural events, the higher order mental patterns and programs have their own subjective qualities and progress, operate and interact by their own causal laws and principles which are different from and cannot be reduced to those of neurophysiology.

 

Emergence is not just an assumed property of human brains but applies more generally to living systems, including ideas like autopoiesis and the question of whether biological laws are entirely reductionist or form more general fundamental constraints on the behaviour of natural systems.

 

Physical materialism rejects any form of strong emergence asserting that known physical laws can somehow be overridden by mental processes. Weak emergence, for example, allows a reductionistic computational paradigm of brain dynamics to putatively replicate the functional agency of an autonomous system through feedback processes between the environment and the organism, so that purely physicalist descriptions, from artificial intelligence to ideas like Dennett’s multiple drafts model of consciousness discussed in the next section, fit within the pure physicalist regime.

 

Symbiotic Existential Cosmology invokes primal subjectivity as a foundational cosmological complement to the physical universe that is ultimately compliant with physical boundary conditions, and so poses no conflict between subjective panpsychic qualia and empirical physics and neuroscience; but it is not a form of passive mentalism (Carroll 2021), as it is conceived as interacting with physical uncertainty. It also cites the eucaryote endosymbiosis as an emergent topological transition, in which the excitable membrane and neurotransmitter-based social signalling in single-celled species enabled the form of subjective conscious sentience and volition we see in all eucaryotes today.

 

This emergent transition sits right on the boundary between strong and weak emergence, as a form of quantum edge-of-chaos emergence that affirms subjective conscious volition having efficacy over the physical universe, in much the same way Sperry originally cited in 1980, but without claiming to violate established physical laws. This is because it focuses on the indeterminacy of the quantum universe and collapse of the wave function in biological systems as a key avenue through which subjective conscious volition can be physically efficacious yet consistent with the known laws of physics, citing interpretations such as transactional super-causality and super-determinism to provide processes below the classical level that explain both quantum indeterminacy and conscious intentional will in one step, without violating the Born probability interpretation.

 

Hopeful Monster 1: Virtual Machines v Cartesian Theatres

 

Reductionistic descriptions attempting to explain subjective experience objectively frequently display similar pitfalls to creationist descriptions of nature, and those in Biblical Genesis, which project easy, familiar concepts, such as human manufacture, breath, or verbal command, onto the natural universe.

 

Paul Churchland (1985) makes a definitive play for a reductionistic paradigm based on promissory materialism: that the emerging neuroscience description will eclipse and supplant our subjective "folk psychology" views of conscious experience, in a utopian vision of neuroscience ascendant:

 

Consider now the possibility of learning to describe, conceive, and introspectively apprehend the teeming intricacies of our inner lives within the conceptual framework of a matured neuroscience, a neuroscience that successfully reduces, either smoothly or roughly, our common-sense folk psychology. Suppose we trained our native mechanisms to make a new and more detailed set of discriminations, a set that corresponded not to the primitive psychological taxonomy of ordinary language, but to some more penetrating taxonomy of states drawn from a completed neuroscience. And suppose we trained ourselves to respond to that reconfigured discriminative activity with judgments that were framed, as a matter of course, in the appropriate concepts from neuroscience.

 

If the examples of the symphony conductor (who can hear the Am7 chords), the oenologist (who can see and taste the glycol), and the astronomer (who can see the temperature of a blue giant star) provide a fair parallel, then the enhancement in our introspective vision could approximate a revelation. Dopamine levels in the limbic system, the spiking frequencies in specific neural pathways, resonances in the nth layer of the occipital cortex, inhibitory feedback to the lateral geniculate nucleus, and countless other neuro-physical niceties could be moved into the objective focus of our introspective discrimination, just as Gm7 chords and Adim chords are moved into the objective focus of a trained musician's auditory discrimination. We will of course have to learn the conceptual framework of a matured neuroscience in order to pull this off. And we will have to practice its non-inferential application. But that seems a small price to pay for the quantum leap in self-apprehension.

 

All of this suggests that there is no problem at all in conceiving the eventual reduction of mental states and properties to neurophysiological states and properties. A matured and successful neuroscience need only include, or prove able to define, a taxonomy of kinds with a set of embedding laws that faithfully mimics the taxonomy and causal generalizations of folk psychology. Whether future neuroscientific theories will prove able to do this is a wholly empirical question, not to be settled a priori. The evidence for a positive answer is substantial and familiar, centering on the growing explanatory success of the several neurosciences.

 

But there is negative evidence as well: I have even urged some of it myself ("Eliminative Materialism and the Propositional Attitudes," op. cit.). My negative arguments there center on the explanatory and predictive poverty of folk psychology, and they question whether it has the categorial integrity to merit the reductive preservation of its familiar ontology. That line suggests substantial revision or outright elimination as the eventual fate of our mentalistic ontology. The qualia-based arguments of Nagel, Jackson, and Robinson, however, take a quite different line. They find no fault with folk psychology. Their concern is with the explanatory and descriptive poverty of any possible neuroscience, and their line suggests that emergence is the correct story for our mentalistic ontology. Let us now examine their arguments.

 

John Searle (1980) devised his famous "Chinese Room" as a counterexample to a machine having consciousness and intentionality. He supposed that artificial intelligence research had succeeded in constructing a computer large language model that behaves as if it understands Chinese, just as chatGPT now does, and that it performs its task so convincingly that it comfortably passes the Turing test, convincing a human Chinese speaker that the program is itself a live Chinese speaker. Searle then uses the English version of the algorithm to replicate its performance manually in Chinese without any "mind", "understanding", or "consciousness" of the actual language responses or their meaning, thus demonstrating that even though intentionality in human beings (and animals) may be an empirical product of causal features of the relations between mental processes and brains, running a computer program is never by itself a sufficient condition of intentionality. The argument is directed against philosophical functionalism and computationalism, which hold that the mind may be viewed as an information-processing system operating on formal symbols, and that simulation of a given mental state is sufficient for its presence.

 

P & P Churchland (1981) in response present a bleak landscape of the mind, lacking any intentionality beyond that of a machine:

 

Functionalism - construed broadly as the thesis that the essence of our psychological states resides in the abstract causal roles they play in a complex economy of internal states mediating environment inputs and behavioral outputs - seems to us to be free from any fatal or essential shortcomings ... The correct strategy is to argue that our own mental states are just as innocent of "intrinsic intentionality" as are the states of any machine simulation. On our view, all ascriptions of meaning or propositional content are relative - in senses to be explained. The notion of "intrinsic intentionality" (Searle 1980) makes no more empirical sense than does the notion of position in absolute space.

 

In his reductionist account in “Consciousness Explained” Daniel Dennett (1991) cites his “multiple drafts” model of brain processing, as a case of evolutionary competition among competing neural assemblies, lacking overall coherence, thus bypassing the need for subjective consciousness. This exposes a serious problem of conceptual inadequacy with reductionism. Daniel is here writing his book using the same metaphors as the very activities he happens to be using – the message is thus the medium. He can do this as a subjectively conscious being only by suppressing the significance of virtually every form of coherent conscious experience around him, subjugating virtually all features of his conscious existence operating for 100% of his conscious life, in favour of a sequence of verbal constructs having little more explanatory value than a tautology. This is what I call the psychosis of reductionistic materialism, which is shared by many AI researchers and cognitive scientists.

 

Despite describing the mind as a virtual machine, Dennett & Kinsbourne (1995) do concede a conscious mind exists at least as an observer:

 

Wherever there is a conscious mind, there is a point of view. A conscious mind is an observer, who takes in the information that is available at a particular (roughly) continuous sequence of times and places in the universe. ... It is now quite clear that there is no single point in the brain where all information funnels in, and this fact has some far from obvious consequences.

 

But neuroscience has long ceased talking about a single point or single brain locus responsible for consciousness, which is associated with coherent “in phase” activity as a whole. Nevertheless Dennett attempts to mount a lethal attack on any coherent manifestation of subjectivity, asserting there is no single, constitutive "stream of consciousness”:

 

“The alternative, Multiple Drafts model holds that whereas the brain events that discriminate various perceptual contents are distributed in both space and time in the brain, and whereas the temporal properties of these various events are determinate, none of these temporal properties determine subjective order, since there is no single, constitutive "stream of consciousness" but rather a parallel stream of conflicting and continuously revised contents” (Dennett & Kinsbourne 1995).

 

“There is no single, definitive "stream of consciousness," because there is no central Headquarters, no Cartesian Theatre where "it all comes together" for the perusal of a Central Meaner. Instead of such a single stream (however wide), there are multiple channels in which specialist circuits try, in parallel pandemoniums, to do their various things, creating Multiple Drafts as they go. Most of these fragmentary drafts of "narrative" play short-lived roles in the modulation of current activity but some get promoted to further functional roles, in swift succession, by the activity of a virtual machine in the brain. The seriality of this machine (its "von Neumannesque" character) is not a "hard-wired" design feature, but rather the upshot of a succession of coalitions of these specialists.” (Dennett 1991)

 

However, as we shall discuss in the context of the default mode network and psychedelics, there is a balance between top-down processes of control and integration and just such a flood of competing regional bottom-up excitations, which become more able to enter consciousness because of the lowered barriers under the drug.

 

Fig 87a: Homunculus argument infinite regress.

 

Dennett says that, when the dualism is removed, what remains of Descartes' interactionist model amounts to imagining a tiny theatre in the brain where a homunculus (small person), now physical, performs the task of observing all the sensory data projected on a screen at a particular instant, making the decisions and sending out commands. The homunculus argument cites this as a philosophical regress which doesn’t solve the problem but recurses it ad infinitum.

 

The term "Cartesian theatre" was introduced in the context of the multiple drafts model that Dennett posits in Consciousness Explained (1991, 107):

 

Cartesian materialism is the view that there is a crucial finish line or boundary somewhere in the brain, marking a place where the order of arrival equals the order of "presentation" in experience because what happens there is what you are conscious of. ... Many theorists would insist that they have explicitly rejected such an obviously bad idea. But ... the persuasive imagery of the Cartesian Theater keeps coming back to haunt us—laypeople and scientists alike—even after its ghostly dualism has been denounced and exorcized.

 

Yet the ghost Dennett claims to have crushed just keeps coming back to haunt him:

  

Fig 84: Baars’ (1997) view of the Cartesian theatre of consciousness has genuine explanatory power about the easy problem of the relation between peripheral unconscious processing and integrated coherent states associated with consciousness.

 

Bernard Baars' (1997) global workspace theory, in the form of the actors in the Cartesian theatre of consciousness, is creatively provocative of the psyche, and concedes a central role for consciousness. His approach suggests that consciousness is associated with the whole brain in integrated coherent activity, and is thus a property of the brain as a whole functioning entity in relation to the global workspace, rather than arising from specific subsystems.

 

Furthermore, the approach rather neatly identifies the distinction between unconscious processing and conscious experience in the spotlight of attention, accepting conscious experience as a central arena, consistent with whether a given dynamic is confined to asynchronous regional activity or is part of a coherent global response. But again this description is an imaginative representation of Descartes' homunculus in the guise of a Dionysian dramatic production, so it is also a projection onto subjective consciousness, albeit a more engaging one.

 

Lenore and Manuel Blum (2021) have developed a theoretical model of conscious awareness, designed in relation to Baars' global workspace theory, that applies as much to a computer as to an organism:

 

Our view is that consciousness is a property of all properly organized computing systems, whether made of flesh and blood or metal and silicon. With this in mind, we give a simple abstract substrate-independent computational model of consciousness. We are not looking to model the brain nor to suggest neural correlates of consciousness, interesting as they are. We are looking to understand consciousness and its related phenomena.

 

Essentially the theory builds on the known feedbacks between peripheral unconscious processing, short-term memory and the spotlight of conscious attention, paraphrasing these in purely computational terms, utilising a world model that is updated, with notions corresponding to "feelings" and even "dream creation", in which a sleep processor alters the modality of informational chunking.
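
The competitive broadcast dynamic at the heart of such global-workspace models can be sketched in a few lines. This is a hypothetical toy illustration of the flavour of the scheme only, with invented processor names and a simple highest-bid rule; it is not the Blums' formal Conscious Turing Machine:

```python
import operator

# Toy Baars/Blum-style global workspace (hypothetical, highly simplified):
# unconscious processors submit weighted "chunks"; the winning chunk is
# broadcast back to all processors as the content of the "spotlight".

class Workspace:
    def __init__(self, processors):
        self.processors = processors  # callables: last broadcast -> (chunk, weight)
        self.broadcast = None         # current content of the conscious "spotlight"

    def cycle(self):
        # Each processor reacts to the previous broadcast with a bid.
        bids = [p(self.broadcast) for p in self.processors]
        # Competition: the highest-weight chunk wins the spotlight...
        chunk, _ = max(bids, key=operator.itemgetter(1))
        # ...and becomes the next broadcast, visible to every processor.
        self.broadcast = chunk
        return chunk

# Three hypothetical processors bidding for attention:
vision = lambda b: ("edge detected", 0.6)
alarm  = lambda b: ("loud noise", 0.9)
recall = lambda b: ("remembers: %s" % b, 0.3)

ws = Workspace([vision, alarm, recall])
print(ws.cycle())  # the alarm's high-weight chunk wins the competition
```

The point of the sketch is how much the model leaves out: the winner-take-all broadcast reproduces the functional "spotlight", but nothing in the loop requires or entails subjective experience, which is precisely the objection raised below.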

 

While it is possible to conceive of such analogous models, it remains extremely unlikely that any such computational model can capture the true nature of subjective consciousness. By contrast with a Turing machine, which operates discretely and serially on a single mechanistic scale, biological neurosystems operate both continuously and discretely on fractal scales, from the quantum level through molecular and subcellular dynamics up to global brain states, so it remains implausible in the extreme that such computational systems, however complex in structural design, can replicate organismic subjective consciousness. The same considerations apply to artificial neural net designs, which lack the fractal edge-of-chaos dynamics of biological neurosystems.

 

Another discovery pertinent here (Fernandino et al. 2022) is that a careful neuroscientific study has found that lexical semantic information can be reliably decoded from a wide range of heteromodal cortical areas in the frontal, parietal and temporal cortex, but that in most of these areas there is a striking advantage for experience-based representational structures (i.e., encoding information about sensory-motor, affective and other features of phenomenal experience), with little evidence for independent taxonomic or distributional organisation. This shows that experience is the foundational basis for conceptual and cognitive thought, giving it a primary universal status over rational or verbal thought.

 

Jo Edwards in personal communication notes that Peter Slezak (2023) claims that many of the current theories of mind suffer from abstract versions of the homunculus regress due to the invocation of a universal Spectator:

 

I have greatly enjoyed reading Peter Slezak's new book Spectator in the Cartesian Theater. I do not agree with everything he proposes but I found his central thesis very powerful. He says the flaw in so many arguments about mind, often present in both sides of many interminable debates about mind, is the illegitimate invocation of an intelligent Spectator. Everyone is aware of the danger of positing a God's eye view. Slezak is saying that even more pernicious is the assumption of an intelligent or omniscient Spectator view in which we use our own external powers to explain the internal mechanisms of another case of those same powers.

 

The way he sees this panning out is varied and complex, but a simple example is that many modern theories of mind, rather than positing an illegitimate 'homunculus' to solve the insoluble, posit a dynamic schema that would need such a homunculus to work, but deny there is any homunculus, instead using the powers of the external spectator to do the magic. So theories from Tononi, Dehaene, Crick, Kosslyn, McFadden, Fodor, Millikan, you name it, all in one way or another commit the true 'homunculus fallacy' by tacitly trying to explain the inside workings of an intelligent mind by slipping another intelligent mind's powers inside.

 

Consciousness and Broad Integrated Processing: The Global Neuronal Workspace (GNW) model

 

Stanislas Dehaene and Jean-Pierre Changeux (2008, 2011) have combined experimental studies and theoretical models, including Baars' global workspace theory, to address the challenge of establishing a causal link between subjective conscious experience and measurable neuronal activity, in the form of the Global Neuronal Workspace (GNW) model, according to which conscious access occurs when incoming information is made globally available to multiple brain systems through a network of neurons with long-range axons densely distributed in prefrontal, parieto-temporal and cingulate cortices.

 

Converging neuroimaging and neurophysiological data, acquired during minimal experimental contrasts between conscious and nonconscious processing, point to objective neural measures of conscious access: late amplification of relevant sensory activity, long-distance cortico-cortical synchronization at beta and gamma frequencies, and ‘ignition’ i.e. "lighting up" of a large-scale prefronto-parietal network. By contrast, as shown in fig 86, states of reduced consciousness have large areas of cortical metabolic deactivation.

 

Fig 85: Both fMRI (1) and EEG/MEG (2) data show broad activation across diverse linked cortical regions when non-conscious processing rises to the conscious level. Likewise local feed-forward propagation (3) leads to reverberating cortical connections. These influences are combined in the GNW model (4), in which Baars' global workspace theatre becomes a more precisely defined model, attempting to solve several of the easier problems of consciousness via a globally resonant network theory.

 

In conclusion, the authors look ahead to the quest of understanding the conscious brain and what it entails:

 

The present review was deliberately limited to conscious access. Several authors argue, however, for additional, higher-order concepts of consciousness. For Damasio and Meyer (2009), core consciousness of incoming sensory information requires integrating it with a sense of self (the specific subjective point of view of the perceiving organism) to form a representation of how the organism is modified by the information; extended consciousness occurs when this representation is additionally related to the memorized past and anticipated future (see also Edelman, 1989). For Rosenthal (2004), a higher-order thought, coding for the very fact that the organism is currently representing a piece of information, is needed for that information to be conscious. Indeed, metacognition, or the ability to reflect upon thoughts and draw judgements upon them is often proposed as a crucial ingredient of consciousness. In humans, as opposed to other animals, consciousness may also involve the construction of a verbal narrative of the reasons for our behavior (Gazzaniga et al., 1977).

 

Fig 86: Top: Conscious brain states are commonly associated with phase correlated global cortical activity. Conscious brain activity in healthy controls is contrasted with diminished cortical connectivity of excitation in unaware and minimally conscious states (Demertzi et al. 2019).  Bottom: Reduced metabolism during loss of consciousness (Dehaene & Changeux J 2011).

 

In the future, as argued by Haynes (2009), the mapping of conscious experiences onto neural states will ultimately require not only a neural distinction between seen and not-seen trials, but also a proof that the proposed conscious neural state actually encodes all the details of the participant’s current subjective experience. Criteria for a genuine one-to-one mapping should include verifying that the proposed neural state has the same perceptual stability (for instance over successive eye movements) and suffers from the same occasional illusions as the subject’s own report.

 

However, decoding the more intermingled neural patterns expected from PFC and other associative cortices is clearly a challenge for future research. Another important question concerns the genetic mechanisms that, in the course of biological evolution, have led to the development of the GNW architecture, particularly the relative expansion of PFC, higher associative cortices, and their underlying long-distance white matter tracts in the course of hominization. Finally, now that measures of conscious processing have been identified in human adults, it should become possible to ask how they transpose to lower animal species and to human infants and fetuses.

 

In "A better way to crack the brain”, Mainen, Häusser & Pouget (2016) cite novel emerging technologies such as optogenetics as tools likely to eclipse the overriding emphasis on electrical networking data, but at the same time illustrate the enormity of the challenge of neuroscience attempting to address consciousness as a whole.

 

Some sceptics point to the teething problems of existing brain initiatives as evidence that neuroscience lacks well-defined objectives, unlike high-energy physics, mathematics, astronomy or genetics.

In our view, brain science, especially systems neuroscience (which tries to link the activity of sets of neurons to behaviour) does not want for bold, concrete goals. Yet large-scale initiatives have tended to set objectives that are too vague and not realistic, even on a ten-year timescale.

 

Fig 8: Optogenetic images of pyramidal cells in a rodent cortex.

 

Several advances over the past decade have made it vastly more tractable to solve fundamental problems such as how we recognize objects or make decisions. Researchers can now monitor and manipulate patterns of activity in large neuronal ensembles, thanks to new technologies in molecular engineering, micro-electronics and computing. For example, a combination of advanced optical imaging and optogenetics can now read and write patterns of activity into populations of neurons. It is also possible to relate firing patterns to the biology of the neurons being recorded, including their genetics and connectivity.

 

 

However, none of these come even close to stitching together a functional view of brain processing that approaches solving the hard problem, or even establishing causal closure of the universe in the context of brain function, given the extreme difficulty of verifying classical causality in every brain process and the quantum nature of all brain processes at the molecular level. Future prospects for solving the hard problem via the easy ones thus remain unestablished.

  

Hopeful Monster 2: Consciousness and Surviving in the Wild v Attention Schema Theory

 

Real-world survival problems in the open environment don't necessarily have a causally-closed or even a computationally tractable solution, due to exponential runaway as in the travelling salesman problem, thus requiring sensitive dependence on the butterfly effect and intuitive choices. Which route should the antelope take to reach the water hole when it comes to the fork in the trail? The shady path where a tiger might lurk, or the savannah where there could be a lion in the long grass? All the agents are conscious sentient beings using innovation and stealth, so computations depending on reasoned memory are unreliable, because the adversaries can also adapt their strategies and tactics to frustrate the calculations. The subtlest sensory hints of crisis amid split-second timing are also pivotal. There is thus no tractable solution. Integrated anticipatory intuition, combined with a historical knowledge of the terrain, appears to be the critical survival advantage of sentient consciousness in the prisoners' dilemma of survival, just as sexuality is in the Red Queen race (Ridley 1996) between hosts and parasites. This coherent anticipation possessed by subjective consciousness appears to be the evolutionary basis for the emergence and persistence of subjective consciousness as a quantum-derived form of anticipation of adventitious risks to survival, not cognitive processes of verbal discourse.
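
The combinatorial runaway is easy to exhibit: the number of distinct round trips through n waypoints is (n-1)!/2, so brute-force evaluation becomes astronomically infeasible within a few dozen waypoints. A minimal sketch (the waypoint counts are arbitrary illustrations):

```python
import math

# Distinct round trips through n waypoints: (n - 1)! / 2
# (fix the starting point, divide by 2 for travel direction).
# A brute-force planner must score every one of them.
def tour_count(n):
    return math.factorial(n - 1) // 2

for n in (5, 10, 20):
    print(n, tour_count(n))
# 5 waypoints  -> 12 tours
# 10 waypoints -> 181,440 tours
# 20 waypoints -> ~6.1e16 tours: far beyond any real-time exhaustive search
```

An antelope at the fork faces an analogous explosion of contingencies, with conscious adversaries re-weighting the options in real time, which is why the argument above favours anticipatory intuition over exhaustive computation.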

 

Michael Graziano's attention schema theory, or AST (Graziano 2016, 2017; Webb & Graziano 2015), self-described as a mechanistic account of subjective awareness, which emerged in parallel with my own work (King 2014), gives an account of the evolutionary development of the animal brain, taking account of the adaptive processes essential for survival, to arrive at the kind of brains and conscious awareness we experience:

 

“We propose that the top-down control of attention is improved when the brain has access to a simplified model of attention itself. The brain therefore constructs a schematic model of the process of attention, the 'attention schema', in much the same way that it constructs a schematic model of the body, the 'body schema'. The content of this internal model leads a brain to conclude that it has a subjective experience, a non-physical, subjective awareness, and assigns a high degree of certainty to that extraordinary claim”.

 

Fig 91: Which route should the antelope take to reach the water hole when it comes to the fork in the trail? The shady path where a tiger might lurk, or the savannah where there could be a lion in the long grass? Real-world survival problems require intuitive multi-option decisions, creativity and often split-second timing, demanding anticipatory consciousness. Thus modelling the existence or otherwise of subjective consciousness based only on causal concepts and verbal reasoning processes gives a false evolutionary and cosmological view. Here is where the difference between a conscious organism and an AI robot attempting to functionally emulate it is laid bare in tooth and claw.

 

However, in Graziano (2013, 2019), he advocates illusionism, the idea that subjective consciousness and volitional will are a self-fulfilling evolutionary delusion, so that the author believes AST, as a purely mechanistic principle, could in principle be extended to a machine without the presence of subjective consciousness: “Such a machine would believe it is conscious and act like it is conscious, in the same sense that the human machine believes and acts.”

 

It remains unclear that a digital computer or AI process can achieve this, notwithstanding ChatGPT. Ricci et al. (2021) note in concluding remarks on one of the most fundamental and elementary tasks, abstract same-different discrimination: “The aforementioned attention and memory network models are stepping stones towards the flexible relational reasoning that so epitomizes biological intelligence. However, current work falls short of the — in our view, correct — standards for biological intelligence set by experimentalists like Delius (1994) or theorists like Fodor (1988).”

 

Yet AST is a type of filter theory similar to Huxley’s ideas about consciousness, so it invokes a principle of neural organisation that is consistent with and complementary to subjective consciousness: “Too much information constantly flows in to be fully processed. The brain evolved increasingly sophisticated mechanisms for deeply processing a few select signals at the expense of others, and in the AST, consciousness is the ultimate result of that evolutionary sequence.

 

The overall idea of a purely physical internal model of reality representing its own attention process, thus enabling it to observe itself, is an astute necessary condition for the sort of subjective consciousness we find across the metazoa, but it is in no way sufficient to solve the hard problem, or to address any more than the one easy problem it addresses, recursive attention. However its description of fundamental changes in overall brain architecture, summarised in Graziano (2016), highlights that the actual evolutionary forces shaping the development of the conscious mind lie in the paranoia of survival in the jungle, as noted in fig 91, rather than the verbal contortions of philosophical discourse:

 

 “If the wind rustles the grass and you misinterpret it as a lion, no harm done.
But if you fail to detect an actual lion, youre taken out of the gene pool” (Michael Graziano 2016).

 

However Graziano (2020), in claiming why AST “has to be right”, commits to de-subjectifying consciousness in favour of an AI analysis of recursive attention systems. In relation to the reality of consciousness, in his words: “I have a subjective, conscious experience. It's real; it's the feeling that goes along with my brain's processing of at least some things. I say I have it and I think I have it because, simply, I do have it. Let us accept its existence and stop quibbling about illusions”, he attempts a structural finesse based on recursive attention:

 

“Suppose the brain has a real consciousness. Logically, the reason why we intuit and think and say we have consciousness is not because we actually have it, but must be because of something else; it is because the brain contains information that describes us having it. Moreover, given the limitations on the brain's ability to model anything in perfect detail, one must accept that the consciousness we intuit and think and say we have is going to be different from the consciousness that we actually have. … I will make the strong claim here that this statement, that the consciousness we think we have is different from, simpler than, and more schematic than, the consciousness we actually have, is necessarily correct. Any rational, scientific approach must accept that conclusion. The bane of consciousness theorizing is the naïve, mistaken conflation of what we actually have with what we think we have. The attention schema theory systematically unpacks the difference between what we actually have and what we think we have. In AST, we really do have a base reality to consciousness: we have attention, the ability to focus on external stimuli and on internal constructs, and by focusing, process information in depth and enable a coordinated reaction. We have an ability to grasp something with the power of our biological processor. Attention is physically real. It's a real process in the brain, made out of the interactions of billions of neurons. The brain not only uses attention, but also constructs information about attention, a model of attention. The central hypothesis of AST is that, by the time that information about attention reaches the output end of the pathway …, we're claiming to have a semi-magical essence inside of us: conscious awareness. The brain describes attention as a semi-magical essence because the mechanistic details of attention have been stripped out of the description.”

 

These are simply opinions about a hidden underlying information structure, confusing conscious experience itself with the recursive attention structures that any realistic description has to entail to bring physical brain processing into any kind of concordance with environmental reality. His inability to distinguish organismic consciousness from AI is evidenced in Graziano (2017), where he sets out AST as a basis for biologically realisable artificial intelligence systems.

 

The actual answer to this apparent paradox, which leaves our confidence in our conscious volition in tatters, is that the two processes, neural-net attention schemata and subjective consciousness, have both been selected by evolution to ensure survival of the organism from existential threats, and they have done so as complementary processes. Organismic brains evolved from the excitable sentience of single-celled eucaryotes and their social signalling molecules that became our neurotransmitters, a billion years after these same single-celled eucaryotes had to solve just these problems of growth and survival in the open environment. Brains are thus built as an intimately coupled society of excitable eucaryote cells communicating by both electrochemical and biochemical means via neurotransmitters, in such a way that the network process is an evolutionary elaboration of the underlying cellular process, both of which have been conserved by natural selection because both contribute to organismic survival by anticipating existential threats.

 

This is the only possible conclusion, because the presence of attention schemata does not require the manifestation of subjective consciousness to the conscious participant unless that too plays an integral role in survival of the organism. Indeed an artificial neural net with recursive attention schemes would do just that with no consciousness implied, as consciousness would be a superfluous energy demand unless it had selective advantage.

 

An adjunct notion is the ALARM theory (Newen & Montemayor 2023), according to which we need to distinguish two levels of consciousness, namely basic arousal and general alertness. Basic arousal functions as a specific alarm system, keeping a biological organism alive under sudden intense threats, while general alertness enables flexible learning and behavioural strategies. This two-level theory of consciousness helps to account for recent discoveries of subcortical brain activities with a central role of thalamic processes, and observations of differences in the behavioural repertoire of non-human animals indicating two types of conscious experiences. The researchers claim this enables them to unify the neural evidence for the relevance of sub-cortical processes on the one hand and of cortico-cortical loops on the other, and to clarify the evolutionary and actual functional role of conscious experiences.

 

They derive evidence primarily from two animal studies. In Afrasiabi et al. (2021), macaques were anaesthetised and the researchers stimulated the central lateral thalamus. The stimulation acted as a switch to trigger consciousness. However, it prompted only basic arousal, because the macaques could feel pain, see things and react to them, but, unlike in regular wakefulness, they were unable to participate in learning tasks. A second experiment, Nakajima et al. (2019), provides evidence that mice possess general alertness in their daily lives. The animals were trained to respond to a sound differently than to a light signal. They were also capable of interpreting a third signal that indicated whether they should focus on the sound or the light signal. Given that the mice learned this quickly, it is clear that they have acquired learning with focused conscious attention and therefore possess general alertness.

 

"Homo Prospectus" (Seligman et al. 2016) asserts that the unrivalled human ability to be guided by imagining alternatives stretching into the future – "prospection" – uniquely describes Homo sapiens. It addresses the question of how ordinary conscious experience might relate to these prospective processes, in contrast to psychology's 120-year obsession with memory (the past) and perception (the present), and its absence of serious work on such constructs as expectation, anticipation and will. Peter Railton cites:

 

Intuition: The moment-to-moment guidance of thought and action is typically intuitive rather than deliberative. Intuitions often come unbidden, and we can seldom explain just where they came from or what their basis might be. They seem to come prior to judgment, and although they often inform judgment, they can also stubbornly refuse to line up with our considered opinions.

Affect: According to the prospection hypothesis, our emotional or affective system is constantly active because we are constantly in the business of evaluating alternatives and selecting among them.

Information: A system of prospective guidance is information-intensive, calling for individuals to attend to many variables and to update their values continuously in response to experience.

 

They also see deliberative cognitive processes as intertwined with and integrated by intuitive processes:

 

One view, which we call the separate processors view, says intuition and deliberation are separate, distinct modes of thought. An opposing view says intuition and deliberation are thoroughly intertwined; deliberation is constructed with intuition as a main ingredient.  On this second view, there aren’t two independent processors. Rather, deliberation depends fundamentally on intuitive affective evaluations.

 

They associate imagination with the wandering mind, which we shall see is identifiable with the default mode network critical in ego dissolution and central to rehearsing survival strategies:

 

Think about what goes consciously through your mind during idle moments. This is mind-wandering, and it is deeply puzzling to theorists. The biggest puzzle is why we do so much of it. One study, which used experience sampling methods with 2,250 adults, found mind-wandering occurred in a remarkable 46.9% of the time points sampled.

 

On free will, the authors dodge the core philosophical debate, assuming that philosophers of all bents do embrace a form of free will, but instead pragmatically introduce the multiple-options question that plagues all environmental survival decisions:

 

We will argue that the distinctive mark of human freedom is latitude. Latitude refers to what agents have when the “size” of their option set is large. For now, we can say an agent has more latitude when the number of distinct options in the option set is larger. A bit later, we will provide a more refined account of how to understand the “size” of an option set.

 

Some anticipatory aspects of our conscious experience of the world make it possible for the brain to sometimes construct a present that has never actually occurred. In the "flash-lag" illusion, a screen displays a rotating disc with an arrow on it, pointing outwards. Next to the disc is a spot of light that is programmed to flash at the exact moment the spinning arrow passes it. Instead, to our experience, the flash lags behind, apparently occurring after the arrow has passed (Westerhoff 2013). One explanation is that our brain extrapolates into the future, making up for visual processing time by predicting where the arrow will be, however, rather than extrapolating into the future, our brain is actually interpolating events in the past, assembling a story of what happened retrospectively, as was shown by a subtle variant of the illusion (Eagleman and Sejnowski 2000).

 

Given the complementary roles of conscious quantum measurement and edge-of-chaos coherence dynamics, far from consciousness being an ephemeral state of a biological organism's brain dynamics that is irrelevant to the universe at large, the symbiotic cosmology asserts that consciousness has a foundational role in existential cosmology, complementary to the entire phenomenon of the physical universe. The conscious brain may also literally be the most complex functional system in the universe, so it manifests emergent properties undeveloped in other physical processes. This is not dualistic, but an extension of quantum wave-particle complementarity to a larger complementarity, in which mind is complementary to the universe as a whole. It is thus non-local in a more complete way than the quantum wave aspect is in complementation to the localised particle aspect.

  

Hopeful Monster 3: Consciousness as Integrated Information

 

Fig 92: Variations in recursive connectivity result in varying Φ.

 

Tononi and Koch's (2015; Tononi et al. 2016) integrated information theory (IIT) suggests a similar classification to the dynamical classification in the cosmology, running from states of limited human consciousness such as ketamine anaesthesia down through cephalopods and then to Siri, thus invoking AI as putatively conscious if it has the right integrative algorithms. IIT constructs its model by starting from experience itself, establishing its classification via five phenomenological axioms: intrinsic existence, composition, information, integration and exclusion. It predicts that consciousness is graded, is common among biological organisms and can occur in some very simple systems. It will thus discount purely computational AI systems as non-conscious, and makes a similar set of distinctions to those in the symbiotic cosmology. However, despite being based on characteristics of conscious behaviour, IIT becomes an abstract study of discrete probabilistic Markov systems, rather than of subjectivity itself.

 

However the ground of the theory is probabilistic information, as indicated by its axiomatic definitions:

Mechanism – Any subset of elements within a system that has cause-effect power on it (that is, that constrains its cause-effect space).
Cause-effect repertoire – The probability distribution of potential past and future states of a system that is specified by a mechanism in its current state.
Cause-effect space – A space with each axis representing the probability of each possible past and future state of a system.
Cause-effect structure – The set of cause-effect repertoires specified by all the mechanisms of a system in its current state.
Integrated information (Φ) – Information that is specified by a system that is irreducible to that specified by its parts. It is calculated as the distance between the conceptual structure specified by the intact system and that specified by its minimum information partition.
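
The flavour of "irreducibility to the parts" can be illustrated with a deliberately minimal toy. This is a hypothetical sketch using a mutual-information difference on a two-node network, not the actual IIT algorithm, which compares whole and partitioned cause-effect structures using an earth-mover's distance:

```python
import itertools, math

# Toy two-node binary network: next state A' = A XOR B, B' = A.
def step(state):
    a, b = state
    return (a ^ b, a)

states = list(itertools.product([0, 1], repeat=2))

def mutual_info(pairs):
    # Mutual information (bits) between x and y over uniformly weighted samples.
    n = len(pairs)
    px, py, pxy = {}, {}, {}
    for x, y in pairs:
        px[x] = px.get(x, 0) + 1 / n
        py[y] = py.get(y, 0) + 1 / n
        pxy[(x, y)] = pxy.get((x, y), 0) + 1 / n
    return sum(p * math.log2(p / (px[x] * py[y])) for (x, y), p in pxy.items())

# Whole system: how much the current state specifies the next state (bijective map).
whole = mutual_info([(s, step(s)) for s in states])

# Partitioned system: each node considered alone, its partner marginalised away.
part_a = mutual_info([((a,), step((a, b))[0:1]) for a, b in states])
part_b = mutual_info([((b,), step((a, b))[1:2]) for a, b in states])

# Toy "integration": information the whole generates beyond its cut-apart parts.
phi_toy = whole - (part_a + part_b)
print(phi_toy)  # 2.0 bits: neither node alone predicts anything; only the whole does
```

Cut this little network anywhere and all of its predictive information vanishes, which is the intuition behind Φ as irreducibility, though the real measure is taken over cause-effect repertoires under the minimum information partition rather than this simple subtraction.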

 

Yaden et al. (2021) point out some of the problems with this kind of model in the context of psychedelics: “Although it would be interesting to investigate how psychedelic states relate to Φ, it is not clear how this would improve our understanding of the hard problem of consciousness.”  They note, for example, that relatively simple digital logic gates (e.g., XOR gate), which intuitively seem non-conscious, can generate large amounts of Φ (Cerullo, 2015) stating “It is also not clear that the assertion of complexity itself being a measure of consciousness is tenable.” This is a natural critique of the IIT model in that, despite being an attempt to reason in the subjective, its sole basis in stochastic information cannot solve the hard problem, although it receives some philosophical support (Fallon & Blackmon 2021).

 

Fig 93: IIT sets out five axioms attempting to capture the principles by which an informational system needs to operate to model what we know of organismic conscious experience. This does provide a structural analysis consistent with prominent features of conscious experience as an integrated phenomenon. It accommodates, for example, octopus consciousness. However its analysis is based on abstract causal systems which are required to be composed of hierarchies of interacting subsets in a cause–effect repertoire. This effectively recreates the combination problem associated with attempts to structure panpsychism, here in the abstract form of Markov systems. Such abstract systems are thus subject to the same criticism the hard problem levels at all objective descriptions: that they are categorically incapable of subjectivity.

 

Christof Koch (2014), in Scientific American, introduces IIT as a form of panpsychism that spans the animal kingdom:

 

All species—bees, octopuses, ravens, crows, magpies, parrots, tuna, mice, whales, dogs, cats and monkeys—are capable of sophisticated, learned, nonstereotyped behaviors that would be associated with consciousness if a human were to carry out such actions. For instance, bees are capable of recognizing specific faces from photographs, can communicate the location and quality of food sources to their sisters via the waggle dance, and can navigate complex mazes with the help of cues they store in short-term memory (for instance, after arriving at a fork, take the exit marked by the color at the entrance). Bees can fly several kilometers and return to their hive, a remarkable navigational performance. And a scent blown into the hive can trigger a return to the site where the bees previously encountered this odor. … Given the lack of a clear and compelling Rubicon separating simple from complex animals and simple from complex behaviors, the belief that only humans are capable of experiencing anything consciously seems preposterous.  A much more reasonable assumption is that until proved otherwise, many, if not all, multicellular organisms experience pain and pleasure and can see and hear the sights and sounds of life.

 

He notes that Darwin himself inquired into the mental powers of earthworms:

 

None other than Charles Darwin, in the last book he published, in the year preceding his death, set out to learn how far earthworms acted consciously and how much mental power they displayed. Studying their feeding and sexual behaviors for several decades—Darwin was after all a naturalist with uncanny powers of observation—he concluded that there was no absolute threshold between lower and higher animals, including humans, that assigned higher mental powers to one but not to the other.

 

As we also know from Smith (1978), Darwin took this even further, to the coelenterates and all animals:

 

"To see a puppy playing [one] cannot doubt that they have free-will" 

and if "all animals, then an oyster has and a polype" (Darwin).

 

He concedes that the explanatory gap between subjective consciousness and the physical world description cannot be simply bridged by complexity:

 

Yet the mental is too radically different for it to arise gradually from the physical. This emergence of subjective feelings from physical stuff appears inconceivable and is at odds with a basic precept of physical thinking, the Ur-conservation law—ex nihilo nihil fit. So if there is nothing there in the first place, adding a little bit more won't make something. If a small brain won't be able to feel pain, why should a large brain be able to feel the god-awfulness of a throbbing toothache? Why should adding some neurons give rise to this ineffable feeling? The phenomenal hails from a kingdom other than the physical and is subject to different laws. I see no way for the divide between unconscious and conscious states to be bridged by bigger brains or more complex neurons. … A more principled solution is to assume that consciousness is a basic feature of certain types of so-called complex systems (defined in some universal, mathematical manner).

 

He then leaps to panpsychism, as a philosophical viewpoint consistent with these perspectives:

 

Panpsychism is an elegant explanation for the most basic of all brute facts I encounter every morning on awakening: there is subjective experience. Tononi's theory offers a scientific, constructive, predictive and mathematically precise form of panpsychism for the 21st century. It is a gigantic step in the final resolution of the ancient mind-body problem.

 

But he then makes a broad critique of panpsychism’s known deficiencies:

 

Yet, as traditionally conceived, panpsychism suffers from two major flaws. One is known as the problem of aggregates. Philosopher John Searle of the University of California, Berkeley, expressed it recently: "Consciousness cannot spread over the universe like a thin veneer of jam; there has to be a point where my consciousness ends and yours begins." Indeed, if consciousness is everywhere, why should it not animate the iPhone, the Internet or the United States of America? Furthermore, panpsychism does not explain why a healthy brain is conscious, whereas the same brain, placed inside a blender and reduced to goo, would not be. That is, it does not explain how aggregates combine to produce specific conscious experience.

 

He then uses panpsychism to launch a justification of Integrated Information Systems as a theory:

 

These century-old arguments bring me to the conceptual framework of the integrated information theory (IIT) of psychiatrist and neuroscientist Giulio Tononi of the University of Wisconsin–Madison. It postulates that conscious experience is a fundamental aspect of reality and is identical to a particular type of information—integrated information. Consciousness depends on a physical substrate but is not reducible to it. That is, my experience of seeing an aquamarine blue is inexorably linked to my brain but is different from my brain.

 

However, in a detailed review comparing the GNW and IIT theories, Maillé & Lynn (2020), in "Reconciling Current Theories of Consciousness", conclude that GNW concords more closely with the empirical evidence:

 

Proponents of IIT point to its explanatory power: for instance, it can explain why the cortex is capable of producing conscious experience while the cerebellum is not, even though the cerebellum possesses up to four times more neurons. While the IIT has not received unambiguous validation (possibly due to the abstract nature of its description of consciousness), it provides one of the most detailed accounts for the emergence of conscious experience from an information-processing network.

 

The GNW theory (Dehaene and Changeux, 2011), in contrast to the IIT, was empirically derived from EEG and imaging studies in humans and primates. These studies have shown that when a stimulus is presented but not consciously perceived, activation can be seen mainly in the associated primary sensory cortices. When the stimulus is consciously perceived, however, activation in primary cortical areas is followed by a delayed “neural ignition” in which a sustained wave of activity propagates across prefrontal and parietal association cortices. According to the GNW model, this allows relevant information to be broadcast across the brain to other subsystems for use in decision-making, reporting, memory consolidation, and other processes.

 

Unfortunately, while both IIT and GNW have obtained experimental support, testable predictions from both theories are seldom compared within the same dataset. In a recent issue of The Journal of Neuroscience, Noel et al. (2019) leveraged a previously published experimental data-set to directly compare IIT and GNW at the single-unit level.

 

Noel et al. (2019) did not find large enough pools of integrative and convergent neurons in vPM to generate sufficient statistical power, so this analysis was restricted to S1 neurons. The authors reasoned that if integrative neurons underlie conscious perception, then their multisensory representations should more closely track the state of consciousness than those of convergent neurons. Contrary to this, 69% of convergent neurons but only 37.1% of integrative neurons changed their multisensory response classification after propofol administration. Noel et al. (2019) additionally considered single-neuron physiological properties, including Lempel–Ziv complexity (a measure of the statistical complexity of stimulus-driven responses) and noise correlations (the amount of shared response variability between neurons). They found that both of these measures were less correlated with consciousness state in integrative neurons than in convergent neurons. Together, these findings argue against the IIT theory of consciousness.
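
The Lempel–Ziv complexity used by Noel et al. can be sketched concretely. The following is a minimal illustrative implementation of the LZ76 phrase-counting parse (published analyses typically binarize neural responses around a threshold and normalize the count); the function name is illustrative:

```python
def lz76_complexity(seq):
    """Count phrases in a Lempel–Ziv (1976) parse of a symbol sequence.

    Each new phrase is the shortest prefix of the remaining sequence
    that has not yet occurred as a substring of everything seen so far.
    Regular signals parse into few phrases; irregular ones into many.
    """
    s = ''.join(map(str, seq))
    i, phrases = 0, 0
    while i < len(s):
        k = 1
        # extend the candidate phrase while it already appears earlier
        while i + k <= len(s) and s[: i + k - 1].find(s[i : i + k]) != -1:
            k += 1
        phrases += 1
        i += k
    return phrases

# constant and periodic sequences are highly compressible...
print(lz76_complexity("0000000000"))        # → 2
print(lz76_complexity("0101010101"))        # → 3
# ...while an irregular sequence parses into more phrases
print(lz76_complexity("0110100110010110"))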

 

Bayne and Carter (2018) also critique the model, in dealing with whether conscious states can be assigned levels, exemplified by the idea that psychedelics induce a "higher" state of consciousness: "Advocates of IIT are explicitly committed to the unidimensional view of conscious states, for they equate a creature's conscious state with its level of consciousness, and degrees of consciousness, according to IIT, are in turn understood in terms of the amount of integrated information Φ. The considerations advanced in this paper raise questions about the plausibility of this view, for we have seen that global states cannot be ordered along a single dimension."

 

"The only dominant theory we have of consciousness says that it is associated with complexity — with a system's ability to act upon its own state and determine its own fate. The theory states that it could go down to very simple systems. In principle, some purely physical systems that are not biological or organic may also be conscious" (Christof Koch).

 

The final knell comes from Merker et al. (2022):

 

Giulio Tononi's integrated information theory (IIT) proposes explaining consciousness by directly identifying it with integrated information. We examine the construct validity of IIT's measure of consciousness, phi (Φ), by analyzing its formal properties, its relation to key aspects of consciousness, and its co-variation with relevant empirical circumstances. Our analysis shows that IIT's identification of consciousness with the causal efficacy with which differentiated networks accomplish global information transfer (which is what Φ in fact measures) is mistaken. This misidentification has the consequence of requiring the attribution of consciousness to a range of natural systems and artifacts that include, but are not limited to, large-scale electrical power grids, gene-regulation networks, some electronic circuit boards, and social networks. Instead of treating this consequence of the theory as a disconfirmation, IIT embraces it.

 

A more recent study (Cogitate Consortium et al. 2023) compared integrated information theory (IIT) against global neuronal workspace theory (GNWT) and found that the results did not perfectly match either theory. This has since resulted in an open letter treating IIT in particular as pseudoscience (Fleming et al. 2023, Lenharo 2023b), and caused David Chalmers to win his bet against Christof Koch over whether the mechanism by which the brain's neurons produce consciousness would be discovered by 2023.

 

The latest version of the theory, IIT 4.0 (Albantakis et al. 2023), incorporates improvements to its formalism. However, Wikipedia notes that, while the theory has already been applied to explain the level and contents of experience in certain situations, it remains in development and still requires further validation and testing in cases where the level and contents of experience are well known (e.g. in awake humans capable of reporting, exposed to natural sensory stimuli). Despite significant interest, IIT remains controversial and has been widely criticised, including claims that it is unfalsifiable pseudoscience. Proponents counter that there is some experimental support for it, but the fundamental validity of some of the tests used is questioned by critics.

 

Anil Seth, while supportive of the theory, claiming "conscious experiences are highly informative and always integrated" and that "one thing that immediately follows is that you have a nice post hoc explanation for certain things we know about consciousness", also has fundamental caveats: "the parts of IIT that I find less promising are where it claims that integrated information actually is consciousness — that there's an identity between the two."

 

On its new axiomatic basis, IIT 4.0 proceeds by considering whether experience—phenomenal existence—has some axiomatic or essential properties, properties that are immediate and irrefutably true of every conceivable experience. Drawing on introspection and reason, IIT identifies the following five:

 

Intrinsicality – Experience is intrinsic: it exists for itself.

Information – Experience is specific: it is the way it is.

Integration – Experience is unitary: it is a whole, irreducible to separate experiences.

Exclusion – Experience is definite: it is this whole.

Composition – Experience is structured: it is composed of distinctions and the relations that bind them together, yielding a phenomenal structure.

Existence – The substrate of consciousness must have cause–effect power: its units must take and make a difference.

 

The authors state that a scientific theory of consciousness should account for experience, which is subjective, in objective terms. Being conscious — having an experience — is understood to mean that there is "something it is like to be": something it is like to see a blue sky, hear the ocean roar, dream of a friend's face, imagine a melody flow, contemplate a choice, or reflect on the experience one is having.

 

IIT aims to account for phenomenal properties—the properties of experience—in physical terms. IIT's starting point is experience itself rather than its behavioral, functional, or neural correlates. Furthermore, in IIT "physical" is meant in a strictly operational sense — in terms of what can be observed and manipulated.

 

Building from this "zeroth" postulate, IIT formulates the five axioms in terms of postulates of physical existence:

Intrinsicality – The substrate of consciousness must have intrinsic cause–effect power: it must take and make a difference within itself.

Information – The substrate of consciousness must have specific cause–effect power: it must select a specific cause–effect state. This state is the one with maximal intrinsic information (ii), a measure of the difference a system takes or makes over itself for a given cause state and effect state.

Integration – The substrate of consciousness must have unitary cause–effect power: it must specify its cause–effect state as a whole set of units, irreducible to separate subsets of units. Irreducibility is measured by integrated information (φ) over the substrate's minimum partition.

Exclusion – The substrate of consciousness must have definite cause–effect power: it must specify its cause–effect state as this set of units. This is the set of units that is maximally irreducible, as measured by maximum φ. This set is called a maximal substrate, also known as a complex.

Composition – The substrate of consciousness must have structured cause–effect power: subsets of its units must specify cause–effect states over subsets of units (distinctions) that can overlap with one another (relations), yielding a cause–effect structure or Φ-structure ("Phi-structure").

 

Having determined the necessary and sufficient conditions for a substrate to support consciousness, IIT proposes an explanatory identity: every property of an experience is accounted for in full by the physical properties of the Φ-structure unfolded from a maximal substrate (a complex) in its current state, with no further or "ad hoc" ingredients.

 

Immediate criticisms at the axiomatic level render this analysis invalid right from the zeroth postulate, which asserts a priori that "a scientific theory of consciousness should account for experience, which is subjective, in objective terms", while remaining intentionally blind to the nature of subjectivity and its extreme category inconsistency with physicality.

 

The attempt to paraphrase this by asserting that subjectivity, expressed as being conscious, is "what it is like to be" something is merely to have a physical structure representing it, not to have the subjective experience at all. This is a purely functionalist view of consciousness in the sense of "conscious of", and it misrepresents the subsequent axioms. Saying experience is "intrinsic" simply means it "exists for itself"; any autonomous physical process, such as a wave on water, satisfies this. The fact that it is specifically "the way it is" becomes "information" in a tautologous self-definition.

 

We then enter into further confining assumptions. Experience is claimed to be structured in the same way as objective processes such as machines, claiming "it is composed of distinctions and the relations that bind them together", but this does not mean they yield a "phenomenal structure" that is subjectively experienced.

 

The last, existence, is revealing: the substrate of consciousness must have cause–effect power; its units must "take and make a difference". What "taking" a difference is remains anyone's guess, but making a difference is having conscious efficacy over the physical world, e.g. in intentional behaviour. This confines any such theory to be either pure physicalism, or a subjective–objective interactionist theory, as Symbiotic Existential Cosmology is.

 

Fig 92c: Identifying substrates of consciousness through the postulates of existence, intrinsicality, information, integration, and exclusion.

 

IIT thus aims to account for phenomenal properties — the properties of experience — in purely physical terms. IIT's starting point is experience, but its endpoint is axiomatically physical by identification. In IIT "physical" is meant in a strictly operational sense — in terms of what can be observed and manipulated. Thus abstract mathematical reality, probability and information as well, is identified with physicality, unilaterally assigning the abstract realm to physics.

  

To seal the fate of the entire exercise, the substrate axioms execute the same series of definitions, determining without any escape that the subjective aspect is now locked by equivalence to the physical. All of the substrate axioms repeat the mantra that each has cause–effect power, except for irreducibility, which directly asserts that it is "measured by integrated information (φ)", giving integrated information a priori status. Thus the entire statement of IIT is simply a single axiom, "subjectivity is physical": not a consciousness theory at all.

 

This becomes finally apparent in the critical step – the explanatory identity – every property of an experience is accounted for in full by the physical properties of the Φ-structure unfolded from a maximal substrate (a complex) in its current state, with no further or "ad hoc" ingredients. In Anil Seth's words, "it claims that integrated information actually IS consciousness — that there's an identity between the two".

 

If this isn't a bald statement of mind–brain identity theory, it is completely vacuous. There is no reference whatever to Levine's (1983) explanatory gap, to Chalmers' (1995) hard problem, or even to the fact that this is an attempt to solve the hard problem by solving a host of functional easy problems. It simply states IIT as an axiom, so that if a system's design satisfies the substrate axioms, as it is designed to do, then axiomatically, "my dear Watson", it has unravelled subjective consciousness.

 

Emergence of Organismic Subjective Experience in the Cambrian

 

The foundation of consciousness arises not from the cortex, but from the ascending reticular activating system or ARAS (a), a collection of up to 50 separate brain stem ganglia that give the brain both its resonant excitations and its modes of conscious activity, from waking through deep and dreaming sleep, and promote organismic survival through goal-seeking affects. The ARAS (fig 92d) is composed of several neural circuits connecting the dorsal part of the posterior midbrain and anterior pons to the cerebral cortex via distinct pathways that project through the thalamus and hypothalamus. Consciousness is thus supported by the most "drug-addled" foundation, one that also goes back in evolutionary terms to our single-celled eucaryote ancestors.

 

Fig 92d: Key components of the ascending arousal system.  

 

The thalamic pathway (orange) consists primarily of cholinergic acetylcholine neurons (aCH) in the pedunculopontine (PPT) and laterodorsal tegmental nuclei (LDT), which excite the thalamo-cortical-thalamic electrical and glutamatergic resonances (b) accompanying consciousness (Shepherd & Yamawaki 2021).

 

The hypothalamic pathway (red) is composed primarily of neurons that release the adrenergic monoamine neurotransmitter modulators dopamine (DA), norepinephrine (NA), serotonin (5HT), and histamine (His). These send neuromodulatory projections to the cortex – especially the prefrontal cortex. They arise from neurons in the monoaminergic cell groups, including the tubero-mammillary nucleus (TMN) containing histamine (His), the substantia nigra (A9) and the ventral tegmental area (A10) group containing dopamine (DA), the dorsal and median Raphe nuclei containing serotonin (5-HT), and the Locus coeruleus (LC) containing noradrenaline (NA). Dopamine is associated with motivation and fulfilment, and indirect dopamine effects from stimulants such as amphetamines can also cause sleep delay; serotonin is associated with mood and survival-oriented modulation, alleviating depression and inducing psychedelic states; nor-adrenaline with vigilance; and histamine with wakefulness.

 

The dopamine, nor-epinephrine and serotonin ascending pathways spread across the cortex (a, d) with dopamine in particular concentrated in the prefrontal region.  They enter different layers of the cortex, as can be seen in (d) (NA and 5HT only) and in (c), where the positions of serotonin (1a) and (2a), nor-adrenaline (2a) and dopamine (D1) receptors are shown on different parts of the pyramidal neuron. This pattern is largely repeated across the entire micro-columnar structure of the cortex. As shown in (c), the serotonin receptors are served by ascending fibres from the Raphe nuclei, so the global mode of serotonin modulation, in the absence of psychoactive molecules, is driven from the basal brain. Orexin (ORX) neurons of the lateral hypothalamus, opposed by melanin concentrating hormone (MCH), act as a flip-flop between sleep and wakefulness, driven by ascending reticular pathways of aCH, his, 5HT and NA. The ORX innervate every component of the ascending reticular activating system and coordinate activity within the entire system. The failure of these leads to narcolepsy through loss of coordination of the sleep-wakefulness cycle.

 

Receptors for the very same molecules in the ARAS are pivotal in the span of key psychoactive drugs (fig 41), from antipsychotics, which are like an elephant sitting on the entire receptor keyboard, as broad-spectrum antagonists, with multiple negative and damaging side effects, including sometimes irreversible disabilities such as tardive dyskinesia and increased overall mortality, through tricyclic antihistamines and antidepressants which have similar indiscriminate activity, to more recent SSRI and SNRI antidepressants which are less harmful but still cause psychological dependence in long-term use. By contrast psychedelics are focused on serotonin super-agonism of the 5HT2a and 5HT2c receptors with mild agonism of dopamine and nor-adrenaline receptors adding rewarding but also a degree of anxiogenic features. The 5HT2a receptors are centrally involved in psychedelic states, while the 5HT2c receptors promote anxiety and relieve depression.

 

Psychedelic super-agonism is a paradoxical effect which sets off a different metabotropic pathway from serotonin itself, modulating associated glutamate mGluR2 receptor expression and resulting in increased neurodynamic linkages between brain regions, crossing sensory and other regions and increasing signal complexity and decoherence. MDMA has the effect of causing the serotonin reuptake transporter (SERT) to reverse, dumping large amounts of serotonin into the synaptic cleft, resulting in its title "ecstasy", but it is also liable to cause oxidative damage because of its methylenedioxy molecular structure.

 

The most rostral neurons in the brain with a major role in sleep control are γ-aminobutyric acid cells located in the basal forebrain and in the anterior hypothalamus (GABA in a). These GABAergic cells are unique: while most neurons tend to have minimal activity during non–rapid eye movement (NREM) sleep, these cells are more active during NREM sleep than they are in rapid eye movement (REM) dreaming sleep or in waking. They also increase discharge rates with sleep onset and continue to release GABA at a high level while sleep continues. Serotonin, histamine, and norepinephrine cells are normally silent during REM sleep and active in waking. GABA applied to the serotonin and norepinephrine cell groups triggers REM sleep. Acetylcholine is at its strongest both during REM sleep and in waking.

Dopamine also appears to be involved in the sleep–wakefulness cycle.

 

Complementing the ascending cholinergic reticular activating system, which evokes the energetic resonances between the thalamic nuclei and the cerebral cortex that we associate with active consciousness, is the envelope of the cerebral cortex. With the exception of prefrontal areas involved in attention, it is organised as an experiential filter on ascending activity, in six cortical layers, distributed across the cortex in micro-columns, which despite the extreme variations of visual, auditory and somato-sensory processing all have a common architecture and physiology. This cortical experiential filter under normal operation eliminates the subterranean level of processing to form a perceptual internal model of reality spanning all dimensions of subjective experience, external and internal. This means that we don't see the physical world of wave-particles directly as it is, but an inner representation of it, e.g. through three-colour vision based on the versions of rhodopsin in the cones of the retina, augmented by higher-level visual processing areas into veridical 3D vision, symphonic spectral audition, alluring olfaction of hundreds of receptor dimensions, and subtle tactile through to emotional and internal organismic sensations.

Because some areas of the cortex include information about self and other, these also include recursive experiences of autonomous agency over the world around us, distinct from consciousness itself. As well, partly through culture and language, we have developed higher processes of cognition, analytical thought and reason, extending from natural intuition of the environment around us and our relationships with other agents. This internal model of reality, although located biologically in the brain, becomes the holy grail of subjective experience: a kind of controlled hallucination we call subjective conscious experience.

 

Fig 92e: The parallel development of the pallium, striatum and pallidum in vertebrates (Boraud, Leblois & Rougier 2018). In birds, neuronal density in the pallium is twice as high as in mammals and equivalent to primates (Olkowicz et al. 2016).

 

We have also noted the cyclic interaction of the cortex with the thalamus and basal ganglia (striatum, pallidum) in fig 83. This has been cited (Boraud, Leblois & Rougier 2018) as the basis for parallel evolutionary relationships running all the way from stem vertebrates such as the lamprey through to both birds and mammals as climax intelligences.

 

Barron & Klein (2016) in a carefully researched analysis, propose that insects, and by extension arthropods, have the capacity for subjective experience, the core feature of consciousness. In vertebrates the capacity for subjective experience is supported by integrated structures in the midbrain that create a neural simulation of the state of the mobile animal in space. This integrated, egocentric representation of the world from the animal’s perspective is sufficient for subjective experience. Structures in the insect brain perform analogous functions, implying the insect brain also supports the capacity for subjective experience. In both vertebrates and insects, this form of behavioral control system evolved as an efficient solution to basic problems of sensory reafference (resolving the confusing sensory input caused by self-motion) and true navigation. The brain structures that support subjective experience in vertebrates and insects are very different from each other, but in both cases, they are basal to each clade. The origins of subjective experience can thus be traced at least as far back as the Cambrian.

  

Fig 93: (1) Decision making involves an assessment of what is needed and where and how the needed resources can be obtained. Decision making can therefore be considered to involve three domains: internal motivations, target selection, and action selection. (2) Structures of the vertebrate midbrain (not to scale) supporting the behavioural “core control system.” The vertebrate midbrain supports an integrated multisensory model of the state of the animal in space, which supports effective decision making. (3) The structures of the insect brain create an integrated neural model of the state of the insect in space that is functionally analogous to that described for the vertebrate brain. (4) Corresponding regions of the basal ganglia (left) and insect central complex (right) and their associated regions are aligned, as are their relevant connections. Inhibitory pathways are shown in red, dopaminergic pathways in black, and other pathways (excitatory or modulatory) in blue.  (5) The mammalian striatum consists of two principal subunits. Striosomes within the matrix of the striatum [in mauve, purple] are associated with discrete volumes of the striatal matrix, called matrisomes [brown, yellow]. In mammals, striosomes receive inputs from the hippocampus and amygdala via the frontal cortex, among other brain regions, carrying information about internal physiological states (modulations, memories). Matrisomes are supplied by cortical representations of sensory space and sensory modalities. Interactions and associations among matrisomes, and between matrisomes and striosomes, are mediated by local interneurons (blue) that integrate and provide information to striatal afferents supplying direct and indirect GABAergic pathways (red arrows) to the globus pallidus and subthalamic nucleus. 
The insect fan-shaped body from Mantis religiosa, labeled with antisera against allatostatin (green) and β-tubulin (ochre), consists of two principal subunits: tangential stratifications (schematized in mauve, purple) that intersect columnar modules (yellow, brown).

 

Strausfeld & Hirth (2013) have, in a more focussed study, also shown that the arthropod central complex and vertebrate basal ganglia derive from comparable embryonic basal forebrain lineages that are specified by an evolutionarily conserved genetic program leading to interconnected neuropils and nuclei that populate the midline of the forebrain-midbrain boundary region. In the substructures of both the central complex and basal ganglia, network connectivity and neuronal activity mediate control mechanisms in which inhibitory (GABAergic) and modulatory (dopaminergic) circuits facilitate the regulation and release of adaptive behaviors (Graybiel & Matsushima 2023). Both basal ganglia dysfunction in vertebrates and central complex dysfunction in arthropods result in behavioral defects, including motor abnormalities, impaired memory formation, attention deficits, affective disorders, and sleep disturbances. The observed multitude of similarities suggests deep homology of arthropod central complex and vertebrate basal ganglia circuitries underlying the selection and maintenance of behavioral actions.

 

An anatomical validation of the integral role of the brainstem in consciousness is provided by Bjorn Merker (2007):

 

The principal macrosystems of the vertebrate brain can be seen to form a centralized functional design in which an upper brain stem system organized for conscious function performs a penultimate step in action control. This upper brain stem system retained a key role throughout the evolutionary process by which an expanding forebrain – culminating in the cerebral cortex of mammals – came to serve as a medium for the elaboration of conscious contents. This highly conserved upper brainstem system, which extends from the roof of the midbrain to the basal diencephalon, integrates the massively parallel and distributed information capacity of the cerebral hemispheres into the limited-capacity, sequential mode of operation required for coherent behavior. It maintains special connective relations with cortical territories implicated in attentional and conscious functions, but is not rendered nonfunctional in the absence of cortical input. This helps explain the purposive, goal-directed behavior exhibited by mammals after experimental decortication, as well as the evidence that children born without a cortex are conscious. Taken together these circumstances suggest that brainstem mechanisms are integral to the constitution of the conscious state, and that an adequate account of neural mechanisms of conscious function cannot be confined to the thalamocortical complex alone.

 

Fig 93b: A man with hydrocephalus and almost no cortical tissue,
who nevertheless was married with children and worked in the civil service (Feuillet et al. 2007).

 

Barron & Klein (2016) note:

 

There is now considerable evidence that, in humans, subjective experience can exist in the absence of self-reflexive consciousness, and that the two are supported by different neural structures. Midbrain structures, rather than cortex, seem to be especially important. Merker (2005, 2007), Parvizi and Damasio (2001), Damasio and Carvalho (2013), and Mashour and Alkire (2013) have argued that the integrated structures of the vertebrate midbrain are sufficient to support the capacity for subjective experience. Merker (2007) notes that subjective experience is remarkably sensitive to damage to midbrain structures. Conversely, there is evidence of preserved consciousness even in patients who lack a cortex (Merker 2005). Further, although cortical damage can have profound effects on the contents of consciousness, damage to any portion of the cortex alone can spare the basic capacity for subjective experience.

 

Merker suggests that an important function of the midbrain is to combine interoceptive (stimuli arising from within the body) and exteroceptive (stimuli external to the body) sensory information. Information on the environment, and the location and movement of the animal within it, is processed within the roof of the midbrain (the tectum, or colliculus in mammals). Information about homeostatic needs is processed within the floor of the midbrain (the hypothalamus and associated structures). Nuclei located between these poles integrate this information to produce a unified multimodal neural model of the spatial location of resources relative to the animal, which is coupled to, and weighted by, the extent to which different resources are needed by the animal. Vertebrates organize their behavior by reference to this integrated model of the environment rather than by reacting to independent sensory inputs.

 

The insects are an extremely diverse group, but all insect brains have a common anatomical plan. The nervous system contains an enlarged cephalic ganglion (a brain) specialized for sensory processing and integration. This is linked by paired ventral nerve cords to a series of smaller ganglia for the thoracic and abdominal body segments. The insect nervous system has frequently been stereotyped as decentralized, with the cephalic ganglion acting simply as a region of sensory input that triggers motor responses organized by the segmental ganglia. This interpretation is incorrect and outdated. The insect brain resolves action and target selection, processes sensory information, and clearly executes a command function over the behavioral system.

 

However, they deny that more “primitive” organisms show the features they associate with organismic subjective experience, although this judgment rests purely on anatomical and behavioural grounds, in a way that underestimates the capacity of even primitive hydra for “innovative” coordinated adaptive behaviours (fig 256):

 

The cubozoan box jellyfish Tripedalia cystophora shows dynamic behavior that includes actively hunting its prey. It has a well-developed geosense and visual sense thanks to 24 lensed eyes distributed around the body, yet the nervous system is entirely decentralized. Sense organs independently modulate activity in regions of the sensory net and muscle walls to steer the animal. The outcome of local sensory input acting on local muscle activity is an adaptive change in the swimming direction and speed of the animal, but the simple behavioral control system is entirely decentralized. Although such systems manifest the appearance of adaptive and dynamic targeting, they are nothing more than simple decentralized stimulus–response systems. They could not reasonably be expected to support any form of subjective experience.

 

The nematode Caenorhabditis elegans possesses a famously simple and well-characterized centralized nervous system that can integrate input from an array of thermo-, mechano-, chemo-, and nociceptors, along with an interoceptive sense of time to organize directed movements. Sensory inputs are integrated at the level of an array of interneurons that compete for activation of the motor neurons. The simple nervous system of C. elegans can also support various forms of learning such as habituation and classical associative conditioning, enabling adaptive change in behavior with experience. Nematodes are thus able to integrate multiple forms of sensory input using a centralized nervous system, but it seems that their behavior is organized as responses either to the immediate sensory environment or to immediate interoceptive signals of physiological state. Action selection in nematodes is driven by shifts in global brain dynamics, and there is sufficient plasticity in the nematode nervous system for their responses to vary with system state. However, there is no evidence that nematodes can actively hunt for things beyond their immediate sensory environment.

 

Hopeful Monster 4: Is Consciousness just Free Energy on Markov Landscapes?

 

Solms and Friston (2018) have proposed a model of consciousness, based on previous ideas of the mid-brain as a basis for consciousness, in which the brain acts as a prediction machine endeavouring to minimise the free energy difference between the actual state of the world and its predicted state. Friston (2010) is the architect of a major theory of brain function based on these ideas. Higher-level areas of the nervous system (i.e., higher-order cortical structures) generate top-down synaptic “predictions” aimed at matching the expected bottom-up synaptic activity at lower-level areas, all the way down to ‘input’ activity at sense organs. Top-down signals encode a kind of “best guess” about the most likely (hidden) causes of bodily sensations.

 

The model is again based on abstract stochastic processes. A Markov blanket (Kirchhoff et al. 2018) defines the boundaries of a system (e.g. a cell or a multi-cellular organism) in a statistical sense in a way that can be used to define homeostatic and adaptive processes and can be recursive as in a multicellular organism. It is a statistical partitioning of a system into internal states and external states, where the blanket itself consists of the states that separate the two, constituting a statistical boundary that sets something apart from that which it is not. This shows that internal and external states are conditionally independent, as they can only influence one another via active and sensory states. The states that constitute the Markov blanket can be further partitioned into active and sensory states.
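
This partition can be made concrete in a toy simulation. The sketch below is illustrative only (it is not from Solms and Friston's papers): the linear couplings are hypothetical, chosen so that internal states read only sensory states, and external states are perturbed only by active states. Conditional independence then shows up as the internal trajectory being fully determined by the sensory (blanket) record alone.

```python
import random

# Toy Markov-blanket partition: external (x), sensory (s), active (a)
# and internal (m) states.  All coupling constants are hypothetical,
# chosen for illustration.

def step(x, s, a, m, noise):
    s = 0.9 * x + noise          # sensory states sample the external world
    m = m + 0.5 * (s - m)        # internal states track sensory input only
    a = -0.3 * m                 # active states are driven by internal states
    x = 0.8 * x + a              # external states feel only the active states
    return x, s, a, m

def run(x0, noises):
    """Return the internal-state trajectory and the sensory record."""
    x, s, a, m = x0, 0.0, 0.0, 0.0
    ms, ss = [], []
    for n in noises:
        x, s, a, m = step(x, s, a, m, n)
        ms.append(m)
        ss.append(s)
    return ms, ss

def replay_internal(ss):
    """Recompute the internal trajectory from the sensory record alone."""
    m, ms = 0.0, []
    for s in ss:
        m = m + 0.5 * (s - m)
        ms.append(m)
    return ms

random.seed(1)
noises = [random.gauss(0, 0.1) for _ in range(50)]
ms, ss = run(2.0, noises)
# Conditional independence: given the blanket (sensory) states, the
# internal trajectory is fully determined -- external states matter
# only through the blanket.
assert replay_internal(ss) == ms
```

The point of the replay check is that nothing about the external state x is needed to reconstruct the internal states once the blanket's sensory record is given, which is exactly the statistical separation the blanket formalises.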

 

They use both “subjective” and “objective” to refer to observational perspectives, so “subjective” here is not truly subjective, but internal observation. The subjective perspective “upon” the organism realises the “being” of the organism, which they call ‘interoceptive’. The objective perspective realises the “body” of the organism, which they call ‘exteroceptive’. They take an admittedly metaphysical position that neither of these observable realisations can be explained away by the other, which is fine. In other words, data about an organism derived from both interoceptive and exteroceptive perspectives must be reducible to one and the same set of explanations. This places each in its own parallel causal train, except that an assumption is made of an underlying unity from which both derive:

 

The starting point of my argument raises an interesting philosophical question. If body and mind are two appearances (aspects) of the same underlying thing, then what stuff is the underlying thing made of? In other words, using the analogy of thunder and lightning, what is the metapsychological equivalent of “electricity” (i.e., the thing that gives rise to thunder and lightning, both)?

 

We come to the devastating abstract crunch – “Therefore, biological explanations (as opposed to descriptions) are best formulated in neither interoceptive nor exteroceptive phenomenal terms, but rather as abstractions.” This converts the central complementarity of subjective consciousness and objective brain into complementarities of a different sort – interoceptive v exteroceptive observation or perception, and ascending neural pathways v cortical connections – neither of which is consistent with the original and fundamental subject-object complementarity at the heart of cosmology.

 

Fig 94: Solms-Friston model and Markov blankets. Predictive coding formulates free energy or surprise in terms of precision weighted prediction errors. A prediction error (e) here is the difference between a sensation (φ) produced by some action (M) and the sensation predicted by a generative model ψ(Q). Here, Q stands for internal expectations about or representations of hidden external states and ψ(Q) is the prediction of sensory inputs that would have been encountered given those external states, under the generative model. Under some simplifying assumptions, we can now associate free energy (F) with the amount of prediction error weighted by its precision (ω). Precision corresponds to the reliability, or inverse variance, of sensory fluctuations (in various modalities) and is an important aspect of inference; namely, the representation of uncertainty.
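
The caption's quantities can be put into a minimal numerical sketch. Assuming, purely for illustration, a linear generative model ψ(Q) = g·Q and free energy approximated as precision-weighted squared prediction error (both assumptions made here, not taken from the authors), perception becomes gradient descent on F with respect to the internal expectation Q:

```python
# Illustrative sketch of the figure's quantities: sensation phi,
# a (hypothetical) linear generative model psi(Q) = g * Q, and free
# energy F as precision-weighted squared prediction error.

g = 2.0          # hypothetical generative mapping: hidden state -> sensation
omega = 4.0      # precision: inverse variance of sensory fluctuations
phi = 1.6        # actual sensation

def psi(Q):
    return g * Q                 # predicted sensation under the model

def free_energy(Q):
    e = phi - psi(Q)             # prediction error
    return 0.5 * omega * e ** 2  # precision-weighted squared error

def minimise(Q, lr=0.05, steps=200):
    # Perception: descend the free-energy gradient with respect to Q.
    for _ in range(steps):
        e = phi - psi(Q)
        Q += lr * omega * e * g  # since dF/dQ = -omega * e * g
    return Q

Q = minimise(0.0)
assert abs(psi(Q) - phi) < 1e-6  # prediction error has been minimised
```

At the minimum the prediction matches the sensation exactly, which is the "Nirvana" limit the text discusses below: zero prediction error and, with fixed precision, minimal expected free energy.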

 

Their central claim is that their combined insights, invoking this entirely abstract stochastic process, yield a straightforward response to Chalmers’ question “why is there something it is like to be an organism, for the organism, and how does this something-it-is-like-ness come about?”. These two insights are: (1) that the primary function of consciousness is not to register states of the external world but rather to register the internal states of the experiencing subject; and (2) concerning minimal conditions – that a fundamental property of living things (i.e., biological self-organising systems) is their tendency to resist the second law of thermodynamics, and that this functional property “emerges naturally within any ergodic random dynamical system that possesses a Markov blanket”.

 

The first is not based in philosophy but on anatomical and physiological evidence, which suggests that consciousness is “quintessentially” interoceptive. Their argument goes as follows: conscious qualia arise primarily not from exteroceptive perception (i.e., vision, hearing, somatic sensation, taste and smell), and still less from reflective awareness of such representations, but rather from the endogenous arousal processes that activate them.

Exteroceptive representations are intrinsically unconscious – they do not inherently possess something-it-is-like-ness. They only acquire conscious quality when they are, in Chalmers’ words, “entertained” by the subject; i.e., when they are selectively activated by a more fundamental form of consciousness. In short, mental images can only be experienced by a conscious subject and they are in fact states of the conscious subject. The arousal processes that produce what is conventionally called wakefulness, in our view, therefore, constitute the experiencing subject – they are consciousness itself – explicitly the arousal functions of the centrencephalic structures that sustain wakefulness and behavioural responsivity, which in turn supply the conscious character of some higher cortical functions. The latter perceptual and cognitive functions (which are otherwise typically unconscious) derive their consciousness absolutely from the centrencephalic region.

 

This is fine as a description of the relationship between ascending pathways, such as the reticular activating system, and the cortex, and it underscores the relationship between thalamic circuits as drivers of activity and cortical circuits as responsive constraints. However, identifying consciousness itself with the ascending pathways is not accurate physiologically in terms of active CNS dynamics. As exemplified in the EEG, we see cortical states, active as a whole, associated with conscious experiences, with the ascending pathways providing, as in their thermodynamic model, only a free-energy substrate.

 

In the Solms-Friston model, autonomous systems, including nervous systems, are modelled in terms of predictive coding, which formulates free energy or surprise in terms of precision-weighted prediction errors. They state specifically that the model, although claiming to solve the hard problem, follows the “Helmholtz school of medicine”, whose members swore an oath in 1842 to the effect that no forces other than the common physical chemical ones are at work in the organism. In the model, precision corresponds to the reliability, or inverse variance, of sensory fluctuations and is an important aspect of inference in the representation of uncertainty. Precision is the confidence placed in the (predicted) consequences of an action or in a source of sensory evidence. In the ideal adaptive state of the organism, where negentropic demand is met by optimal predictions – Nirvana – there are no prediction errors and the expected free energy is absolutely minimised: homeostasis with no uncertainty or entropy and infinite precision. They claim this scheme, with recurrent exchanges of (ascending) prediction errors and (descending) predictions, closely resembles empirical message passing in cortical and subcortical hierarchies. In this context, action reduces to proprioceptive (motor) and interoceptive (autonomic) reflexes that are driven by descending predictions from the brain’s (hierarchical) generative model. Precision controls the influence of prediction errors on action and perception.
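
The two routes for cancelling a prediction error – revising the prediction (perception) or acting so the world fulfils the descending prediction (reflex) – can be shown in a small toy, which is my own assumption-laden illustration rather than the published model:

```python
# Toy illustration: the same precision-weighted prediction error can be
# driven to zero either by updating the prediction (perception) or by
# changing the sensation via action (the reflex route in the text).

omega = 2.0                      # precision weighting of the error

def error(sensation, prediction):
    return omega * (prediction - sensation)

def perceive(sensation, prediction, lr=0.2, steps=100):
    # Perceptual route: descend the error by revising the prediction.
    for _ in range(steps):
        prediction -= lr * (prediction - sensation)
    return prediction

def act(sensation, prediction, lr=0.2, steps=100):
    # Active route: descend the same error by changing the sensation,
    # so the world comes to match the descending prediction.
    for _ in range(steps):
        sensation -= lr * (sensation - prediction)
    return sensation

p = perceive(sensation=3.0, prediction=0.0)   # prediction moves to the world
s = act(sensation=3.0, prediction=0.0)        # world moves to the prediction
assert abs(error(3.0, p)) < 1e-6
assert abs(error(s, 0.0)) < 1e-6
```

Both loops minimise the same quantity; which route dominates is, in the model, a matter of how precision weights the error's influence on perception versus action.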

 

They then note that physiologically, precision is usually associated with the postsynaptic gain of cortical neuronal populations reporting prediction errors, associated through free energy minimisation with selective arousal or attentional selection. They then claim it is precisely this neuromodulatory synaptic mechanism that is targeted by psychotropic and psychedelic drugs on the basis of Nour and Carhart-Harris (2017).

 

The picture is actually much more complicated. Both psychedelics and other agents, from dissociatives to stimulants such as amphetamines, have differing and varied effects on attention. While psychedelics are associated both with a drop in the default mode network and with sensory overload from upwelling activity, this isn’t easily analysed simply as prediction errors, nor as an overall change in thermodynamic free energy minimisation. It is also manifestly inconsistent to associate surprise and uncertainty only with their suppression. Cultural expressions from music to scientific discoveries are all intimately associated with both uncertainty and surprise.

 

The approach of minimising surprise, while it tallies with the avoidance of primary existential threats, is addressed not by homeostasis but by self-organised criticality at the edge of chaos, and there is no empirical basis for defining neural processes as stochastic Bayesian networks per se. Hence, like IIT, this model is analogical rather than causal.

 

Solms (2019) makes his homeostatic direction explicitly clear, citing personal experience dealing with subjects who have severe hydrocephalus and little cortical tissue, although some with seemingly empty cortices have small regions of cortical tissue having far more intense activity than normal:

 

I first expressed the view in 1997 that the problem of consciousness will only be solved if we reduce its psychological and physiological manifestations to a single underlying abstraction. It took me many years to realize that this abstraction revolves around the dynamics of free energy and uncertainty. Free energy minimization is the basic function of homeostasis, a function that is performed by the same brainstem nuclei that I was led to infer – like others, on independent (clinico-anatomical) grounds – were centrally implicated in the generation of consciousness. In other words, the functions of homeostasis and consciousness are realized physiologically in the very same part of the brain. This insight led to the collaborative work that enabled Friston and me to expand the variational free energy formulation of the mechanism of homeostasis to explain the mainspring of consciousness itself.

 

This viewpoint focuses on feeling, which is then identified with consciousness as a whole:

 

The function of experience cannot be inferred from perception and memory, but it can be inferred from feeling. There is not necessarily ‘something it is like’ to perceive and to learn, but who ever heard of an unconscious feeling—a feeling that you cannot feel?

This opens up a discussion of the fact that the reticular activating system, coupled with the limbic system – which is para-cortical, circuiting through the amygdala, hippocampus and cingulate – is the seat of consciousness as volitional affect (emotion):

 

Consciousness persists in the absence of cerebral cortex, as does volitional behaviour. As Damasio and Carvalho (2013) put it:

 

Decorticated mammals exhibit a remarkable persistence of coherent, goal-oriented behavior that is consistent with feelings and consciousness. Consciousness is obliterated by focal lesions of the brainstem core – in a region conventionally described as the extended reticulothalamic activating system (ERTAS). … If core brainstem consciousness is the primary type, then consciousness is fundamentally affective. The arousal processes that produce what is conventionally called ‘wakefulness’ constitute the experiencing subject. In other words, the experiencing subject is constituted by affect. … Although many cognitive scientists still must be weaned of the view that the cerebral cortex is the seat of consciousness, the weight of evidence for the alternative view – that the arousal processes generated in the upper brainstem and limbic system feel like something in and of themselves – is now overwhelming.

  

While these physiological details are important and correct, there are two critical flaws:

(1) Although the cortex may be electro-dynamically passive on its own, and the mid-brain may have strategically excitable properties consistent with intentional awareness, to claim consciousness is only root brain-stem afferent activation trivialises its nature and complexity, when all the elaborate details of the conscious experiences we have are clearly derived through the modulation of the cortex under the active excitation of the thalamo-cortical loop.

(2) David Chalmers’ philosophical description of subjectivity, as a fully conscious intact human would experience it, is misconstrued here as meaning that afferent processes “feel like something in themselves”. Chalmers is carefully stating what it is like to actually experience consciousness subjectively, not what the observation of afferent pathways is associated with in terms of anatomical dissection of function.

 

This equating of feeling with consciousness runs into all sorts of problems by disabling some key aspects of conscious experience in favour of others, not just in waking life but also in alternative mental states. Someone driving a car may or may not be centred on their feelings some of the time, or may be experiencing intense emotions likely to cause an accident, but for most people driving is a conscious sensory-motor experience. One of the most outstanding features of psychedelic visions is kaleidoscopic imagery, which one experiences both as real veridical perceptions “out there” and as a suppression of egotistical emotions, leading to quiescent feelings amid overwhelming perceptual content – sounds, scenes and geometrical patterns – which the person intimately experiences as consciousness expansion. The same is true of dreaming states, which are often profoundly visual and in which emotions may reach crisis point in perceived existential crises, but in no way is feeling as such ‘felt’ to be the sine qua non of conscious experience. There is also a fundamental basis to the notion that all forms of perception, both sensory and somatosensory, are part of the envelope of conscious experience, as are volition and the perception of intent. To thus identify the raw free energy of reticular activation as consciousness itself is a severe mischaracterisation.

 

Solms notes that this view is not shared by a long history involving the NCC or neural correlate of consciousness:

 

This assignment, that the NCC does not lie in the brain stem, continues to this day. Crick’s closest collaborator, Christof Koch, says of the deep brainstem nuclei that they are “enablers [of consciousness] but not content-providers”.

 

Markov blankets are then conflated with two central properties accompanying conscious volition – selfhood and intentionality:

 

Readers may have noticed already that the dynamics of a Markov blanket generate two fundamental properties of minds— namely (elemental forms of) selfhood and intentionality. It is true that these dynamics also generate elemental properties of bodies—namely an insulating membrane (the ectoderm of complex organisms, from which the neural plate derives) and adaptive behavior. This is a remarkable fact. It underpins dual-aspect monism.

 

One can understand that selfhood and intentionality are fundamental properties of all autonomous life forms from the first prokaryotes to Homo sapiens, but this doesn’t mean they constitute experiential conscious volition as we know it, or that the neural plate substrates of early development define consciousness, although I have shown that serotonin does provide such a role. However, claiming that this stochastic description of (sensory) input and (motor) output solves the hard problem in terms of conscious volition is the most tissue-thin analogy conceivable. The critical point remains that a purely abstract system is categorically inconsistent with actual subjectivity, just as objective physical processes are.

 

While the model views subjective conscious experiences as anticipatory top-down simulations attempting to minimise mismatch with bottom-up environmental boundary conditions – in a way consistent with the anticipatory role of consciousness in Symbiotic Existential Cosmology, and with the notion of the brain as an environmental filter rather than a physically causal computer – the subjective aspect is reduced to abstract stochastic homeostasis.

 

Hopeful Monsters 5: Can Teleological Thermodynamics Solve the Hard Problem?

 

Terrence Deacon in “Incomplete Nature: How Mind Emerged from Matter” sets out a descriptive teleological thermodynamics, which is an extension of Ilya Prigogine's (1984) concept of far-from-equilibrium thermodynamics in a three-layered structure of homeo-, morpho- and teleo-dynamics. These three categories actually coincide with (1) inanimate matter, (2) far-from-equilibrium stability structures such as in chemical biogenesis, and (3) living organisms. We are already intimately familiar with each of these, so the description is simply a thermodynamic recasting, which is insightful, but not empirically demonstrated in any proof-of-principle examples.

 

We already know that biological systems consist of fractal layers of organisation arising from the symmetry-breaking of the quantum forces as a consequence of non-linear charge energetics to interactively produce: quarks, hadrons, atomic nuclei, atoms, molecules with increasingly fractal cooperative weak-bonding structures, supra-molecular complexes such as the ribosome, organelles such as the membrane and Golgi apparatus, cells, tissues, organs such as the brain, organisms and the biosphere.

 

Teleodynamic work is the production of contragrade teleodynamic processes, which work in opposition to the usual orthograde direction that, in homeodynamic systems, leads to increasing entropy at equilibrium. An orthograde teleodynamic process is an end-directed process that will tend to occur spontaneously. By contrast, contragrade change is described as the natural consequence of one orthograde process influencing a different orthograde process — for example, via some intervening medium. This implies that in one sense, all change ultimately originates from spontaneous thermodynamic processes controlled passively by constraints.
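
The orthograde/contragrade distinction can be caricatured in a toy numerical sketch, which is my own illustration and not Deacon's formalism: one variable relaxing spontaneously toward its equilibrium is coupled to a second, driving it "contragrade" – away from its own equilibrium – much as one spontaneous gradient can do work against another through an intervening medium.

```python
# Toy sketch: variable a relaxes spontaneously (orthograde); part of
# that relaxation flow is diverted through a coupling to push variable
# b uphill (contragrade), against b's own slow leak back to equilibrium.

def simulate(steps=500, dt=0.01, coupling=0.8):
    a, b = 10.0, 0.0           # a starts far from its equilibrium (0)
    for _ in range(steps):
        flow = a * dt          # orthograde relaxation of a toward 0
        a -= flow
        b += coupling * flow   # diverted flow drives b away from 0
        b -= 0.1 * b * dt      # b's own orthograde leak back toward 0
    return a, b

a, b = simulate()
assert a < 0.1                 # the driving gradient has nearly relaxed
assert b > 1.0                 # b has been driven contragrade, uphill
```

Nothing here is more than two coupled relaxation equations; the point is only that "uphill" change in one process requires a spontaneous gradient somewhere else, which is the sense in which all change ultimately originates from orthograde processes constrained by couplings.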

 

Fig 95: Left: Nested hierarchy of three emergent levels of dynamics and their emergence from subvenient physical processes. Right: Three nested conceptions of information. Shannon information is the most minimal and the most basic. Referential information is emergent (e) from Shannon information, and significant—or useful—information is emergent from referential information.

 

Yes, biogenesis and biological evolution are teleologically directed towards diversity, and yes, evolution is a process by which adventitious mutation is sequestered in the genome and becomes available as significantly useful information through natural and sexual selection. In this sense “Incomplete Nature” is a self-confessed description of biogenesis, evolution and the constraints on organismic development, rather than of mind or consciousness, and here it does have descriptive insightfulness. However, it is subtly similar in its cognitive respect to Daniel Dennett's multiple evolutionary drafts model, now finessed by Terrence into a more concordant and appealing wrapping. In this respect Bernard Baars' description of the Cartesian Theatre of working memory, which is surely the most animistic description in neuroscience, has a more appealing rationale, because it is so richly populated with conceptual actors having the personae of living agents.

 

Deacon then applies this directly to conscious intentional actions. For illustration, reading exemplifies the logic of teleodynamic work. A passive source of cognitive constraints is potentially provided by the words on a page. A literate person structures their sensory and cognitive habits to reorganise the neural activities constituting thinking. This enables them to do teleodynamic work to shift mental tendencies away from those that are spontaneous (such as daydreaming) to those that are constrained by the text:

 

Although teleodynamic processes are incredibly complex, and an explanation of the structure of teleodynamic work is by far the most elaborate—since it is constituted by special relationships between forms of morphodynamic work—it is also the most familiar. So it may be helpful to first consider the human side of teleodynamic work before delving into the underlying dynamical structure of this process. Teleodynamic work is what we must engage in when trying to make sense of an unclear explanation, or trying to produce an explanation that is unambiguous.

 

In cognitive terms, orthograde teleodynamic processes may be expressed as goal-directed innate adaptive behaviors, spontaneous emotional tendencies, learned unconscious patterns of behavior, stream-of-consciousness word associations, and so forth. In social terms, orthograde teleodynamic processes may be expressed as common cultural narratives for explaining events, habits of communication developed between different groups or classes of individuals, conventionalized patterns of exchange, and so on.

 

Here is where there is a literally scorpion-like sting in the tail of Terrence's very entertaining story round the camp fire. It lies in the very insight of the utility of the zeros that he astutely notes arise from the hard problem of consciousness and other manifestations of subjectivity – rather like Carlos Castaneda's “holes between the sounds” in "The Teachings of Don Juan" – in that they don't appear to arise from the reductionistic physical description.

 

This revolves around his notion of entention:

 

I propose that we use the term ententional as a generic adjective to describe all phenomena that are intrinsically incomplete in the sense of being in relationship to, constituted by, or organized to achieve something non-intrinsic. By combining the prefix en- (for “in” or “within”) with the adjectival form meaning something like “inclined toward,” I hope to signal this deep and typically ignored commonality that exists in all the various phenomena that include within them a fundamental relationship to something absent.

 

Terrence’s story starts out with great hope for conscious existence, invoking the possibility of causal openness:

 

"This opens the door to an emergent capacity to generate ever more complex, unprecedented forms of work, at progressively higher-order levels of dynamics, thereby introducing an essentially open-ended possibility of producing causal consequences that wouldn't tend to arise spontaneously. That is, we can begin to discern a basis for a form of causal openness in the universe."

 

However, by adding in his dynamic interaction between his teleological constraints and physical causality, he introduces a second level of objective causal closure defined by his thermodynamics. Notice that he admits this is a belief, not an empirical fact:

 

"By reframing the problem in these dynamical terms, I believe we will discover that rather than being the ultimate “hard problem” of philosophy and neuroscience, the subjective features of neural dynamics are the expected consequences of this emergent hierarchy. The so-called mystery of consciousness may thus turn out to be a false dilemma, created by our failure to understand the causal efficacy of emergent constraints."

 

In his closing passages, again stating this as belief rather than empirical fact, he attempts to nail shut the coffin of the zero, or “absence”, of the hard problem to its ultimate RIP:

 

"I believe that human subjectivity has turned out not to be the ultimate “hard problem” of science. Or rather, it turns out to have been hard for unexpected reasons. It was not hard because we lacked sufficiently complex research instruments, nor because the details of the process were so many and so intricately entangled with one another that our analytic tools could not cope, nor because our brains were inadequate to the task for evolutionary reasons, nor even because the problem is inaccessible using the scientific method. It was hard because it was counterintuitive, and because we have stubbornly insisted on looking for it where it could not be, in the stuff of the world. When viewed through the perspective of the special circular logic of constraint generation that we have called teleodynamics, this problem simply dissolves."

 

He then plays to the darkly shaded tune of these absences, holes or zeros as you prefer:

 

"The subjectivity is not located in what is there, but emerges quite precisely from what is not there. Sentience is “negatively embodied” in the constraints emerging from teleodynamic processes, irrespective of their physical embodiment, and therefore does not directly correlate with any of the material substrates constituting those processes. Intrinsically emergent constraints are neither material nor dynamical—they are something missing—and yet as we have seen, they are not mere descriptive attributions of material processes, either. The intentional properties that we attribute to conscious experience are generated by the emergence of these constraints—constraints that emerge from constraints, absences that arise from, and create, new absences."

 

and in closing states full circle that we are back to a purely objective causality, lacking any need for subjective existence:

 

"But this negative existence, so to speak, of the conscious self doesn't mean that consciousness is in any way ineffable or non-empirical. Indeed, if the account given here is in any way correct, it suggests that consciousness may even be precisely quantifiable and comparable, for example, between states of awareness, between species, and even possibly in non-organic processes, as in social processes or in some future sentient artifact. This is because teleodynamic processes, which provide the locus for sentience in any of its forms, are precisely analyzable processes, with definite measurable properties, in whatever substrates they arise."

 

Here is where the Wikipedia editor’s comment was right:

 

The book expands upon the classical conceptions of work and information in order to give an account of ententionality that is consistent with eliminative materialism and yet does not seek to explain away or pass off as epiphenomenal the non-physical properties of life.

 

The difficulty here is that it is both consistent with eliminative materialism and that the only sense in which mind is now “not epiphenomenal” is that it has been completely explained away as simply a case of objective thermodynamic teleodynamics. This use of entention as a purely mechanical generalisation of intent opens the floodgates both to any form of AI that adopts the raw form of teleodynamics and directly to the dread of eliminative materialism, consigning the entire scope of the subjective realm to annihilation in a thermodynamic teleology that in no way captures the true nature of diversity, surprise, creativity or insight, except in the evolutionary model of random accumulation of adventitious, and hence "useful", teleological information. Furthermore, no such purely thermodynamic reality can in any way manifest itself subjectively, so it is simply a model thought in the mind of the reader, not an actuality manifesting the subjectivity of every conscious living agent, although the teleological thermodynamics of life may be a valid description in statistical mechanics. Deacon’s description can thus be a partial comfort only to dedicated believers in pure materialism.

 

Hence it no more solves the hard problem than does any model of brain dynamics that discovers processes which physically approach the sorts of unstable sensitivity, or other characteristics, that form an objective description confluent with subjectively conscious mental states. In this respect “Incomplete Nature” is simply addressing, in thermodynamic terms, some of the easy problems around the abstract functionality of brain states, not the hard problem itself.

 

Deacon has claimed that this teleology is so causally complete that it has automatically, in a purely descriptive account, rendered quantum reality irrelevant:

 

"It didn't even require us to invoke any superficially strange and poorly understood quantum effects in our macroscopic explanations in order to account for what prior physical intuition seemed unable to explain about meaning, purpose, or consciousness. ... More important, the scale at which we do unambiguously recognize ententional properties is vastly larger than the scale of quantum events, and in between there are only thermodynamic and chemical processes."

 

But this claim is self-fulfilling, for it stands precisely alongside the obvious fractal structure of brain tissue, which likewise runs dynamically in a hand-shaking interaction between global wave states, cellular action potentials and ion channels at the quantum level, modulated by edge-of-chaos transitions and stochastic resonance at the unstable global tipping points where make-or-break survival decisions are made, amid wave coherence sampling that is itself homologous with quantum measurement under the uncertainty principle. This is where we come full circle and have to recognise that, while Terrence did claim that teleological thermodynamics overlays quantum reality, he has in no way established that it has overruled it. Hence we come back to square one. In the evolutionary model, each adventitious mutation is an example of a single unrepeated quantum instance. His very own analogy between evolution and neurodynamics implies that adventitious thought may likewise arise from a single quantum instability induced at an unstable neurodynamic tipping point, and we know from recent research that the quantum world approaches the classical only under conditions of IID (independent and identically distributed measurements), as Gallego & Dakić (2021) have shown – conditions to which neither evolution nor neurodynamics conform.
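The force of the IID condition can be made concrete with a toy numerical sketch (illustrative only: the probability p and the Monte Carlo setup are my own assumptions, not drawn from Gallego & Dakić's actual protocol):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical qubit measured in the computational basis: outcome 1 occurs
# with Born probability p (p = 0.3 is an arbitrary illustrative choice).
p = 0.3

# IID regime: the same preparation measured many times over; relative
# frequencies converge on the Born probability, recovering classical statistics.
n = 100_000
outcomes = rng.random(n) < p
freq = outcomes.mean()

# Non-IID regime: a single unrepeated instance (one mutation, one neural
# tipping point) yields just one outcome, from which p cannot be recovered.
single = int(rng.random() < p)

print(f"IID estimate of p from {n} trials: {freq:.3f} (true p = {p})")
print(f"single unrepeated outcome: {single} (reveals almost nothing about p)")
```

Classical statistics emerge only in the first case, through an ensemble of identically prepared trials; a single unrepeated quantum event, like an adventitious mutation, leaves no ensemble over which classical behaviour could emerge.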

 

In his conclusion, Deacon sets out to claim this gives humanity hope of meaning in existence:

 

“Perhaps the most tragic feature of our age is that just when we have developed a truly universal perspective from which to appreciate the vastness of the cosmos, the causal complexity of material processes, and the chemical machinery of life, we have at the same time conceived the realm of value as radically alienated from this seemingly complete understanding of the fabric of existence. In the natural sciences there appears to be no place for right/wrong, meaningful/meaningless, beauty/ugliness, good/evil, love/hate, and so forth. The success of contemporary science appears to have dethroned the gods and left no foundation upon which unimpeachable values can rest. … As I lamented in the opening chapter of this book, the cost of obtaining this dominance over material nature has had repercussions worldwide. Indeed, I don't think that it is too crazy to imagine that the current crisis of faith and the rise in fundamentalism that seems to be gripping the modern world is in large part a reaction to the unignorable pragmatic success of a vision of reality that has no place for subjectivity or value. The specter of nihilism is, to many, more threatening than death. By rethinking the frame of the natural sciences in a way that has the metaphysical sophistication to integrate the realm of absential phenomena as we experience them, I believe that we can chart an alternative route out of the current existential crisis of the age—a route that neither requires believing in magic nor engaging in the subterfuge of ultimate self-doubt. … If quantum physicists can learn to become comfortable with the material causal consequences of the superposition of alternate, as-yet-unrealized states of matter, it shouldn't be too great a leap to begin to get comfortable with the superposition of the present and the absent in our functions, meanings, experiences, and values.”

 

Fig 96: Title image to “Incomplete Nature” – the complete ablation of the

subjectively conscious volitional mind in favour of thermodynamic abstraction.

“My physically causal brain made me do it” becomes “teleological thermodynamics made me do it”.

 

But what Deacon has actually done is to screw down the death grip on true subjectivity even tighter. We end up with the opposite – eliminative materialism – rejecting the notion of mind and consciousness altogether as an archaic misconception, as Ryle did, so that mind becomes nothing but a gap in the description of reality itself, substituted for by a regime of objective thermodynamics complementing physical materialism.

 

In a recent development (Kourehpaz et al. 2022), the arrow of time and the Boltzmann distribution associated with thermodynamic entropy – which do not exist in quantum systems, since these are entirely time-reversible – have been attributed to quantum chaos, implying that quantum reality and chaos are essential to the concept of thermodynamic time flow.

 

Quasi-Particle Materialism

 

An interesting example of applying quantum processes to teleodynamic ideas is that of Roman Poznanski and coworkers (Poznanski et al. 2019a, b). Here they invoke a quantum process involving a variety of speculative quasi-particle-like phenomena.

 

To start, I’ll explain what I think is missing in Roman's pan-mat view of consciousness. Pan-mat is one of several theories of conscious brain function in which a phenomenon, such as quasi-particles, that may well play a part in how the conscious brain functions, is given privileged status in a “hopeful monster” theory of consciousness that depends on physical materialism and seeks to eliminate the obvious exit routes in physics.

 

Organismic tissues constitute a unique state of matter, in a self-organised phase transition between metal and insulator, that creates a unique quantum-molecular paradigm. Figure 81 shows that the myoglobin molecule displays a series of fractal and self-organised critical properties that all organic molecules in tissue possess, from the origin of life, in a new state of matter stunningly more advanced than the solid-state physics of silicon. We simply don't understand or respect how tissues work, because we don't understand that they are macroscopic quantum phenomena possessing latent attributes of consciousness. Nottale (2014) and Turner & Nottale (2017), which the pan-mat research cites, show macro-quantum effects via pseudo-quantum theories in which Planck’s constant can be rescaled. But this simplifies what is really happening, because we can't just jig Planck's constant; what they are displaying are genuine macro-quantum features that are actually held together by molecular fractality, from the quantum level all the way to the brain and universe as shown in fig 1, so that cell organelle and tissue structure is coordinated in a state of quantum-fractal super self-organised criticality, as a far-from-equilibrium self-organising system. This doesn’t immediately look like a quantum fractal because, in living systems, genes have encoded proteins to adopt useful fractal forms at the edge of chaos – forms that themselves constitute dynamical quantum reality at the edge of chaos – although functionally it looks a little like a biological machine. At the centre of the cyclone is the conscious brain, which is the ultimate cosmological interactive consequence of the symmetry-breaking of the forces of nature.
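The “edge of chaos” invoked here is a standard notion in nonlinear dynamics. As a neutral illustration (a textbook logistic-map toy, not a model of tissue), the Lyapunov exponent sits near zero at the period-doubling accumulation point r ≈ 3.5699 and is strongly positive in the fully chaotic regime:

```python
from math import log

def lyapunov(r, n_transient=1000, n_iter=10000):
    """Estimate the Lyapunov exponent of the logistic map x -> r*x*(1-x)."""
    x = 0.4
    for _ in range(n_transient):        # discard the transient
        x = r * x * (1 - x)
    total = 0.0
    for _ in range(n_iter):
        x = r * x * (1 - x)
        total += log(abs(r * (1 - 2 * x)))   # log of the local stretching rate
    return total / n_iter

# Near the accumulation point r ~ 3.5699 the exponent hovers close to zero
# (the "edge of chaos"); at r = 4 the map is fully chaotic, exponent ~ ln 2.
print(f"edge of chaos (r=3.5699): {lyapunov(3.5699):+.3f}")
print(f"fully chaotic (r=4.0):    {lyapunov(4.0):+.3f}")
```

A near-zero exponent marks dynamics poised between order and chaos, where small perturbations neither die out nor explode, which is the sense in which “edge of chaos” is used above.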

 

The only meaningful empirical conclusion that can be drawn about the relation between subjective consciousness and physical reality is that “subjective conscious volition has efficacy over the physical universe”, although this is an undecidable proposition, in the Gödel sense, from the physical viewpoint, despite being certain from the Cartesian viewpoint: “cogito ergo sum”. Symbiotic Existential Cosmology sees this as an additional cosmological axiom of primal subjectivity at the quantum level which becomes fully manifest in eucaryote cellular and organismic consciousness.

 

The problem with Pan-mat is that it tries to limit all discussion to a classical viewpoint of physical materialism through an argument depending on analogy alone, stripping itself of the principles by which the hard problem can be solved: (1) It is empirically and theoretically impossible to demonstrate physical causal closure of the universe, or of the brain as a self-organising system, so founding a theory of consciousness on physical materialism is and remains unprovable, except as a kind of religious belief. (2) Roman and co-workers are using a social concept of multiscalar organisation which is inadequate to the task, rejecting three key features of how the living universe works: (a) quantum reality, (b) self-organised criticality and (c) the intrinsically fractal nature of the symmetry-broken standard model of physics. No theory which neglects all three can have any hope of a successful description of life or intelligence in the quantum universe. (3) The actual core tenets of the theory are not empirically verifiable or even manifestly plausible. (4) It is based on quasi-particles through a quantum analogy when good science doesn’t work on analogical assumptions.

  

As a field-like theory, the quasi-particle aspect has about the same strengths and weaknesses as other field-like  theories of consciousness discussed earlier, but its claims to pan-experiential materialism are more problematic:

 

Quantum chemistry underpins the so-called ‘quantum underground’ where consciousness originates. Due to inherent uncertainty in electron localization, the weakest type of Van der Waals force, the so-called London force, exhibit quantum effects and the resultant dipole oscillations (see Hameroff (2008) for a review). London forces are quantum level instantaneous, but weak, induced dipole-induced dipole couplings, due to charge separation resulting in attractive Coulomb forces occurring between nonpolar π-electron resonance clouds of two or more neutral atoms, molecules or macromolecules.  

 

We also assume that guidance waves and their resultant thermo-quantum fluctuations govern subtle internal energy leading to hierarchical thermodynamic transfer of information in the realm of preconscious. What is transferred from the macroquantum realm to the classical realm is information (theoretic) entropy as thermo-qubits containing information on the phase relations between molecular dipole-bound electron clouds of multiple systems.

 

Fig 97: Left: (Top left) a neuronal branchlet studded with spines; (top right) a nonsynaptic spine shaft containing cytoskeletal molecular proteins bound to actin filament networks within the spine shaft of uniform length. The quantized subspace is where pilot-waves originate, at picometer scale (0.1 nm), within apolar hydrophobic regions of the actin-binding proteins shown (bottom left), which form clathrate-like structures with water molecules. The electron clouds at 10-picometer scale (bottom right) are an order of magnitude smaller than most chemical transformations and measurements, and therefore below physical chemistry. Right: There is a critical difference between identity theory (mind = brain), above, and externalism (cosmic consciousness interacting with the brain), below. The brain-only view of conscious space prevents any form of cosmic mind.

 

This process is illustrated in fig 97, where a quantum process similar to Hameroff and Penrose's, using the London force, is invoked on the dendritic web. Specific exceptional quantum processes are thus invoked which may not prove to exist. The “classical realm” is cited as if it were a separate physical realm, when the entire dynamical process is quantum and non-IID. These do not couple to subjective consciousness, because they are just forms of hypothetical “information” in the brain dynamics acting as a cypher for consciousness, which is then considered irrelevant, in a form of “eliminativism” in favour of thermo-quantum teleology.

 

The information we are discussing is intrinsic in the sense that it remains ‘hidden’ from the operational explanations of cognitive capacities. It is neither quantum information nor integrated information, but interconnected ‘intrinsic’ information associated with the internal thermo-quantum fluctuations that arise due to guidance waves transferring thermo-quantum internal energy by means of the macro-quantum potential.

 

This becomes clear in Poznanski et al. (2019c) where, in stating “against externalism” in the title, the teleodynamic process identifies itself with brain–mind identity theory against any cosmic-mind description, even if consistent with the dynamics of the default mode, drawn together in the name of dialectical materialism:

 

The difficulty in locating the mental states which are attributed to brain activity is explored in this paper through nonreductive physicalism, which claims that all mental phenomena can be explained by the functioning of their neurobiological correlates through information. That is, higher level of brain functioning with different properties with respect to their constituents emerges and influences causally brain activity. This is not emerging reductionism-unlike in philosophy of mind, we do not subscribe to different ontology from DiaMat (dialectical materialism), which claims that consciousness is not an immaterial spirit. How this happens is the notion of teleofunctional components with intrinsic information of the material component that includes activity in brain structure. This differs from externalism as shown in the figure through proto-consciousness or cosmic consciousness interacting with brain’s neural dynamics.

 

We are reduced to physical materialism, so the purpose and utility of the thermo-quantum extension is moot. I also find the definitions and concepts to be metaphorical descriptions rather than a verifiable scientific theory:

 

It is usual to note that teleofunctions are distinct from the causal-role functions involved in functionalism. For example, with ordinary function the elements are irrelevant, while in teleofunction they are relevant because the function concerns what something is for and the notion of what something was selected for counts. This is a “teleological” notion of function, which can be minimally defined as the activity of the structure. Moreover, teleofunctions carry a value judgment or ‘meaning’ (unlike ordinary function, which carry only action or resultant effect). There is meaning attached to the interconnectedness of intrinsic information content of mental states and their interrelationship during the unification process. … The integration of brain functions results in a new category because of the functionally-linked continuum that is referred to as teleosemantic hierarchy of intrinsic content of brain states. Since the quantum realm is characterized by indeterminism, i.e., non-causal effects, this is incongruent to nonreductive physicalism where quantum mechanisms must be acausal. The unification of mental states associated with the brain’s hierarchical organization allows quantum-like causality to take on a subtle role in the brain leading to teleofunctionality. The unification of mental states arises from the teleosemantic hierarchy of interconnected intrinsic information content of mental states.

 

In Poznanski & Brändas (2020) the authors provide a sweeping philosophical justification of their thesis, now described as pan-experiential materialism. The use of “experiential” is a misnomer, because the authors state that panexperiential materialism claims that the mental, immaterial realm, just like the physical world, can be derived from matter waves – an attempt to explain preconscious activity through the unproven claim that certain kinds of brain activity attribute "meaning" through thermo-teleodynamics. This forecloses on experiential consciousness by claiming it is a distorted superstructural perspective on a deeper preconscious dynamic arising from the quantum-thermal milieu and from pure information gaining "meaning". It thus uses a purely objective physicalist description of the brain's subconscious processing, which may rise to conscious attention if it is coherently relevant, to devalue the core role of subjective consciousness. It ends up claiming to solve the hard problem in a sleight of hand of quasiparticle-enhanced pure materialism.

 

Poznanski et al. (2022) extend this description to give it more detailed theoretical support. I remain cautious, even sceptical, about the scientific verifiability of this overall construction. Many of the descriptions are speculative, unverified and circularly self-fulfilling. The authors cite Bohmian mechanics extended to molecules, but Bohmian mechanics is inconsistent with particle creation and annihilation, which bond formation and breakage involve. I thus have issues with the verifiability of the sub-molecular thermo-teleoquantum description and its assumption of the role of 1pp experienceability, in a purely materialistic teleological form, which effectively denies subjective conscious volition over the physical universe and thus becomes a form of preconscious epiphenomenalism:

 

Experienceability is the capacity for an experience that occurs preconsciously and serves as an affective function that explains experience as a testament of consciousness. We base our model on the meaning of the feeling-as-information hypothesis suggesting that evanescent meaning arises from patterns of intrinsic information-carrying physical feelings. In other words, evanescent meaning as information gives form to feelings, meaning sentience as the capacity of feeling. Since the information is intrinsic, physical or raw feelings are hidden from phenomenal introspection. ... Preconscious experienceability is realized physical feelings with vast numbers of atomic microfeels formed as physical feelings (holons) in informational holarchies.

 

The notion of “microfeels” bears an interesting comparison with the “raw feels” cited in the Stanford account of dualism: “At least since the time of Ryle’s Concept of Mind (1949), it has been assumed that thinking can be handled in a dispositionalist way; so only sensations or ‘raw feels’ constitute a problem for the physicalist”. Experienceability is then identified with π-orbital electrons, but all heterocyclic molecules, from the nucleic acid bases through tryptophan to the porphyrin ring, have delocalised electrons; they are integral to all biological processes. To identify experienceability with π-orbital electrons in a specialised brain dynamic is facile, and confines it arbitrarily relative to these other processes:

 

The feeling-as-information hypothesis can be applied to nonpolar hydrophobic regions, which are non- electrolytic regions of fatty acids and intersect with lipophilic membrane proteins' domains of mainly partially holistic amino acid units. Here holistic is defined as the molecular wave function expressed over many delocalized electrons where parts of molecules are partially holistic with a specific internal energy that keeps it together by canceling the classical potential energy and, as shown in long-range order in a thermalized milieu must partially keep π-electrons phase differences in the self-organized molecular orbital only in the absence of classical potential energy.

 

The same problem occurs with the discussion of quantum entanglement:

 

Our theory posits that nonlocal holonomy results from negentropic entanglement of nonintegrated information through the ‘negentropic force’. Therefore, there can be no "spooky action," and nonlocal holonomy in brains is through negentropic entanglement. The nonintegrated information juxtaposes several mechanisms on the quantum / classical regime boundary.

 

The explanatory gap between subjective consciousness and neurodynamics is now explained through classical information:

 

We have proposed a solution to close the explanatory gap. Classical information theory measures the decrease in uncertainty, for example, when consciousness ends, and memory begins. The spontaneous potentialities are environmentally influenced. Intrinsic information is Fisher’s information that relies on uncertainty and, at the fundamental level, comprises negentropic influences that carry molecular-embedded microfeels of meanings that comprise semantic information at a large scale. This latter process involves the capacity of negentropically interconnected partially holistic molecules to mutually affect, i.e., in-form, each other. This is the basis of negentropic entanglement. Although we propose a molecular approach, there is a problem with coupled process structures, in-forming each other on all levels of a neural organization. A dual-aspect theory of information, where one is classical and non-classical, is a wedge to close the exploratory gap between the mind and the body.
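For orientation, the Fisher information invoked in the passage above is a well-defined statistical quantity; the textbook Bernoulli case (my own illustration, not the authors' usage) shows it is simply the variance of the score:

```python
# Fisher information of a single Bernoulli(p) observation: I(p) = 1/(p(1-p)).
# We verify it as the variance of the score, d/dp log L(p; x).
p = 0.3
analytic = 1.0 / (p * (1 - p))

# Score for each outcome: d/dp [x*log(p) + (1-x)*log(1-p)] = x/p - (1-x)/(1-p)
score_1 = 1 / p            # outcome x = 1, probability p
score_0 = -1 / (1 - p)     # outcome x = 0, probability 1 - p

mean_score = p * score_1 + (1 - p) * score_0      # expected score = 0
var_score = p * score_1**2 + (1 - p) * score_0**2

print(f"analytic I(p) = {analytic:.4f}, Var(score) = {var_score:.4f}")
```

Fisher information thus quantifies how sharply an observation discriminates parameter values; whether it can "carry molecular-embedded microfeels of meanings", as the quoted passage asserts, is a separate and unverified claim.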

 

Quantum models of consciousness are then incorrectly discarded in favour of “evanescent molecular micro-feels”:

 

Finally, quantum models of consciousness are unrealistic from a biological perspective because the brain is an open quantum system. As Tegmark criticized the Orch OR model, the wave function has rapid decoherence. Our model is based on dissipative phenomena and does not suffer the same problems as other quantum models. No informational field continuously and permanently gives off the "radiance of consciousness". What we postulated are molecular-embedded microfeels that are evanescent. Therefore, by our definition, consciousness represents quanta of information (i.e., the temporal waveform in phase-space) that in each moment actualizes into conscious experience certain selections of the unconscious molecular embedded microfeels that, for long periods, resting in potentiality, constitute the preconscious experienceability.

 

Roman, in a personal communication of 5 Jul 2022, specifically concedes that these quasi-particles cannot be empirically validated, yet maintains that they are nevertheless subjectively first-person conscious in reality:

 

The quasiparticles in the brain are not capable of being observed.
The complex nature of these signals makes them a theoretical petri dish but they are 1pp in reality.

 

On 1 Mar 2023, in regard to Alemdar et al. (2023), Roman states:

 

Consciousness is a quasi-polaritonic wave, see papers recently published.  

Quasi-polaritonic waves that are moving result in standing waves when consciousness ends and memory starts. 

 

Polaritons are hybrid quasiparticles resulting from strong coupling of electromagnetic waves with an electric or magnetic dipole-carrying excitation. Examples of such a dipole include an electron–hole pair in a semiconductor, which forms an exciton polariton, and the oscillating electrons at the surface of a metal, which create a surface-plasmon polariton. Obviously we don't think a transistor is conscious, although it involves polaritonic waves. If we are dealing with the same type of quantum entity in the brain, why is one conscious and the other not? This is the same type of critique as that posed by Merker et al. (2022) concerning IIT.

 

Roman: As Sperry (1970) suggested, consciousness must be investigated objectively and we have done this through a multiscalar lens (Alemdar et al. 2023). Starting with quasiparticles as the minimalist material composition of the dynamical brain where interferences patterns between incoherent waves of quasiparticles and their quantum-thermal fluctuations constrain the kinetic internal energy of endogenous molecules through informational channels of the negentropically-derived quantum potential. This indicates that brains are not multifractal involving avalanches but are multiscalar, suggesting that unlike the hologram, where the functional interactions occur in the spectral domain, the spatiotemporal binding is multiscalar because of self-referential amplification occurring via long-range correlative information.

 

Chris: Multiscalar is an ill-defined concept to do with scales of interaction in human societies (Olteanu et al. 2019). Nor is multifractal adequate, but denying quantum fractality outright is in empirical conflict with molecular biocosmology scale transitions (see figure 51). You are welcome to confine your research to Sperry, but it is inevitably pure physical materialism, which will never bridge the explanatory gap, and you are simply conflating one supposed quantum feature of neurodynamics with consciousness arbitrarily, because you believe in it, not because it is proven or provable.

 

You have just shown that your subjective conscious volition produced your reply – clear evidence of the subjective conscious volition you witnessed while composing and dispatching your e-mail. That IS evidence! To deny it you either have to admit you are a zombie who can't consciously affect the universe, or pronounce some epiphenomenalistic or fully materialistic jargon to deny the volition you know you exercised. We can all tell from the tenor of your reply that you consciously knew what you were doing, even in the age of chatGPT.

 

Roman: Quantum consciousness remains unproven and does not fit into quantum biology, but multiscale neuroscience does.

 

Chris: Quantum consciousness is not in the same category as multiscale neuroscience: consciousness is subjective and neuroscience is physical. You can't validly refute one on the basis of assuming the other. The statement that quantum consciousness doesn't fit into quantum biology is a contradiction, as consciousness does fit integrally into animal biology. Nothing in your theory is proven or provable either, since you admit that "The quasiparticles in the brain are not capable of being observed".

 

Roman: We have quantum analogs as classical properties of the quantum realm that fit onto a molecular scale of brain functionality. At this scale, phenomenal consciousness is a hybrid quasiparticle wave. It depends on its material property and represents a gateway to memory through the transfer of information, a derivative of energy transduction.

 

Chris: This is simply a statement of faith in pan-experiential materialism, which fails the hard problem extended to volition, as it effectively claims identity between polaritons and subjective consciousness. But this fails the same test that Merker et al. (2022) posed over IIT, since transistors and many other non-conscious systems utilise quasi-polaritonic waves but are not conscious. Why is phenomenal consciousness specifically and exclusively a hybrid quasi-particle wave at the molecular scale? Every molecule is a quantum entity, but it is only phenomenal consciousness if it also has a subjective phenomenology. You are in effect assuming that one kind of quantum entity is a classical property and at the same time that it is conscious by analogy.

 

Roman: It refutes the premise of dual aspect information since quantum information is not involved. I am not referring to weak dualism concerning the "outer core" of the quasi-polariton. This is because when we view through a multiscalar lens, information has various layers of the physical attribution/functionality that constitute the mind. The outer core is where ions collect, which is associated with cognition. The inner core is the unique material composition of consciousness – the quasiparticles and their interference patterns. We must further assume that quasi-polaritons as quasiparticles are archetypal quantum analogs of conscious experience that naturally depend on cognitive dynamics attributed to the arrangement of ions encasing the polaritonic wave. They are subjective because they encase the cognitive aspects of organisms uniquely.

 

Chris: You are claiming a form of pan-experiential materialism which does not appear to link to consciousness except by analogy. The notion of an inner core of consciousness of quasi-particles and their interference patterns and an outer core containing ions as cognition is scientifically unverifiable. Therefore the theory is unscientific conflation and the copyright claim below has no conceptual basis in quantum uncertainty any different from any physical phenomenon such as a transistor.

  

Roman: What is consciousness? Many people of the old generation assume it is neural networks. The biggest change that we found is that consciousness is precognitive and the neural networks are entrained for consciousness to be expressed. Quantum mechanics is also irrelevant (see quantum statistical thermodynamics via Nelson 1966 reference). Consciousness does not have an electrical component and is therefore not within the electro-ionic brain of the two-brains hypothesis but resides in the electromagnetic brain (Bercovich et al., 2017). Here, ions are not flowing as currents; the thermodynamic state creates the brain’s internal energy, which includes covalent energy between molecular bonds and dispersion energy due to van der Waals bonds. Consciousness is a process that carries an integrative function, arising when the brain’s internal energy is dynamically transformed (i.e., changes) by van der Waals bonds.

 

Chris: You seem to have an electromagnetic ghost lurking in the electrochemical machine. That is interesting and deserves critical attention.

 

Fig 98: Reconstructed pyramidal neuron from macaque monkey with the simulated electrical activity and corresponding magnetic field. The “two brain” theory postulates a second electromagnetic ghost in the electrochemical “machine” of the brain operating in complementary parallel.

 

Chris: Consciousness is the active process of attending to and engaging subjective experience, either internally or through the physical senses. Your definition of consciousness above is unnatural, and your claim that "The act of understanding uncertainty is consciousness" is also unnatural, as it derives from an abstruse definition of autopoiesis – the "central dogma" of information (Cárdenas-García, 2022). It has nothing whatever to do with quantum uncertainty. Your definition of "pre-cognitive" is ill-posed, because precognitive dreaming, for example, is pre-experiential, not pre-rational. Furthermore, saying it is pre-cognitive is ambiguous as to whether pre-cognition is experienced at all. The two-brains hypothesis (Bercovich et al., 2017) is interesting but highly speculative.

 

Your published definition of consciousness (Alemdar et al. 2023) is:

 

The "meaning essential for understanding uncertainty" as a definition of consciousness falls into the category of semantic information in the central dogma of information (Cárdenas-García, 2022). … The act of understanding uncertainty is consciousness. It does not need to make sense for "understanding" to take place. As the term "understanding" in the above definition of consciousness is precognitive, i.e., consciousness is intrinsic to affect but not to cognition. As per the central dogma of information, semantic information interactions with syntactical information is when conscious recall arises instead of memory, and only in the presence of "uncertainty" is memory reconsolidated (Solms, 2017).

 

The central dogma is purely about info-autopoiesis, which is teleological but not an innovative process, which subjective consciousness is. Therefore your entire holonomic argument rests on autopoiesis and multi-scalar system analysis, which in no way validly rebuts self-organised criticality or fractality. Your "anti-quantum" approach is cited as deriving from a Brownian-motion classical interpretation of quantum mechanics (Nelson, 1966), and the entire case is highly speculative. While I like the citation of an electromagnetic ghost in the electrochemical machine, I can still see no experimental evidence in vivo or in vitro to support this.

 

The abstract to The Central Dogma of Information (Cárdenas-García, 2022) states:

 

Info-autopoiesis or the self-referenced, recursive, interactive process of information self-production that engages all living beings in their efforts to satisfy their physiological and/or relational needs relies on Bateson's "difference which makes a difference". Living beings, as active manipulators/observers of their environment, derive meaning from the sensorially detected motion of matter and/or energy in the Universe. The process of info-autopoiesis in humans is found to be triadic in nature and incorporates the simultaneity of a quantitative/objective perspective with a qualitative/subjective perspective. In this process of meaningful engagement with the environment, humans create and transform endogenous semantic information into countless expressions of exogenous syntactic information, which is synonymous with ordered material structure and artificial creation.

 

In my understanding, panexperiential materialism doesn't explain "what it is like" to be consciously aware. Understanding relationships in their wholeness is an objective construct to make an abstract, objectively "out there" model of mentality as an autonomous "homeostatic/teleological" feedback system – a physically definable form of organismic agency in a purely biological sense. It is thus a functionally extended form of pure materialism. This is simply not what happens. Conscious experience is self-manifesting. It is irrefutably present to our existential awareness. Our experiential purview of reality is entirely derived through subjective conscious experience, qualitatively entire as a distinct category from physical processes. Subjectivity can only be experienced, not observed. We experience only through our subjective consciousness and infer the physical world view from consensual conscious sensory experiences of the world at large, learning only indirectly, from trial and error, that we are biological organisms vulnerable to physical misadventure.

 

I provisionally accept that living systems may exhibit a degree of short-term teleodynamics, particularly arising from their evolutionary context, but I see it as a limited, necessary but insufficient condition for conscious existence. The authors may be speculating on plausible sub-molecular processes that underlie normal brain functioning, consistent with an interface to, and potentially informative about, subjective consciousness, but whether these explain the hard problem or the existential nature of conscious existence remains unestablished.

 

Strong Artificial Intelligence versus Understanding Natural Consciousness

 

We have seen that a number of neuroscientific theories of consciousness including ART, AST and IIT cite the possibility of their use in artificial neural networks supporting artificial intelligence. However some authors are interested in exploiting their discoveries for fame or profit. During the debate with Roman Poznanski on his theory of pan-experiential materialism, it emerged that he had trademarked a definition of consciousness with a view to it being used entrepreneurially in "Strong AI".

 

Roman: "The act of understanding uncertainty is consciousness" (This definition is copyrighted ® 12/12/22). "It does not need to make sense for 'understanding' to take place. As the term 'understanding' in the above definition of consciousness is self-referential and precognitive, i.e., consciousness is intrinsic to affect but not to cognition. Experimental findings from acknowledged work on savants indicate that the disintegration of spatiotemporal patterns of neuronal activity can bring about lower-level recall of vast raw less-processed information at the expense of cognitive functions, such as conceptual thinking. Consciousness is a savant-like process in all of us with a difference. Instead of raw sensory information, it is endogenously produced semantic information as raw physical feelings."

 

Brian Josephson: Really? "5 Things You Can't Copyright": That catchy slogan you came up with for your business? No dice on a trademark. The good news is that while not protected by copyright, if it pertains to your business (for example, goods and services), it can be protected with a trademark. Also exempt: names, titles, short phrases or expressions, product descriptions, pseudonyms, titles of works, business names.

 

Cathy Reason: However, I notice that he wrote ® rather than ©. So did he in fact register his phrase as a trademark rather than asserting copyright, which it seems would have been allowed, if no one else had done it before him?

 

Brian: This may help: The registered trademark symbol, ®, is a typographic symbol that provides notice that the preceding word or symbol is a trademark or service mark that has been registered with a national trademark office. A trademark is a symbol, word, or words legally registered or established by use as representing a company, product or service.

 

Chris: I “copyrighted” this idea as author under Creative Commons on 31/10/21:

 

Consequences: Affirmation of human subjectively conscious physical agency and legal responsibility. Occam's razor is reversed, eliminating pure materialism as inconsistent with conscious physical volition. Brain dynamics is not causally closed, due to quantum uncertainty, enabling subjective consciousness to seamlessly participate in the uncertain instabilities of coherent processing without resulting in neurodynamic causal conflict.

 

Uncertainty and mind: The action of mind on brain necessarily arises from modulating the "random" aspect of quantum uncertainty in edge of chaos brain processing. This enables volitional will to intervene in the brain without disrupting the partial causal closure in brain processing in the uncertain quantum universe. In this sense, classical causality is replaced by quantum consciousness. It provides plenty of room to affect the computationally-intractable uncertain outcomes in evolutionary survival, using both subjective anticipation inherited from single celled eucaryotes a billion years before neural systems evolved and historical experience generated by cognitive processes.

 

Symbiotic Existential Cosmology is CC BY-NC-ND 4.0, which means it is free to use intact and cited, but not to reuse in uncited pieces. SEC has been listed on ResearchGate as published since June 2021. It obviously implies that "consciousness is understanding uncertainty".

 

Roman: "Consciousness as uncertainty" is not what I suggested to be copyrighted: "The act of understanding uncertainty is consciousness".

 

Chris: Yes, but it's wrong as it stands. The act of understanding uncertainty is conscious, but it's not consciousness. We need to do more than understand it; we need to act upon it to survive, which is why volition is pivotal. Watching a red rose is an act of conscious certainty – that it's not an alligator, or any other threat, except from its thorns.

 

I have a succinct cosmology whose central thesis is that subjective conscious volition acts upon the physical universe exclusively through quantum uncertainty, to avoid causal conflict, as well as anticipating environmental threats rather than just 'understanding' them. It's an important additional axiom to quantum cosmology (and neuroscience). I don't want to see spurious later "copyright" claims distracting from its originality.

 

Roman: You are right. It is not yet a trademark unless it is used in business for 3 years. Nevertheless, I put that in just to stop others from taking the sentence. Yes, Mark Solms did mention something partially in line with the sentence but I believe it was not the entire sentence verbatim.  This has nothing to do with quantum uncertainty. 

 

Chris:  All uncertainty arises from quantum uncertainty. It is an inescapable fundamental cosmological principle. All environmental uncertainty, as conceived of in neurodynamics, or the sort of uncertainty that Solms talks about in terms of strategic survival ultimately comes from contextual instabilities derived from quantum uncertainty.

 

The fundamental discovery in Symbiotic Existential Cosmology is the link between subjective conscious volition, quantum uncertainty and organismic survival, so neural processes are interactive manifestations of uncertainty as a cosmological principle. This means I have the copyright on this cosmological premise, not you Roman, nor Mark.

 

The final point is that it is impossible to eliminate quantum effects in the brain, because the fractal, non-IID (independent, identically distributed) nature of quantum measurements in neurodynamics means no convergence to the classical interpretation takes place, because each event changes the context. Hence all the assumed non-quantum processes are quantum dynamics, so the notion of classicality in neurodynamics is invalid.
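A purely classical analogy (mine, not the source's) illustrates why context-changing events block convergence: a Pólya urn, in which each draw alters the odds for the next, has no single limiting frequency across runs, whereas IID sampling always converges to one fixed probability. A minimal sketch, assuming nothing beyond the Python standard library:

```python
import random

def iid_freq(p=0.5, n=20_000, seed=None):
    # IID trials: the running frequency converges to the fixed probability p.
    r = random.Random(seed)
    return sum(r.random() < p for _ in range(n)) / n

def polya_freq(n=20_000, seed=None):
    # Pólya urn: each draw changes the context for every later draw (non-IID),
    # so the limiting frequency is itself random and differs run to run.
    r = random.Random(seed)
    red, black = 1, 1
    for _ in range(n):
        if r.random() < red / (red + black):
            red += 1
        else:
            black += 1
    return red / (red + black)

iid_runs = [iid_freq(seed=s) for s in range(5)]   # all close to 0.5
urn_runs = [polya_freq(seed=s) for s in range(5)]  # scattered: no classical limit
print(iid_runs)
print(urn_runs)
```

The urn is only an analogy for the non-IID point, not a model of neurodynamics: when each outcome reshapes the context of the next, the ensemble never settles into one classical distribution.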

 

Conceiving the brain as a prediction machine cannot be done classically because the part that corresponds to conscious volition is collapsing the alternative Schrödinger cats of informational prediction, completing the prediction machine in free will. Consciousness corresponds to uncertainty because the computational aspects of prediction don't need to be conscious, but the uncertain aspects are uncomputable.

 

This doesn't mean we need dedicated quantum pathways like Stuart Hameroff and Roger Penrose's model, nor specific quasi-quantum classical pathways like Roman's polaritonic waves. We just need to respect the broad brush strokes of empirical neuroscience with the added quantum indeterminacy, as in SEC, because tissue is the most exotic form of quantum material in the universe, quite unlike the excitons and quasi-particles of solid-state physics, which require low temperatures; by contrast, these processes in tissue readily take place in the thermodynamic milieu at 25–37 °C.

 

Alfredo Pereira: I am puzzled because almost everybody agrees that phenomenological consciousness involves three mental functions: cognition, affect and action control. How to reduce consciousness to one cognitive concept, namely to reduction of uncertainty? The big Cartesian error remains, identifying consciousness with certainty. If we remain uncertain about something we cannot be conscious of it?

 

Chris: The problem with cognition, affect and action control is that it conflates things which don't need to be subjectively conscious with those that do, so the picture gets muddied. We aren't identifying consciousness with certainty, because then the process is essentially over, whether it is an "aha" or a decision-making process leading to action upon the physical universe. Processes like cognitive thought are easy to represent as physically-determined computational processes which can be expressed logically, and so could be handled unconsciously and consciously perceived only as a foregone conclusion. Affect is too vague, because it is associated with ill-defined feelings which are limbic-system mammalian responses essential to survival, such as love, hate, disgust, contempt and fear. Action control hints at volition, which is preceded by decision-making at a higher level, where the source becomes elusive.

 

A completely different way of dealing with the really creative questions, deeper in the intuitive process of subjective conscious decision-making, is to view neurodynamics as a fully quantum process, in the most exotic quantum material in the universe, in which the wave aspects consist of parallel excitation modes representing the competing possibilities of response to environmental uncertainties. If there is an open-and-shut case on logical or tactical grounds, this one will win out, pretty much in the manner of Edelman's (1987) neural Darwinism or Dennett's (1991) despised multiple drafts. In terms of quantum determinism, the non-conscious processes form overlapping wave functions, proceeding according to deterministic Schrödinger solutions (von Neumann type 2 processes), but in situations where subjective consciousness becomes critical to make an intuitive decision, the brain dynamic becomes an unstable tipping point, in which system uncertainty becomes pivotal (represented in instability of global states, which are in turn sensitive to fractal scales of instability down to the molecular level). Subjective consciousness then intervenes, causing an intuitive decision through a (type 1 von Neumann) process of wave function collapse.

 

From the inside, this feels like and is a choice of "free-will" aka subjective conscious volition over the physical universe. From the outside, this looks like collapse of an uncertain process to one of its eigenfunction states which then become apparent. There is a very deep mystery in this process because the physical process looks and remains uncertain and indeterminate, but from inside, in complete contradiction, it looks and feels like the exercise of intentional will determining future physical outcomes. So in a fundamental way it is like a Schrödinger cat experiment in which the cat survives, more often than not i.e. we survive.

 

Roman: As for copyright, it can if used in entrepreneurship, like Strong AI development. Do you all know that the world now has nothing but weak AI? Google invested 500 million in DeepMind to squeeze sentience out of it. They will exploit it until they see that blood cannot be squeezed out of the rock … Yes, not copyrighted but trademark. TM. Copyright law applies after 15 or more words are copied.

 

Whit Blauvelt: Trademarks, with or without registration, require the mark be established in regular use in trade, in sale of goods or services, and not in conflict with any prior marks. Enforcement is a matter of civil action. Your odds of winning a trademark infringement suit based on some phrase written here are roughly zero/infinity.

 

Chris: You can't copyright "The act of understanding uncertainty is consciousness" as a process in Strong AI. It's not a product or a process. None of the concepts are clearly defined. This would mean that every Turing computation that terminates is conscious, since it has "understood" the uncertainty of the invoked algorithm by solving it.

 

Roman: Our work on active consciousness suggests that consciousness is in the hand of man, not god, and therefore there is a way for the development of Strong AI. I am pleased with this progress. It will not only change the world but put DeepMind out of business. In the year that Strong AI stood up, I greatly appreciate that you don't send me emails to my other email or send me emails on metaphysics.

 

Chris: It's in the mind of man, but should it be in the hand? The hand-of-God versus hand-of-man statement is thus of serious concern. People are beginning to worry that you are trying to claim intellectual property rights over the very notion of consciousness, which we all possess, in the same way the US failed to ratify the 1992 Rio Biodiversity Convention, and still hasn't, to claim ownership of the world's genes. We already have too many neuroscientists, like Graziano with AST, attempting to exploit recursive biological attention with artificial neural nets. It is really irresponsible for materialistic neuroscientists to engage in the mechanised degradation of the biological consciousness we all share, in favour of classical neural nets, with or without transistors and other polaritonic components. Putting this in the context of a claim to somehow author Hard AI on a patentable basis is both unethical and pretentious, given the IBM definition:

 

Strong AI aims to create intelligent machines that are indistinguishable from the human mind. But just like a child, the AI machine would have to learn through input and experiences, constantly progressing and advancing its abilities over time.

 

I don't see how this is remotely achievable without an organoid approach. The idea that you can do better than the entire evolution of the human genome since life began, with some sort of artificial organoid, is more than a stretch of credibility, which raises very serious ethical doubts you seem to disregard. And what a diabolical mission to even attempt – the literal Frankenstein of consciousness! In fact a cited co-worker of Poznanski has published on the use of sentient organoids made of a chimaera of a human organoid and bacterial and synthetic sensors, Fig 99b (Pereira et al. 2023).

 

This morning, I noticed the Quantum Biology group had posted the link below to Smirnova et al. (2023) whose research team is headed by Thomas Hartung of Johns Hopkins, who is collaborating with the dishbrain team that taught an organoid to play pong (Kagan et al., 2022, Ledford 2022, Bowler 2022)  aiming at a comprehensive plan for organoid super-computing and Hard AI.

 

Fig 99: Game plan for ‘organoid intelligence’.

 

"The beautiful and pioneering aspect of this work rests on equipping the neurons with sensations – the feedback – and crucially the ability to act on their world," says one of the authors, Professor Karl Friston, a theoretical neuroscientist at University College London. "Remarkably, the cultures learned how to make their world more predictable by acting upon it. This is remarkable because you cannot teach this kind of self-organisation; simply because unlike a pet these mini-brains have no sense of reward and punishment," he says.

 

"Let's say upfront that this is not a brain. Not even close. The human brain has somewhere in the region of 86 billion neurons. Even a mouse has 70 million. The team suggest DishBrain has a similar number of neurons to a bumblebee." "Even if DishBrain did have 86 billion neurons, we just don't know enough about the human brain to be able to create anything resembling a human brain."

 

"The brain has a complexity that cannot currently be replicated in a dish. The sheer number of cell types, the dynamics of the synaptic input they receive, and the neuromodulatory processes are not easy to replicate," explains Associate Professor Lucy Palmer, the head of the Neural Networks Laboratory at the Florey Institute.

 

This illustrates that DishBrain was only the beginning and that we are rapidly entering an era of strong AI, so my claim that realising this hypothesis would require an organoid approach was prescient (Brofiga & Massobrio 2022).

 

“Nowadays, we are able to replicate many human neuronal types and peculiar brain regions in the form of engineered neuronal cultures, like neurospheroids or brain organoids, directly from embryonic and human induced pluripotent stem cells (hiPSC), and to couple them to a technological counterpart (i.e., chip).”

 

"A full-sized human brain is not able to compute to the level of a supercomputer, but numbers-wise, they’re not far off. Compared to a supercomputer, our human brain has a similar speed – around 1 exaFLOPS. Memory and storage units are still better in the human brain than supercomputers. For all human brains’ flaws, they’re also incredibly compact and low energy compared to a supercomputer.”

 

Already public discussion has run with the question whether such organoids are conscious, and the issue of trade-marking a definition of consciousness is red hot, so we all need to take stock now before it’s too late to even comment.

 

Although the company calls its system DishBrain, the neurons are a far cry from an actual brain, Kagan says, and show no signs of consciousness. The definition of intelligence is also hotly debated; Kagan defines it as the ability to collate information and apply it in an adaptive behaviour in a given environment.

 

"The Australian company behind ‘Dishbrain’ (which learnt to play Pong last year) is collaborating with scientists at Johns Hopkins University in the US, whose research paper has outlined how these ‘biocomputers’ could allow us to understand memory, learning and other integral parts of human understanding. They also suggest it could rival supercomputers or AI. The crux of the paper, published in Frontiers in Science is a term called ‘organoid intelligence’ – this is the new field that will study small groups of human neurons that can learn, remember, and even understand its environment."

 

The Australian DishBrain team also controversially used the word 'sentience' in their paper, and so started a long line of questions about what makes something sentient, what constitutes consciousness, and even intelligence. But this new paper suggests this is only the start.

 

“We are talking at the moment really about the very basics,” says Hartung. “It will take certainly many years before we reach the intellectual capacity of even a small animal.”

 

Fig 99b: A genetically engineered bacterial network (colored red) connected around a brain organoid (colored blue; the intensity corresponds to hypothetical neuron firings) by means of a bio-sensor composed of a synthetic interface device made of graphene (green) and golden quantum dots (yellow) to deliver natural bio-signals to the organoid. The bacteria have the role of transducing dynamic patterns from the Amazon Rainforest to the organoid (Pereira et al. 2023).

 

Pereira et al. (2023) outline the following approach:

 

We predict the emergence of conscious experiences in the first-person perspective of the brain organoid, motivating the system to compose creative works, recombining the dynamic patterns previously presented. … In this context, a scientific revival emerges, which seeks not a return to a primitive state, but a transcendence of the human-nature relationship mediated by the very intelligence of the machine.

 

Enable the use of the brain organoid in “post-humanist” projects for the preservation of sentience, both demonstrating the mentality intrinsic to nature and contributing to human “reconnection” with the information richness of nature; … In our foresight, Sentiomics has the potential to be the next step in a sequence of developments for the study and promotion of biodiversity and natural forms of mentality, moving forward in relation to successful projects of identification and preservation of genomes and proteomes.

 

Their approach is a two-phase one, a little different from Poznanski's hard-core panexperiential materialism.

 

Ways of feeling are studied in two modalities:

(A) As the universal set of patterns of Sentience, which we call Sentiomics;

(B) As species-specific and individually different sets of qualitative subjective experiences,  which we call Qualiomics.

 

Qualiomics is, of course, a difficult issue for conventional science, as stated in the "hard problem of consciousness", because it leads to the much-discussed distinction of first- and third-person perspectives. There is an important difference between the capacity of feeling and emotion: while the first is a basic phenomenon possibly present in (almost) all living systems, the second is a higher-order phenomenon. The study of conscious emotions belongs to the domain of Qualiomics, a field of knowledge constructed with first-person perspective approaches. Sentiomics is more amenable to a scientific (empirical, experimental, and hetero-phenomenological) treatment than Qualiomics.

 

The problem-solving dynamics of in vitro neuronal networks have been found to approach criticality in game-solving challenges, similar to the dynamics attributed to conscious brain states; however, in the 'resting' state the network is by contrast subcritical, unlike conscious brains, where the resting state is accompanied by active rehearsing of impending crises. Habibollahi et al. (2023) used an in vitro neural network of cortical neurons that was trained to play a simplified game of 'Pong' (Kagan et al. 2022) to demonstrate Synthetic Biological Intelligence (SBI): "We demonstrate that critical dynamics emerge when neural networks receive task-related structured sensory input, reorganising the system to a near-critical state. Additionally, better task performance correlated with proximity to critical dynamics."

 

Fig 99c: Avalanches are utilised to examine the criticality metrics in the neuronal network's activity patterns, to identify the working regime of each recording as sub-, super-, or near-critical. The same measures of criticality are used to cluster the recordings into two groups, Gameplay and Rest. The recorded neuronal activities are then employed to extract three metrics of criticality – Branching Ratio (BR), Deviation from Criticality Coefficient (DCC), and Shape Collapse error (SC error) – which are found to move towards the critical point during Gameplay and further from it during Rest (Habibollahi et al. 2023).
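To make the first of these metrics concrete – a hedged sketch, not the authors' analysis code – the branching ratio can be naively estimated from binned event counts as the mean ratio of activity in each time bin to that in the previous bin, with BR ≈ 1 indicating near-critical, self-sustaining dynamics; the example count sequences below are hypothetical:

```python
def branching_ratio(counts):
    """Naive branching-ratio estimate from binned event counts: the mean
    ratio of descendants (bin t+1) to ancestors (bin t), taken over bins
    with nonzero activity. BR ~ 1 suggests near-critical dynamics;
    BR < 1, subcritical (activity dies out); BR > 1, supercritical."""
    ratios = [counts[t + 1] / counts[t]
              for t in range(len(counts) - 1) if counts[t] > 0]
    return sum(ratios) / len(ratios)

# Hypothetical binned spike counts from two recordings:
subcritical   = [16, 8, 4, 2, 1]   # each bin halves: BR = 0.5, activity decays
near_critical = [5, 5, 5, 5, 5]    # activity sustained: BR = 1.0
print(branching_ratio(subcritical), branching_ratio(near_critical))
```

Published estimators correct this naive ratio for subsampling and bin-size bias, but the intuition is the same: Gameplay recordings drift towards BR ≈ 1, Rest recordings away from it.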

 

The human brain has taken 3 billion years to evolve. We are the climax of intelligent life in the universe as we know it. We are already being overtaken by soft AI, with concern about the imminent tipping point caused by chatGPT. People are losing employment and a meaningful role in human society. We are trashing the biosphere from which we evolved, with no real confidence of a living future. Is spawning a synthetic world made of our own brain cells the right course of action, when we still can't agree on what consciousness actually is? I repeat my concern that this is a critical ethical issue we all need to understand and take a clear position on. Do we really want to embark on a Brave New World of "Strong AI", in which we are taken over by supercomputing human-brain-cell Frankensteins? What kind of Tragedy of the Commons (Hardin 1968) of consciousness is this potentially invoking? The use of human brain cells to make supercomputers and/or Hard AI needs to be comprehensively assessed beforehand for ethical use, just as human germ-line genetic modification needs to be. We need to determine in advance, if "organoid intelligence" is a feasible biotechnology, whether it is actually desirable and will help protect the future of Homo sapiens.

 

Panpsychism and its Critics

 

This perspective naturally leads towards panpsychism, the idea that the fundamental constituents of the universe –i.e. quanta – have both a subjective existence and objective behaviour, just as they have both a wave and particle aspect physically. We can't see this subjective existence or "isness" directly, just as we have difficulty seeing one another's consciousness directly, so objective behaviour becomes the default core description. However we know that the quantum wave function shapes where each particle ends up in a way which remains unpredictable for a single quantum and only becomes determined in the average, in terms of the probability (amplitude squared) of the wave function.  This individual idiosyncrasy of a single quantum, when viewed as a particle within its wave function could be interpreted as its free will, as its location in the probability space modulated by the wave function amplitude is completely arbitrary and unbiased, just as it is able to have a determined position in the pilot wave theory if the Feynman implications of particle creation and annihilation are ignored. Likewise one could interpret its consciousness as its integrated “awareness” of the universal quantum entanglement through its wave function. One could thus conclude that the consciousness of the observer is apperceptive of the free-will of the quantum particle, since we are not asserting that the observer is applying their will to determine that Schrödinger's cat is alive or dead, but simply that our subjective consciousness is perceiving it to be in one of the two states. In either case we are dealing with what appears to be subjectively conscious observation of a quantum displaying psychic behaviour exerting an equivalent of volitional will masked by irreducible randomness.
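The point that a single quantum outcome is irreducibly unpredictable, while the ensemble converges to the Born-rule probability (amplitude squared), can be sketched in a few lines – a toy stdlib simulation of the statistics, not a claim about mechanism; the amplitudes are hypothetical:

```python
import random

random.seed(0)
p_alive = 0.7  # Born rule: |amplitude|^2 for the "alive" branch of the cat state

# A single collapse is irreducibly unpredictable; this one outcome carries
# the "idiosyncrasy" of the individual quantum within its wave function:
one = "alive" if random.random() < p_alive else "dead"

# Only the ensemble frequency converges to the wave-function probability:
n = 100_000
freq = sum(random.random() < p_alive for _ in range(n)) / n
print(one, freq)  # freq lands near 0.70; the single outcome is arbitrary
```

Nothing in the statistics distinguishes "irreducible randomness" from the idiosyncratic "free will" the passage entertains: both predict exactly this pattern of single-trial unpredictability with ensemble convergence, which is why the interpretation is underdetermined by observation.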

 

Panpsychism need not apply to an arbitrary physical object such as a spoon (Goff 2019, Seth 2021a), where there is no manifest form of active behaviour one can associate with the object. It can be associated with single quanta, where idiosyncratic quantum uncertain behaviour is manifest. Panpsychism might also be associated with other adventitious quantum events such as evolutionary mutation, and might become manifest in edge-of-chaos quantum processes in the open environment, where chaos can lead to further entanglements (Chaudhury et al. 2009, Steck 2009) that are not subject to the suppression of chaos noted in closed quantum systems. Evolution is particularly sensitive because adventitious mutations form a chain of idiosyncratic single collapses in sequence, in which no convergence to the probability interpretation actually takes place.

 

We have already seen, in Chris Koch’s description of IIT, his use and then rejection of panpsychism as a basis for introducing integrated information systems as an alternative. However, Symbiotic Existential Cosmology makes a similar analysis of diverse natural systems, extending beyond living animals, that is consistent with a unique form of panpsychism which does not have the problems cited by Chris Koch or Anil Seth.

 

A succinct account of the emergence of subjective consciousness from quantum panpsychism runs from individual quanta, through edge-of-chaos processes with quantum sensitivity due to the butterfly effect, biogenesis and prokaryotes, to emergent eucaryote cells and organisms. I shall associate consciousness as such only with the discrete transition to coherent excitability of single cells at the eucaryote endosymbiosis, and with the evolution of this into the coordinated excitability of organismic nervous systems, in a clear-cut biological model of subjective consciousness. This dispatches Seth's "combination problem" – how to combine small conscious entities such as quanta into larger ones – mischaracterised as a problem of panpsychism's own making, because the types of coordination are a product of physical neurodynamical processes acting as boundary conditions upon the complementary subjective conscious aspect.

 

Anil Seth, in “Being You” (2021b) provides a provocative account of the “exhilarating new science of consciousness”.

 

When asked if we will ever fully understand consciousness, and if we do what will that mean for our understanding of ourselves and our place in the world, he says:

 

 “It’s a very good question, but it’s a hypothetical situation. The reason I am hesitating is that some people who are new to the idea of scientifically explaining consciousness feel threatened by it. … This attitude is especially true when you come to topics such as free will. People say “But no, I decided what I want to do””, claiming this is a residue of the age-old belief in human exceptionalism, that we are at the centre of the universe and distinct from all the other creatures. “Having got rid of those exceptionalist ideas, I think that the picture of the universe is infinitely richer, more beautiful, more rewarding.” (Dixon 2022).

 

However this is incorrect: Darwin’s view was that free will spanned the metazoans “down to the polype”. There is thus simply no connection between human exceptionalism and free will, and the claim that a true understanding of consciousness shows free will to be an illusion does not follow. He nevertheless has an insightful view of the evolutionary basis of consciousness within nature, with which I agree:

 

“We're going through this transition where we will begin to understand consciousness as part of the wider tapestry of nature. Now that is threatening if you're still hanging on to your experience of being you as something apart from nature, separate from it. But I think that's exactly the way Copernicus and Darwin were ultimately incredibly enriching. It will be, and it already is, incredibly enriching to understand consciousness within the wider patterns of the universe and the natural world” (Dixon 2022).

 

But Copernicus is here conflated with Darwin. The extended view of subjective consciousness as a privileged view appears to be cosmologically accurate, consciousness being a climax phenomenon providing precisely such a privileged view, whereas what Seth is saying is that the analytical view of objective science has revealed nature’s true and confounding detail to the exclusion of subjective experience. While that may be true of religious cosmologies such as the Sabbatical Creation and Heaven and Hell, this view of the exclusive primacy of objective empiricism is fundamentally incorrect.

 

Seth cites Thomas Nagel as a basis for his naturalistic materialism, who in "What Is It Like to Be a Bat?" (1974) contended that while humans could never experience what a bat experiences, there would nevertheless be something it was like for the bat to be a bat, thus invoking subjective phenomenology as part of the discourse on consciousness.

 

However, in this he cites the brain as a “complex prediction machine rather than a mere computer”. This is insightful because it recognises the key function of consciousness shared by all animals – predicting existential threats and sources of opportunistic hunting, feeding and sex through environmental prediction – which is an established neurophysiological fact. But it is still exploring animal consciousness as an implicitly mechanistic phenomenon, which he extends to three key areas: levels of consciousness, the content of consciousness and the self.

 

This then tallies with his research approach: to set aside the hard problem of why subjective consciousness exists at all, if a prediction machine can do the job as well or better, in favour of the easier problems of relating brain functionality to states of mind through experimental neuroscience. According to his “real problem of consciousness”, the primary goals of consciousness science are to explain, predict and control the phenomenological properties of conscious experience. In short, addressing the real problem requires explaining why a particular pattern of brain activity, or other physical process, maps to a particular kind of conscious experience, not merely establishing that it does.

 

This is something well established in neuroscience, as much of the research on psychedelic states reviewed in this work attests, for one of the most complex and difficult of these states to assess. However, correspondences between brain states and conscious states do not explain whether the brain states cause the conscious states, and in particular come nowhere near empirically establishing that conscious volition, or free will, is merely a subjective delusion generated by the causal function of a prediction machine.

 

This approach leads to a series of mantras such as “I predict therefore I am”, implying that conscious mental states are just controlled hallucinations (Seth 2018) evolved to predict circumstances and are thus not real, and that the self is just a construct having no intrinsic or even volitional meaning or value. Yes, we know conscious experience is an internal model of reality constructed by the brain to make sense of the world, but although it is an evolved model, it is a superbly veridical model enhancing reality, which outside is an indecipherable flux of photons, atoms, electrons and other quanta having no phenomenal characteristics apart from mass, wavelength and/or position and energy and/or time.

 

These hallucinogenic conclusions simply don’t follow; they stylistically devalue veridical experience and create a mystique of consciousness research as successfully unravelling the subjective foundations of our existential condition in favour of an occluded, albeit sophisticated, mechanism. In the absence of a solution to the hard problem, this is a dangerous appeal to promissory materialism, which diminishes and invalidates the human experience of the natural reality we depend on to survive as a species.

 

Seth (2021a), in critiquing panpsychism, advances the case that the success of materialistic science is based on explanation, prediction, and control (EPC), the criteria by which many scientific enterprises are assessed, thus reducing biological 'vitalism' in a demystifying dissolution into molecular biology. Goff has countered that some scientific advances, such as Darwin's theory of evolution, “emerged from a dramatic insight, rather than incremental dissolution”. But the objection to EPC is fundamental because, at the very climax of biology, neuroscience currently has no idea how to solve the hard problem, or how the easy problems might be combined to evoke consciousness either. Goff argues that quantitative science does not capture the qualitative properties characteristic of subjective qualia. The intrinsic difficulty with Seth’s "real" problem of consciousness – how to distinguish different types of qualia, e.g. red and blue sneakers – is that it completely fails to address the root question of subjectivity, which is by nature entirely different from the localisable, analysable, distinguishable and separable properties of objective reality, and which arises in both quantum observation in physics and the hard problem in neuroscience in complementary ways. The cosmology deals with this by accepting root primitive subjectivity, which is then expanded into sentient attentive consciousness with the eucaryote endosymbiosis, and then becomes a tightly-coupled society of neurons subjected to boundary filter conditions imposed by cerebral neurodynamics to provide adaptive context. Thus subjectivity is intrinsic, while the details of qualia are imposed by the boundary conditions described by empirical neuroscience.

 

But Seth’s final criticism is that “Worst of all for panpsychism is that it is not itself testable, and that it does not lead to testable predictions”. The problem is not testability as such, but how to make a test in a subjective regime that is by definition not objectively observable by others except through their demeanour and behaviour. This claim shows an inability to determine appropriate criteria for subjective testability. Legal decisions do not depend only on circumstantial (physical) evidence, but on sworn conscious testimony of a veridical nature. While such testing may be difficult for a single photon, which can report only through its behavioural trajectory, it is certainly possible, and accepted scientifically, at the high end of the scale in human subjective reports, each of which counts as a statistically verifiable data point, and by the mutual affirmation test invoked by the cosmology. However, just what the ultimate nature of conscious experience is, in the cosmology of mental states illustrated in fig 107, is as yet not fully characterised, not least due to legislation against psychedelics.

 

This problem is significant. Albert (1992, 82-3), in the context of quantum measurement, cites the objection to consciousness collapsing the wave function that arises from imprecision about what consciousness actually is: “How the physical state of a certain system evolves (on this proposal) depends on whether or not that system is conscious; and so in order to know precisely how things physically behave, we need to know precisely what is conscious and what isn’t.”

 

We have also discovered that quantum entanglement between particles is both critical and universal to how the universe works.  In special relativistic quantum theories, wave functions are coupled in both directions in time, with advanced and retarded solutions providing handshaking between future absorbers and past emitters (King 1989). This is evidenced in the Wheeler delayed choice experiment, fig 74, confirmed by communication between satellites in Earth orbit (Vedovato et al. 2017). Multi-particle entanglement is just the tip of the iceberg, because even in a one quantum wave function, the particle can be detected only once in its wave function whether it occurs at earlier or later times, so collapse of the wave function has to occur simultaneously throughout past and future space-time.

 

In a trend that indicates just how inscrutable the “well” of quantum entanglement between two quantum systems can be, a paper on quantum complexity theory (Ji et al. 2020) shows that it is impossible to calculate the amount of correlation that two quantum systems can display across space when entangled (Castelvecchi 2020). The work concerns a game-theory problem, with a team of two players who are able to coordinate their actions through quantum entanglement, even though they are not allowed to talk to each other. This allows both players to ‘win’ much more often than they would without quantum entanglement. But the paper concludes that it is intrinsically impossible for the two players to calculate an optimal strategy. This implies that it is impossible to calculate how much coordination they could theoretically achieve. Thus there is no algorithm that is going to tell you what is the maximal violation you can get in quantum mechanics.

 

The Crack between Subjective Consciousness and Objective Brain Function

 

In this respect, it is pertinent to quote Popper and Eccles’ (1984, 96) coining of the phrase “promissory materialism”:

 

the new promissory materialism accepts that, at the present time, materialism is not tenable. But it offers us the promise of a better world, a world in which mental terms will have disappeared from our language, and in which materialism will be victorious. The victory is to come about as follows. With the progress of brain research, the language of the physiologists is likely to penetrate more and more into ordinary language and to change our picture of the universe, including that of common sense. So we shall be talking less and less about experiences, perceptions, thoughts, beliefs, purposes and aims; and more and more about brain processes, about dispositions to behave, and about overt behaviour. In this way, mentalist language will go out of fashion and be used only in historical reports, or metaphorically, or ironically. When this stage has been reached, mentalism will be stone dead, and the problem of mind and its relation to the body will have solved itself.

 

We thus take the obvious foundational realities of existence, consciousness and volition, upon which we depend for our sanity and survival, and turn our empirical experience into a vacuum: ablated by the contrivance that biological constraints and mechanistic physical laws, which together could be the natural complement of existential consciousness, instead unravel all the actuality of existence as a descriptive illusion. We thus tell ourselves an arcane story that existence itself is just a thermodynamic constraint, neutralising our very agency to do anything meaningful, spontaneously imaginative, creatively transformative or merely good, as in Bertrand Russell’s dire warning.

 

Conscious: Etym. Latin conscius ‘knowing with others or in oneself’ (from conscire ‘be privy to’) + -ous

 

When we turn to the actual definition of consciousness e.g. in Merriam-Webster we find that essentially ALL the definitions of consciousness are dealing with subjective experience!

 

Definition of consciousness

1a : The quality or state of being aware especially of something within oneself.

1b : The state or fact of being conscious of an external object, state, or fact

1c : Awareness especially : concern for some social or political cause:

                  "The organization aims to raise the political consciousness of teenagers."

2 : The state of being characterized by sensation, emotion, volition, and thought : mind.

3 : The totality of conscious states of an individual.

4 : The normal state of conscious life.    “he regained consciousness”.

5 : The upper level of mental life of which the person is aware as contrasted with unconscious processes.

 

Francis Crick and Christof Koch acknowledge in Crick’s words that "Consciousness is the major unsolved problem in biology”, in his foreword to Koch’s (2004) "The Quest for Consciousness”. Koch (2018) in “What Is Consciousness?”, makes clear in his first sentence that: “Consciousness is everything you experience” thus acknowledging that it is the sum total of subjective experience. Koch makes clear in his discussion that their strategy is rather to define the NCC or neural correlates of consciousness, equivalent to the various easy functional problems of consciousness, deferring the hard problem of exactly what subjective consciousness is until these problems are solved, in the hope they will address the elephant in the room. But correlation is NOT causation, so an NCC doesn’t imply the brain is causally closed.

 

Chris Koch (2020) unveils another defensive tactic in discussing the status of near death experiences, admitting physical materialism is just an assumption, but claiming it has a priori evidential weight, requiring “extraordinary, compelling, objective evidence to the contrary” to overrule it, based purely on its past successes in science and technology:

 

“I accept the reality of these intensely felt experiences. They are as authentic as any other subjective feeling or perception. As a scientist, however, I operate under the hypothesis that all our thoughts, memories, percepts and experiences are an ineluctable consequence of the natural causal powers of our brain rather than of any supernatural ones. That premise has served science and its handmaiden, technology, extremely well over the past few centuries. Unless there is extraordinary, compelling, objective evidence to the contrary, I see no reason to abandon this assumption.”

 

For all Chris’s charming romantic approach to reductionism (Koch 2012), this hypothesis underscores the dishonesty of neuroscientific materialism, in that he should feel the need to adopt this position, because it places an illegitimate test on reality. We HAVE to assume causal closure of the universe in the context of brain function, because of the historical success of classical deterministic science in other, simpler areas: a completely unestablished, admitted assumption is arbitrarily declared to be true under an impossible burden of proof to establish the contrary – a specific causal violation, which we know to be concealed in the edge-of-chaos dynamics correlated with the subjectively conscious condition.

 

How then does neuroscience turn the tables on this central signature of subjective existence, to claim it is exclusively a functional aspect of brain processing, however plausible it might seem, knowing we are biological beings with brains?

 

Gamez (2014) in “The measurement of consciousness: a framework for the scientific study of consciousness” makes this process clear by defining a set of interlocking definitions which a-priori define it to be so:

 

D1. A platinum standard system is a physical system that is assumed to be associated with consciousness some or all of the time.

A1. The normally functioning adult human brain is a platinum standard system.

A2. The consciousness associated with a platinum standard system nomologically supervenes on the platinum standard system. In our current universe physically identical platinum standard systems are associated with identical consciousness. (X is said to supervene on Y if and only if some difference in Y is necessary for any difference in X to be possible.)

A3. During an experiment on the correlates of consciousness, the consciousness associated with a platinum standard system is functionally connected to its c-reports about consciousness. (subjective reports)

L1. There is a functional connection between consciousness and the [neural] correlates of consciousness.

 

This series of claims simply defines consciousness to BE integrated brain function by philosophical supervenience, using the easy problems of consciousness based on simple functionality, as shown in fig 100, where the experimental subject has simply become their functional brain!

 

Fig 100: The subjectively conscious individual is reduced to a set of functional interactions monitored by calibrated instruments (Gamez 2014), in which the case report “I am conscious of a red hat” is reduced to an objective sentence thereby side-stepping the entire subjective nature of consciousness, in this case a-priori, without even citing any kind of brain process to support it.

 

If the brain were simply controlling the process and consciousness were just a marionette pulled by our brain strings, we would experience ourselves as passive travellers in the passenger seat of intentionality. This is clearly NOT the case, so we need to distinguish brain influence, e.g. as a boundary condition shaping but not fully determining outcomes, from the brain determining conscious states entirely. We need to acknowledge that subjective consciousness is the puppet master of edge-of-chaos instability, and certainly not conclude that the brain drives the boat of subjective consciousness in a causally determined manner.  This is consistent with the view of the brain as a functional filter on consciousness, participatory with subjective awareness in shaping the nature of conscious experience.

Note that, by citing psychedelics, I am also invoking a paradoxical objective biochemical role for inducing UNCONSTRAINED subjectivity, so this is a deep cosmological paradox we all need to take careful account of.

 

The Critically-Tuned Cortex at the Edge of Chaos

 

Associated with edge-of-chaos dynamics, the conscious brain is also cited as being in a state of self-organised criticality, fine-tuned in a phase transition between subcritical and supercritical states. This applies to many features, not just the neurodynamics but also the network connectivity.  Given only weak connections, neurons typically activate independently of each other, and the pattern of network activity is stochastic: the number of neurons activated in the last layer tends toward zero, because weak connections dampen the spread of activity. But too strong a set of connections is also counterproductive. One active, strongly connected neuron activates multiple other neurons, spreading activity until nearly all the neurons in the final layer are active. Activity gets through, but this saturation does not let you accurately guess whether one neuron in the first layer was activated or all of them; the amplification has washed away most of that information. The intermediate “critical” case avoids both pitfalls, and the proportion of neurons activated is roughly preserved across layers.
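The layer-to-layer propagation described above can be sketched as a simple branching process, in which each active unit activates each of its k downstream targets with probability p, giving a branching ratio sigma = k·p. This is an illustrative toy model, not a neural simulation; the functions `run_layers` and `mean_final` and all parameter values are hypothetical choices for the sketch.

```python
import random

def run_layers(n0, k, p, layers, rng):
    """Propagate activity through feedforward layers: each active unit
    activates each of its k downstream targets with probability p,
    so the branching ratio is sigma = k * p."""
    active = n0
    for _ in range(layers):
        active = sum(1 for _ in range(active * k) if rng.random() < p)
    return active

def mean_final(p, n0=100, k=10, layers=8, trials=100, seed=1):
    """Average final-layer activity over repeated trials."""
    rng = random.Random(seed)
    return sum(run_layers(n0, k, p, layers, rng) for _ in range(trials)) / trials

sub  = mean_final(0.05)  # sigma = 0.5: activity dies away
crit = mean_final(0.10)  # sigma = 1.0: activity roughly preserved
sup  = mean_final(0.15)  # sigma = 1.5: activity amplifies explosively
```

Only at the critical branching ratio sigma = 1 is the input level roughly preserved at the output, which is the information-transmission argument the text summarises.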

 

Fig 99d: Abelian simulation of the fractal avalanches in a sand pile.

 

Self-organised criticality (SOC) is a property of dynamical systems that have a critical point as an attractor. Their macroscopic behavior thus displays the spatial or temporal scale-invariance characteristic of the critical point of a phase transition, but without the need to tune control parameters to a precise value, because the system, effectively, tunes itself as it evolves towards criticality. The concept was put forward by Bak, Tang & Wiesenfeld (1987), and is considered to be one of the mechanisms by which complexity arises in nature. Its concepts have been applied across fields as diverse as geophysics, physical cosmology, evolutionary biology and ecology, economics, quantum gravity, sociology, and neurobiology.
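The sandpile of Fig 99d can be sketched in a few lines of the Bak–Tang–Wiesenfeld model: grains are dropped on a grid, any cell reaching four grains topples one grain to each neighbour (grains fall off the edge), and the resulting cascade of topplings is one avalanche. A minimal sketch; the function name `drop`, the grid size, and the central drop site are illustrative choices.

```python
def drop(grid, n, r, c):
    """Add one grain at (r, c) and relax the pile; return the avalanche
    size, i.e. the total number of topplings in the cascade."""
    grid[r][c] += 1
    size = 0
    unstable = [(r, c)] if grid[r][c] >= 4 else []
    while unstable:
        i, j = unstable.pop()
        if grid[i][j] < 4:
            continue            # already relaxed by an earlier topple
        grid[i][j] -= 4
        size += 1
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < n and 0 <= nj < n:   # grains fall off the edge
                grid[ni][nj] += 1
                if grid[ni][nj] >= 4:
                    unstable.append((ni, nj))
        if grid[i][j] >= 4:     # cell may still be unstable
            unstable.append((i, j))
    return size

n = 15
grid = [[0] * n for _ in range(n)]
sizes = [drop(grid, n, n // 2, n // 2) for _ in range(5000)]
```

In the self-organised steady state the avalanche sizes range over many scales, from single topplings to system-spanning cascades, without any parameter being tuned.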

 

 

John Beggs (2023) summarises the situation:

 

Over the last few decades, an idea called the critical brain hypothesis has been helping neuroscientists understand how the human brain operates as an information-processing powerhouse. It posits that the brain is always teetering between two phases, or modes, of activity: a random phase, where it is mostly inactive, and an ordered phase, where it is overactive and on the verge of a seizure. The hypothesis predicts that between these phases, at a sweet spot known as the critical point, the brain has a perfect balance of variety and structure and can produce the most complex and information-rich activity patterns. This state allows the brain to optimize multiple information processing tasks, from carrying out computations to transmitting and storing information, all at the same time.

 

The same sense of a critical brain being “just right” also explains why other tasks should be optimized. For example, consider information storage, which is driven by the activation of groups of neurons called assemblies. In a subcritical network, the connections are so weak that very few neurons are coupled together, so only a few small assemblies can form. In a supercritical network, the connections are so strong that almost all neurons are coupled together, which allows only one large assembly. In a critical network, the connections are strong enough for many moderately sized groups of neurons to couple, yet weak enough to prevent them from all coalescing into one giant assembly. This balance leads to the largest number of stable assemblies, maximizing information storage.

 

Fig 1000b: Complex is critical.  Three snapshots of the spin configurations at one moment in time for three temperatures (subcritical, critical and supercritical) from numerical simulations of the two-dimensional Ising model. Only at the critical temperature do systems exhibiting a second-order phase transition show the highly heterogeneous correlated domains seen, whereas both sub- and supercritical conditions result in homogeneous states (Shriki et al. 2013).

 

And this is not just theory or simulation: Experiments both on isolated networks of neurons and in intact brains have upheld many of these predictions (Shew et al. 2009, 2011) . Further, we have seen these benefits arise across many different species, in turtles  (Shew et al. 2015) , cats (Hahn et al. 2010) and even humans (Shriki et al. 2013). Most of these studies have focused on the outer part of the brain, known as the cortex, although some have included subcortical regions as well. Overall, the studies have shown that these networks operate near the critical point.

 

Similarly, when humans are sleep deprived, their brains become supercritical, although a good night’s sleep can move them back toward the critical point. It thus appears that brains naturally incline themselves (Chialvo 2010) to operate near (Ma et al. 2019) the critical point (Beggs 2019,2023), perhaps just as the body keeps blood pressure, temperature and heart rate in a healthy range despite changes to the environment. 

 

Shriki et al. (2013) used MEG to model self organised critical cascades:

 

Our results demonstrate that normal cortical activity in healthy human subjects at rest organizes as neuronal avalanches and is well described by a critical branching process. Theory and experiment have shown that such critical, scale-free dynamics optimize information processing. Therefore, our findings imply that the human brain attains an optimal dynamical regime for information processing.

 

The edge of chaos is a transition space between order and disorder that is hypothesised to exist within a wide variety of systems. This transition zone is a region of bounded instability that engenders a constant dynamic interplay between order and disorder. We have seen two examples in neurodynamics: firstly in figure 35 (rt), where the edge of chaos is cited as having higher Lempel-Ziv complexity than the normal waking state or coma and seizure, which lie on the ordered and chaotic sides of the edge peak; and secondly in figure 78(1), where Freeman’s dynamics of olfaction involves a sniff resulting in high-energy chaos whose energy falls on exhalation, either to enter an existing attractor representing a known smell or to cause a bifurcation forming a new attractor. Here the edge of chaos is tuned in time, with the chaotic phase ensuring the dynamic doesn’t get stuck in a rut of order, followed by descent into a lower-energy chaotic strange attractor representing the solution, in a similar manner to the thermodynamic annealing and slow cooling in Hopfield nets used to get a sub-optimal stochastic solution to potential landscape minimisations.

 

Devaney’s three axioms of chaos, which apply to both discrete and continuous non-linear dynamics, state:

(1)   Butterfly effect: Sensitive dependence on initial conditions

(2)   Periodic points are dense, and hence repelling, and/or there is a single dense orbit.

(3)   Topological transitivity: Mixing of small open neighbourhoods.
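Axiom (1), the butterfly effect, is easy to exhibit numerically. The sketch below uses the logistic map x → 4x(1−x), a standard chaotic example chosen here for illustration (it is not one of the systems discussed in the text): two trajectories differing initially by 10⁻¹⁰ diverge to order one within a few dozen steps.

```python
def logistic(x, steps, r=4.0):
    """Iterate the logistic map x -> r*x*(1-x), returning the trajectory."""
    traj = [x]
    for _ in range(steps):
        x = r * x * (1 - x)
        traj.append(x)
    return traj

a = logistic(0.2, 50)           # reference trajectory
b = logistic(0.2 + 1e-10, 50)   # perturbed by one part in 10^10
gap = [abs(p - q) for p, q in zip(a, b)]
```

The separation roughly doubles per step (the Lyapunov exponent of this map is ln 2), so the microscopic perturbation is amplified to macroscopic divergence well before step 50.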

 

Fig 99f: Examples of elementary 1-D 2-state CAs based on triplet of cells, include four types in Wolfram's classification: (1) Eventually fixed (136 = 10001000) (2) Periodic, often as a result of finite separated regions (73=01001001) (3) Chaotic with aperiodic patterns (30 = 00011110) and (4) Complex system potentially capable of universal computation (110 = 01101110).  Rule 90=01011010 also has root significance. In each time step all values are replaced by the exclusive or XOR of the two neighbouring values.

 

But there is a beautiful example in the most elementary discrete cellular automata of all – the 1-D, 2-state, 3-cell automata that Stephen Wolfram made the centrepiece of "A New Kind of Science". What is outstanding is not only that rule 30 is chaotic and has been proved to satisfy all three axioms above, but that rule 110, which is a Turing-complete universal computer, is also formally at the edge of chaos because it contains chaotic subsystems. It is impossible to look at the 1-D cellular automata without realising that, if universal computation arises at the edge of chaos and not in the regime of order, there is no hope for any claim to an order-based regime of ultimate good in a universe of intelligent systems. This doesn't mean life is bad or cursed by evil, because we exist through this centre of the cyclone called the incarnate mortal coil, but since we can't take it with us when we go, we need to respect the lives that follow in the best of all possible worlds called REALITY in the flesh. This is the ethic of collective immortality that brings us all together as one: the ethic of cosmology's chaotic edge.
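The elementary automata above can be evolved directly from the 8-bit rule number: each (left, centre, right) triplet, read as a 3-bit index, selects one bit of the rule, exactly as in Wolfram's numbering. A minimal sketch; the helper names `step` and `evolve` and the wrap-around boundary are illustrative choices.

```python
def step(cells, rule):
    """One update of an elementary CA: each cell's next state is the bit of
    `rule` indexed by its (left, centre, right) triplet read as a 3-bit
    number. Boundaries wrap around."""
    n = len(cells)
    return [(rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2
                      + cells[(i + 1) % n])) & 1
            for i in range(n)]

def evolve(rule, width=64, steps=32):
    """Evolve from a single seed cell, returning the list of rows."""
    cells = [0] * width
    cells[width // 2] = 1
    rows = [cells]
    for _ in range(steps):
        cells = step(cells, rule)
        rows.append(cells)
    return rows

rows30  = evolve(30)    # class III: chaotic triangle pattern
rows110 = evolve(110)   # class IV: edge of chaos, Turing complete
```

Printing the rows as blocks and spaces reproduces the familiar rule 30 and rule 110 diagrams; the same `evolve` call with rules 136, 73 or 90 gives the other classes mentioned in Fig 99f.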

 

Fig 99g: Left: Wolfram’s four classes of automata: Rule 136 is class I, rule 73 is class II, and rule 30 is class III and is proven chaotic in the full sense of Devaney (Cattaneo et al. 2000). According to Devaney's three axioms of dynamic chaos, Rule 30 displays (1) sensitive dependence on initial conditions (two initial configurations that differ only in a small number of cells rapidly diverge), (2) periodic configurations dense in the space of all configurations, according to the Cantor topology on the space of configurations (there is a periodic configuration with any finite pattern of cells), and (3) topological transitivity, or mixing of open sets (for any two finite patterns of cells, there is a configuration containing one pattern that eventually leads to a configuration containing the other pattern). Rule 110 is class IV, an edge-of-chaos universal computer, i.e. Turing complete, as suggested by Stephen Wolfram and proved by Matthew Cook (2004). It also stands at the edge of chaos in the sense that there exist subsystems on which this rule is chaotic in the sense of Devaney (Chen et al. 2012). Right: Toker’s (2022) classification of brain states – see fig 35 (right).

 

Also pivotal to the understanding of the edge of chaos is the prisoners' dilemma, where defection is pitted against cooperation and the temptation to defect is at the source of all social, religious and moral notions of mutual double-jeopardy and evil. On the one side, protest is essential in free societies to avoid tyranny, including tyranny of the majority; on the other side lies violent criminality, so there is no effective way to mount a universal ethic except at the edge of chaos.
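The one-shot dilemma can be stated in a few lines using the standard payoff ordering T > R > P > S; the particular numbers below are the conventional textbook values, an illustrative assumption. Defection strictly dominates for each player, yet mutual defection leaves both worse off than mutual cooperation – which is the whole dilemma.

```python
# Standard prisoner's dilemma payoffs with T > R > P > S
T, R, P, S = 5, 3, 1, 0   # temptation, reward, punishment, sucker's payoff
payoff = {('C', 'C'): (R, R), ('C', 'D'): (S, T),
          ('D', 'C'): (T, S), ('D', 'D'): (P, P)}

def best_reply(opponent):
    """Return the move maximising one's own payoff against a fixed opponent
    move. Defection strictly dominates: it pays more whatever they play."""
    return max('CD', key=lambda m: payoff[(m, opponent)][0])

mutual_c = payoff[('C', 'C')][0]   # each player's payoff if both cooperate
mutual_d = payoff[('D', 'D')][0]   # each player's payoff at the equilibrium
```

Since `best_reply` is 'D' against either move, mutual defection is the unique equilibrium, even though it pays each player less than mutual cooperation would.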

 

A variety of complex and chaotic signals show the signature of 1/f noise. “Noise” here means any stochastic, chaotic, or complex signal that has varying power across frequency scales. White noise has the same intensity at all frequencies, and each state is independent of the last. The integral of white noise becomes the drunkard's walk of Brownian motion: successive states are heavily dependent on past states and distribute with frequency as 1/f². Pink, or 1/f, noise, which is a half-integral, sits at the sweet balance between history and novelty. Music, for example, is commonly regarded as 1/f in both the frequency and amplitude domains. Related forms of brain analysis, such as multiscale entropy, also form a similar fractal measure of complexity in signal “noise”.
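The white/Brownian contrast can be checked directly: the running sum of independent Gaussian noise yields a 1/f² random walk, and the lag-1 autocorrelation cleanly separates the two regimes. An illustrative sketch; the helper name `lag1_autocorr`, the seed and the sample length are arbitrary choices.

```python
import random

rng = random.Random(42)
white = [rng.gauss(0, 1) for _ in range(20000)]   # independent samples

# Brownian (1/f^2) noise is the running sum of white noise
brown, total = [], 0.0
for w in white:
    total += w
    brown.append(total)

def lag1_autocorr(x):
    """Sample autocorrelation at lag 1: ~0 for independent samples,
    ~1 for a heavily history-dependent random walk."""
    n = len(x)
    mu = sum(x) / n
    var = sum((v - mu) ** 2 for v in x) / n
    cov = sum((x[i] - mu) * (x[i + 1] - mu) for i in range(n - 1)) / n
    return cov / var

r_white = lag1_autocorr(white)   # near 0: each step independent
r_brown = lag1_autocorr(brown)   # near 1: successive states entangled with history
```

Pink (1/f) noise sits between these extremes; it can be approximated, for instance, by summing white-noise sources updated at different rates, which is why it balances history against novelty.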

 

The correlation dimension, closely related to the fractal dimension on the multifractal spectrum (Schroeder 1992), also provides an estimate of the fractal complexity of dynamical strange attractors, and the 0-1 chaos test provides a view of chaotic and edge-of-chaos neurodynamics (Toker et al. 2020).
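The 0-1 test mentioned above is the Gottwald-Melbourne statistic: the time series drives a translation on the (p, q) plane, and the growth rate K of the mean-square displacement is near 1 for chaotic dynamics and near 0 for regular dynamics. A minimal sketch, run here on the logistic map as a stand-in for neural data (series length and parameter choices are illustrative):

```python
import math
import random
import statistics

def logistic(r, n, x=0.4, burn=100):
    """Logistic map time series x -> r*x*(1-x), discarding an initial transient."""
    out = []
    for i in range(n + burn):
        x = r * x * (1.0 - x)
        if i >= burn:
            out.append(x)
    return out

def _corr(xs, ys):
    """Pearson correlation coefficient."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = math.sqrt(sum((x - mx) ** 2 for x in xs) * sum((y - my) ** 2 for y in ys))
    return num / den

def zero_one_test(series, num_c=10, seed=1):
    """Gottwald-Melbourne 0-1 test: K near 1 indicates chaos, near 0 regular dynamics."""
    rng = random.Random(seed)
    N = len(series)
    mean = sum(series) / N
    ncut = N // 10
    Ks = []
    for _ in range(num_c):
        c = rng.uniform(math.pi / 5, 4 * math.pi / 5)  # random phase, avoiding 0 and pi
        p = q = 0.0
        ps, qs = [], []
        for j, x in enumerate(series, 1):
            p += x * math.cos(j * c)
            q += x * math.sin(j * c)
            ps.append(p)
            qs.append(q)
        ns = list(range(1, ncut))
        Ds = []
        for n in ns:
            msd = sum((ps[j + n] - ps[j]) ** 2 + (qs[j + n] - qs[j]) ** 2
                      for j in range(N - ncut)) / (N - ncut)
            # modified mean-square displacement: subtract a bounded oscillatory term
            Ds.append(msd - mean ** 2 * (1 - math.cos(n * c)) / (1 - math.cos(c)))
        Ks.append(_corr(ns, Ds))
    return statistics.median(Ks)   # median over c suppresses resonant outliers

K_chaotic = zero_one_test(logistic(4.0, 1000))   # fully chaotic regime
K_periodic = zero_one_test(logistic(3.5, 1000))  # stable period-4 regime
print("K chaotic ~ %.2f, K periodic ~ %.2f" % (K_chaotic, K_periodic))
```

A chaos level of 0.992, as in fig 99h, corresponds to a K statistic saturating near the chaotic end of this scale.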

 

Fig 99h: Correlation dimension of the eyes-closed resting state on caffeine, for 20,000 samples over 20 secs. On the same data, the 0-1 chaos test gave a uniform cortical chaos level with mean 0.992 and sd 0.005.

 

A Cosmological Comparison with Chalmers’ Conscious Mind

 

In espousing his philosophical view of naturalistic dualism David Chalmers’ central points in “The Conscious Mind” (1996) are as follows:

1. In our world, there are conscious experiences [which are irreducible to physical descriptions because subjectivity is categorically irreducible to any combination of functional inferences about the objective physical universe and/or the brain].

2. There is a logically possible [zombie] world physically identical to ours, in which the positive facts about consciousness in our world do not hold.

3. Therefore, facts about consciousness are further facts about our world, over and above the physical facts.

4. So materialism is false.

Rather than a philosophical view based on astute argument, I will take a complementary view of reality, embracing empirical observation for the objective physical aspect and empirical experience for the subjectively conscious volitional aspect. This means that empiricism carries direct evidential weight over logical discourse while preserving the empirical and theoretical basis of scientific inquiry and the veridical nature of existential experience.

 

In regard to the above, I support 1, 4 and 3 (for other reasons), but remain unconvinced about 2, because “logically possible” is a philosophical conclusion that lacks an empirical basis in nature. Given a broad acceptance of 1, on the basis that subjective phenomena are categorically different from any possible explanation in objective terms, it remains unclear that a universe without conscious experience can become manifest, since all our knowledge of the physical universe is gained through conscious experience of it. Nor is it empirically evident that such a “zombie” universe, if it did exist, could display properties identical to those of living ecosystems, since such a condition is unachievable.

 

David then takes a very cautious view, retreating to the very brink of materialism by asserting that consciousness is naturally supervenient on the physical, although not logically so, noting that this does not invoke Cartesian dualism:

 

So it remains plausible that consciousness supervenes naturally on the physical. It is this view — natural supervenience without logical supervenience — that I will develop.  … The arguments do not lead us to a dualism such as that of Descartes, with a separate realm of mental substance that exerts its own influence on physical processes.

 

David then effectively assumes, and later explicitly asserts, the causal closure of the physical universe as justification, on grounds of personal opinion rather than empirical evidence:

 

The best evidence of contemporary science tells us that the physical world is more or less causally closed: for every physical event, there is a physical sufficient cause. If so, there is no room for a mental "ghost in the machine" to do any extra causal work. A small loophole may be opened by the existence of quantum indeterminacy, but I argue later that this probably cannot be exploited to yield a causal role for a nonphysical mind.

 

I reject this point of view on the grounds that: (1) quantum reality consists of causal processes punctuated by quantum uncertainty and entanglement; (2) this is exacerbated by open-system quantum chaos, inducing further entanglements, because the kind of edge-of-chaos phase-coherence processing used by the brain becomes subject to butterfly-effect sensitivity at tipping points in conscious processing, where critical insights and decisions over uncertain outcomes are resolved. This has also invited a reappraisal of the exclusiveness of sufficient, rather than final, causes, because resolving quantum field problems, e.g. in the Feynman formalism, involves implicit information from the future absorbers. The classical view of efficient causality, central to the notion of classical causal closure, therefore remains unproven. To wager such a position in advance of scientific verification is a belief, not a description of nature.

 

Chalmers describes his position as a form of property dualism:

 

The dualism implied here is instead a kind of property dualism: conscious experience involves properties of an individual that are not entailed by the physical properties of that individual, although they may depend lawfully on those properties. Consciousness is a feature of the world over and above the physical features of the world.

 

The position I am advancing, involving a complementarity between the subjective mind at large and the physical universe, could also be described as a form of property dualism, but working in the scientific-existential paradigm, I describe it as asymmetric complementarity, following wave-particle and boson-fermion complementarity, and biological complementarities such as sperm and ovum. These are not considered to be “property dualistic”, as the complementarity is integral to the natural condition, or to the cosmological “design” as a whole in the case of cosmological symmetry-breaking.

 

For a design example, the four core quantum forces of nature display a particular type of broken symmetry (fig 71, lower left), which introduces a fractal design into the universe, resulting in biological tissues and brains as climax structural outcomes, without assuming any form of teleology, theistic or anthropic.

 

Chalmers then advances the “plausibility” of consciousness nevertheless having an [entirely] physical basis, generated by contingent laws of nature such as the biological functionality of brain processing:

 

It remains plausible, however, that consciousness arises from a physical basis, even though it is not entailed by that basis. The position we are left with is that consciousness arises from a physical substrate in virtue of certain contingent laws of nature, which are not themselves implied by physical laws. This position is implicitly held by many people who think of themselves as materialists. … Some people will think that the view should count as a version of materialism rather than dualism, because it posits such a strong lawful dependence of the phenomenal facts on the physical facts, and because the physical domain remains autonomous.

 

I shall reject this view on multiple grounds: (1) it remains unestablished that quantum cosmology is physically autonomous as a whole, or that the universe can become manifest without conscious observers; (2) volitional autonomy is evidential to the conscious subject, but no causal physical process, such as a machine, displays autonomy as such; (3) it results in a contradictory treatment of the subjective realm, in which Chalmers asserts that consciousness is irreducible but later, as we shall see, claims the phenomenal can be subtracted from volitional causality over the physical. Volition is manifest consciously as well as in behaviour, and Chalmers’ argument fractures the two, rendering the conscious awareness of volition a delusion and leaving its physical manifestation in consciously motivated behaviour with no causal basis. This is the classical materialist trap!

 

On the one hand we have the zombie establishing that consciousness phenomena are categorically independent of the physical, and on the other we have a similar argument making them subtractable from the causal, thus invoking a philosophical Catch-22.

 

Chalmers then indicates he will explore new fundamental properties and laws of consciousness, detailed in a major section of his work. Symbiotic existential cosmology solves this problem differently, by treating subjective phenomena as complementary manifestations of the physically dynamic properties of brain processing rather than as parallel laws in their own right. I reserve my position on Chalmers’ claim because it invokes a type of analysis that succeeded in the exploration of the physical world, where fermionic matter is granular, owing to the Pauli exclusion principle, leading to the fractal material complexity of matter and hence biology. It is unclear that such a “subjectively reductionistic” approach can succeed in the subjective realm, since subjectivity is not clearly subdividable in the same way, as is noted as a feature of Eastern philosophy:

 

To bring consciousness within the scope of a fundamental theory, we need to introduce new fundamental properties and laws.

 

Chalmers cites physicist Steven Weinberg looking towards an explanatory chain from fundamental laws:

 

In his book Dreams of a Final Theory (1992), physicist Steven Weinberg notes that what makes a fundamental theory in physics special is that it leads to an explanatory chain all the way up, ultimately explaining everything. But he is forced to concede that such a theory may not explain consciousness. At best, he says, we can explain the "objective correlates" of consciousness. "That may not be an explanation of consciousness, but it will be pretty close"

 

Chalmers then cites two possible outcomes, citing proto-phenomenal properties in passing as a possible option:

 

There are two ways this might go. Perhaps we might take experience itself as a fundamental feature of the world, alongside space-time, spin, charge, and the like. That is, certain phenomenal properties will have to be taken as basic properties. Alternatively, perhaps there is some other class of novel fundamental properties from which phenomenal properties are derived. Previous arguments have shown that these cannot be physical properties, but perhaps they are nonphysical properties of a new variety, on which phenomenal properties are logically supervenient. We could call these properties protophenomenal. Most of the time, however, I will speak as if the fundamental properties are themselves phenomenal.

 

This leads to seeking a parallel with the elegance of physical laws. I would question this approach, as the subjective is so fundamentally different from the objective that modelling subjective phenomena on the success of physical laws invokes a subjective reductionism, even if not through proto-phenomenal or proto-panpsychic means:

 

The case of physics tells us that fundamental laws are typically simple and elegant; we should expect the same of the fundamental laws in a theory of consciousness. … To capture the spirit of the view I advocate, I call it naturalistic dualism. It is naturalistic because it posits that everything is a consequence of a network of basic properties and laws, and because it is compatible with all the results of contemporary science.

 

He notes that this could instead be what one might call dual-aspect monism, citing matter and energy as alternatives, but not the complementary wave-particle aspects of quanta. This is problematic because matter and energy are not complementary but functionally equivalent, for example in terms of mass-energy equivalence:

 

I should also note that although I call the view a variety of dualism, it is possible that it could turn out to be a kind of monism. Perhaps the physical and the phenomenal will turn out to be two different aspects of a single encompassing kind, in something like the way that matter and energy turn out to be two aspects of a single kind.

 

In dealing with possible objections to his approach, Chalmers cites emergence as a foil still compatible with materialism, noting however that it has to exceed the usual limits, for example of biologically emergent properties. I agree with Chalmers on this:

 

Sometimes it is argued that consciousness might be an emergent property, in a sense that is still compatible with materialism. In recent work on complex systems and artificial life, it is often held that emergent properties are unpredictable from low-level properties, but that they are physical all the same. … If consciousness is an emergent property, it is emergent in a much stronger sense. There is a stronger notion of emergence, used by the British emergentists (e.g., Broad [1925]), according to which emergent properties are not even predictable from the entire ensemble of low-level physical facts. It is reasonable to say (as the British emergentists did) that conscious experience is emergent in this sense.

 

I agree with Chalmers here, rather than with his description of Searle’s position, although I admire Searle’s work:

 

Like me, Searle (1992) holds that consciousness is merely naturally supervenient on the physical. He allows that a zombie replica is logically possible, holding that consciousness is merely caused by states of the brain. But he denies that this is a variety of dualism, even property dualism. This might seem to be a mere terminological issue, but Searle insists that the ontological status of consciousness is the same as that of physical features such as liquidity, so the issue is not merely terminological.

 

The introduction to Searle's (1992) work notes:

 

Beginning with a spirited discussion of what's wrong with the philosophy of mind, Searle (1992) characterizes and refutes the philosophical tradition of materialism. But he does not embrace dualism. All these "isms" are mistaken, he insists. Once you start counting types of substance you are on the wrong track, whether you stop at one or two. In four chapters that constitute the heart of his argument, Searle elaborates a theory of consciousness and its relation to our overall scientific world view and to unconscious mental phenomena. He concludes with a criticism of cognitive science and a proposal for an approach to studying the mind that emphasizes the centrality of consciousness to any account of mental functioning.

 

We now come to the crux of Chalmers' problem – the relationship between subjective conscious experience, volitional will and causal efficacy over the physical world:

 

A problem with the view I have advocated is that if consciousness is merely naturally supervenient on the physical, then it seems to lack causal efficacy. The physical world is more or less causally closed, in that for any given physical event, it seems that there is a physical explanation (modulo a small amount of quantum indeterminacy). This implies that there is no room for a nonphysical consciousness to do any independent causal work. It seems to be a mere epiphenomenon, hanging off the engine of physical causation, but making no difference in the physical world.

 

Chalmers notes two responses to epiphenomenalism, the term Thomas Huxley (1874) coined after observing that frogs with cranial ablations still managed to jump out of a pool of water: “Huxley (1874) advocated such a view, but many people find it counterintuitive and repugnant.”

 

Chalmers’ two grounds, “counterintuitive” and “repugnant”, are pejorative of conscious experience and fail to invoke the full scope of the grounds for the invalidation of epiphenomenalism. The fact that something is counterintuitive or repugnant obviously doesn’t mean it is not true. Newton’s laws of motion were at first seen as counterintuitive, but are true nevertheless in their domain of application. Likewise disease and death are repugnant but universal realities of existence. These, however, are the incorrect criteria: neither carries the force of veridical affirmation from empirical experience of our volitional actions and decisions, which is the critical point. Epiphenomenalism is invalidated by empirical experience because it fails the mutual affirmation test of subjectively conscious volitional agents. An argument, however astute, doesn’t carry the water of conviction against empirical observation, or experience.

 

It may seem “counterintuitive” or even “pretentiously naive” to trade off mutual affirmation between conscious biological organisms against the assumed physical causality of the entire universe, but this is a valid cosmological position, given the fact that, as far as we know, the physical manifestation of the universe can only be verified by conscious perception of its existence. The alternative to conscious volition is experiential and cosmological catatonia.

 

Chalmers acknowledges the mysterious nature of causation, rejecting claims of a full formal epiphenomenalism and citing the possibility of a breakdown in our classical notions of causality:

 

In responding to this, I will pursue a two-pronged strategy. First, it is not obvious that mere natural supervenience must imply epiphenomenalism in the strongest sense. It is clear that the picture it produces looks something like epiphenomenalism. Nevertheless, the very nature of causation itself is quite mysterious, and it is possible that when causation is better understood we will be in a position to understand a subtle way in which conscious experience may be causally relevant. … On the second prong, I will consider the reasons why epiphenomenalism might be found unpalatable, and analyze their force as arguments.

 

In questioning causality, Chalmers first cites Humean causation, according to which all it is for A to cause B is for there to be a uniform regularity between events of type A and events of type B, or a slightly more restrictive form in which any nomic (or lawful) connection suffices. Chalmers sees these as inadequate and demurs that many conscious individuals will attribute such correlations to causes when they may not be. He also fairly rejects overdetermination, the notion that both subjective and objective causes can come to bear in parallel on the same effect or behaviour.

 

Chalmers acknowledges that there are two classes of facts that do not supervene logically on particular physical facts, namely facts about consciousness and facts about causation, and that these two may be linked:

 

A third strategy rests with the very nature of causation itself. We saw in Chapter 2 that there are two classes of facts that do not supervene logically on particular physical facts: facts about consciousness and facts about causation. It is natural to speculate that these two failures might be intimately related, and that consciousness and causation have some deep metaphysical tie.

 

A proposal like this has been developed by Rosenberg (1996), who argues that many of the problems of consciousness are precisely paralleled by problems about causation. He argues that because of these parallels, it may be that experience realizes causation, or some aspects of causation, in the actual world. On this view, causation needs to be realized by something in order to support its many properties, and experience is a natural candidate. If this is so, it may be that it is the very existence of experience that allows for causal relations to exist. Of course, this proposal is extremely speculative, and faces some problems. For a start, it seems to lead to a version of panpsychism, the view that everything is conscious, which many find counterintuitive.

 

This is an extremely important point, because the only evidence we have for classical causality is through our conscious experience of the universe in the affairs of the world around us. The laws of physics, both classical and quantum, contain no arrow of time upon which sufficient causes can be based, and our only theoretical evidence for one comes from the stochastically driven second law of thermodynamics, with quantum entanglement having the spooky implications of retrodiction, also imputing final causes. It is thus true (1) that the only way the physical universe actually becomes manifest is through our conscious experience of it, and (2) that the laws of quantum mechanics lead to superimposed quantum states and the potential for Schrödinger cat paradox multiverses, which our conscious experience may play a key part in resolving. But as Chalmers points out, this leads to panpsychism:

 

There is of course the threat of panpsychism. I am not sure that this is such a bad prospect — if phenomenal properties are fundamental, it is natural to suppose that they might be widespread — but it is not a necessary consequence.  … An alternative is that the relevant properties are protophenomenal properties. Either way, this sort of intimate link suggests a kind of causal role for the phenomenal.

 

Nevertheless he concedes that his view of natural supervenience feels epiphenomenalistic. However, he then mounts an attempt to marginalise the consequences:

 

Some people … may be tempted by an interactionist variety of dualism, in which experience fills causal gaps in physical processes. Giving in to this temptation raises more problems than it solves, however. For a start, it requires a hefty bet on the future of physics, one that does not currently seem at all promising; physical events seem inexorably to be explained in terms of other physical events. It also requires a large wager on the future of cognitive science, as it suggests that the usual kinds of physical/functional models will be insufficient to explain behavior. But the deepest problem is that this view may be no better at getting around the problems with epiphenomenalism than the view with causal closure, for reasons I will discuss shortly  [the assumed ability to subtract the phenomenal from the causal].

 

He then mounts a critique of the ability of the quantum universe to alter the classical causality of brain states, on two key fronts (1) quantum uncertainty and (2) collapse of the wave function:

 

(1) The only form of interactionist dualism that has seemed even remotely tenable in the contemporary picture is one that exploits certain properties of quantum mechanics. There are two ways this might go. First, some have appealed to the existence of quantum indeterminacy a nonphysical consciousness might be responsible for filling the resultant causal gaps, determining which values some physical magnitudes might take within an apparently "probabilistic" distribution (e.g., Eccles 1986). Although these decisions would have only a tiny proximate effect, perhaps nonlinear dynamics could amplify these tiny fluctuations into significant macroscopic effects on behavior. … This is an audacious and interesting suggestion, but it has a number of problems. First, the theory contradicts the quantum-mechanical postulate that these microscopic "decisions" are entirely random, and in principle it implies that there should be some detectable pattern to them—a testable hypothesis. Second, in order that this theory allows that consciousness does any interesting causal work, it needs to be the case that the behavior produced by these microscopic decisions is somehow different in kind than that produced by most other sets of decisions that might have been made by a purely random process.

(2) A second way in which quantum mechanics bears on the issue of causal closure lies with the fact that in some interpretations of the quantum formalism, consciousness itself plays a vital causal role, being required to bring about the so-called "collapse of the wave-function." This collapse is supposed to occur upon any act of measurement; and in one interpretation, the only way to distinguish a measurement from a non-measurement is via the presence of consciousness. This theory is certainly not universally accepted (for a start, it presupposes that consciousness is not itself physical, surely contrary to the views of most physicists), and I do not accept it myself, but in any case it seems that the kind of causal work consciousness performs here is quite different from the kind required for consciousness to play a role in directing behavior. It is unclear how a collapse in external perceived objects allows consciousness to affect physical processing within the brain; such theories are usually silent on what happens to the brain during collapse. And even if consciousness somehow manages to collapse the brain state, then all the above remarks about apparently random processes and their connection with behavior still apply.

 

Both these questions are extensively addressed in this chapter of the monograph. There is no empirical evidence that brain processes are causally closed. Shepherd (2017) points out that the neuroscientific threat to free will has not been causally established, particularly in the light of Schurger et al. (2012, 2015), also discussed herein. It is illegitimate to assume that any connectedness between subjective and objective in quantum uncertainty would result in gross, or even detectable, variations from pseudo-randomness, particularly if the relationship is one complementary to the physical universe as a whole. We already know that, in the absence of wave function collapse, third-party quanta invoke compounded entanglements. Many such complex interactions, particularly integral transforms involving a convolution of multiple components, are likely to induce pseudo-random statistics rather than distortions of the gross statistics. Indeed, two entangled particles are able to display correlations violating Bell's inequality while the statistics of each appears random to an observer measuring only one.
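The last point can be illustrated with a Monte-Carlo sketch. This is not a local hidden-variable model: the singlet statistics, P(equal outcomes) = sin^2(theta/2) at relative detector angle theta, are injected by construction. What it shows is that outcomes whose joint correlations E[AB] = -cos(theta) violate the classical CHSH bound |S| <= 2 nevertheless present each observer, taken alone, with a marginal that is a fair coin.

```python
import math
import random

rng = random.Random(7)

def singlet_pair(theta):
    """Sample +/-1 outcomes whose joint statistics match the quantum singlet:
    P(equal) = sin^2(theta/2), so E[AB] = -cos(theta), while each side alone
    is a fair coin carrying no signal."""
    a = 1 if rng.random() < 0.5 else -1
    b = a if rng.random() < math.sin(theta / 2) ** 2 else -a
    return a, b

def E(theta, trials=100000):
    """Monte-Carlo estimate of the correlator E[AB] at relative angle theta."""
    return sum(a * b for a, b in (singlet_pair(theta) for _ in range(trials))) / trials

# Standard CHSH settings: a = 0, a' = pi/2, b = pi/4, b' = 3*pi/4
a, a2, b, b2 = 0.0, math.pi / 2, math.pi / 4, 3 * math.pi / 4
S = E(a - b) - E(a - b2) + E(a2 - b) + E(a2 - b2)
print("CHSH |S| = %.2f (classical bound 2, quantum limit 2*sqrt(2))" % abs(S))

# Each marginal alone looks like a fair coin: no detectable deviation from randomness
ones = sum(singlet_pair(math.pi / 4)[0] == 1 for _ in range(100000))
print("fraction of +1 outcomes on one side: %.3f" % (ones / 100000))
```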

 

Chalmers then begins to explore the futility of invoking spooky quantum pseudo-particle states or subjective “psychons", as these don't in themselves demonstrate experiential properties:

 

Imagine (with Eccles) that "psychons" in the nonphysical mind push around physical processes in the brain, and that psychons are the seat of experience. We can tell a story about the causal relations between psychons and physical processes, and a story about the causal dynamics among psychons, without ever invoking the fact that psychons have phenomenal properties.

 

This brings us to the nub of Chalmers’ critique, with which I disagree on empirical grounds:

 

Any view that takes consciousness seriously will at least have to face up to a limited form of epiphenomenalism. The very fact that experience can be coherently subtracted from any causal account implies that experience is superfluous in the explanation of behavior, whether or not it has some subtle causal relevance.

 

I see this conclusion as the core of a dilemma that all forms of philosophical causal reasoning impose on conscious volition in particular. It is evident that core physical theories defining the laws of nature, from Newton’s laws of motion to cosmological TOEs, or theories of everything, are not explicitly about causality, but about the description of nature through symmetries, symmetry-breaking and equational relationships that successfully define characteristics of nature we can empirically observe and confirm, such as the doubling of the bending of light around the Sun due to the Sun’s gravitational field, confirming Einstein’s theory of general relativity.

To claim on logical grounds that the subjective “phenomenal” aspect can be subtracted from the causal is not a valid comment about the status of subjective experience, but about the particular way the philosophical discourse is treating causality. Science is a product of theoretical predictions and confirming empirical observations. The theory is not a cause of the observations but a natural description of the circumstances predisposing to them. It is thus empirical observation that is the standard validating natural science, and it is the same standard of empirical experience that defines the natural investigation of the subjective domain. In this regard, the standard is, and has to be, veridical affirmation by empirical experience, not a logically astute argument to the contrary in defiance of subjective evidence.

 

Chalmers then repeats his mischaracterisation as the common objection:

 

The most common objection to epiphenomenalism is simply that it is counterintuitive or even "repugnant." Finding a conclusion counterintuitive or repugnant is not sufficient reason to reject the conclusion, however, especially if it is the conclusion of a strong argument.

 

In my view this is an incorrect portrayal of the central existentially experienced objection, which is that our conscious existential experience is centrally and unambiguously that of being an intentional agent acting in the physical world to further our physical survival and social success. We do this through a coherent, integrated experience of responding to circumstances over which we have partial control, focusing our attention and volitional will on making decisions and carrying out the ensuing physical actions with purposive intent, consciously aware of our intentions and the strength of our wilful determination, in exactly the same way that we process and pay attention to our sensory experience.

 

We are thus aware, and aware that we are aware; aware that we intend, and aware that we are aware that we intend; and aware that we act wilfully, intentionally and often decisively, tenaciously resisting attempts by other agents and the vagaries of the world at large to impede our autonomy as conscious volitional living beings functioning as physically causal agents. To attribute the inconsistency between this view of organismic conscious existential survival in the natural world and the unverified claim of epiphenomenalism merely to counterintuitiveness on the one hand is to put it down to a failure of astuteness, and to repugnance on the other is to put it down to a failure of our wishful emotions to accept the stark limitations of our sense of autonomous survival against the odds. This amounts to a philosophical misrepresentation of the realities.

 

Human perception is described as veridical because it evokes an experience of the world around us that is more true to reality than the incoming sensory information. Our perception of our volition is likewise veridical, giving us a truthful expression of the way our conscious attentive will is securing our survival in real time. To concede that sensory perception is veridical and to deny this entirely for our perception of our volition is a contradiction – in Gilbert Ryle’s stark terminology, a category error. If epiphenomenalism were actually true in any qualitative, or even quantitative, respect, our veridical perception would inform us that we are mere passengers accompanying our actions, without any sense of our volition.

 

Chalmers then goes into the counter-objections in detail:

 

More detailed objections to epiphenomenalism fall into three classes: those concerning the relationship of experience to ordinary behavior, those concerning the relationship of experience to judgments about experience, and those concerning the overall picture of the world that it gives rise to.

 

The first is an attempt at finesse. Chalmers demurs on veridical volition because he attributes it to mistaken regularity, or an indirect nomic (lawful) connection, ultimately attempting to dispense with it as merely an intuition that cannot have the force of an astute philosophical argument:

 

We are much more directly aware of experience and of behavior than we are of an underlying brain state; upon exposure to systematic regularities between experience and behavior, it is natural that a strong causal connection should be inferred. Even if the connection were only an indirect nomic connection due to relations to the underlying brain state, we would still expect the inference to be made. So this intuition can be explained away. In any case, this sort of objection cannot be fatal to the view, as it is an intuition that does not extend directly into an argument. It is an instance of the merely counterintuitive.

 

The second, however, he concedes is both worrying and potentially fatal. My position is that it is manifestly fatal because, as Chalmers concedes, it is incompatible with our knowledge of experience, as I have already discussed:

 

The second class of objections is more worrying. It seems very strange that our experiences should be irrelevant to the explanation of why we talk about experiences, for instance, or perhaps even to our internal judgments about experiences; this seems much stranger than the mere irrelevance of my pain to the explanation of my hand's withdrawal. … Some claim that this sort of problem is not merely counterintuitive but fatal. For example, it might be claimed that this is incompatible with our knowledge of experience, or with our ability to refer to experiences. I believe that when these arguments are spelled out they do not ultimately gain their conclusion, but these questions are certainly challenging.

 

As noted, the basis of my objection is that Chalmers’ resort to astute causal argument, while dismissing veridical awareness of volition in action as mere intuition by comparison with robust philosophical argument, is fatal, because argument is a symbolic expression of a very narrow subset of subjective experience and cannot pretend to account for it as a whole. But the core objection is that this violates the principles of verification by empirical experience that are the foundation of the “scientific” exploration of the subjective.

 

Chalmers is ever astute and acknowledges that some people, including myself, will find his position to be a fatal flaw:

 

I do not describe my view as epiphenomenalism. The question of the causal relevance of experience remains open, and a more detailed theory of both causation and of experience will be required before the issue can be settled. But the view implies at least a weak form of epiphenomenalism.  Some will find that nevertheless the epiphenomenalist nature of this position is a fatal flaw. I have some sympathy with this position, which can be seen as an expression of the paradox of consciousness: when it comes to consciousness, it can seem that all the alternatives are bad. However, I think the problems with all other views are fatal in a much stronger way than the counterintuitiveness of this one.

 

Summarising his position he states his four key assumptions:

 

The argument for my view is an inference from roughly four premises:

1. Conscious experience exists.

2. Conscious experience is not logically supervenient on the physical.

3. If there are phenomena that are not logically supervenient on the physical facts, then materialism is false.

4. The physical domain is causally closed.

 

Chalmers finally states his naturalistic dualism succinctly:

 

Then there is my view, which accepts premises (1), (2), (3), and (4):  vii. Naturalistic dualism. Consciousness supervenes naturally on the physical, without supervening logically or "metaphysically."

 

My position is to deny (4) on the basis of the veridical nature of empirical experience, which is both inconsistent with causal closure of the physical and the foundational principle of the pursuit of knowledge in the subjective, just as replication by empirical observation is pivotal to objective science. This is so because verification between subjectively conscious agents depends on mutual veridical affirmation of their common status as volitional conscious agents, which is what all sane human beings not subverted by an implicit belief in materialism assert, consistent, as previously noted, with conscious observation of the universe being necessary and integral to the ability to establish and hence manifest its existence.

 

To subsume veridical experience of volition to refutation by philosophical argument, on the basis that phenomena can be subtracted from causality and hence that volition can be discounted as mere “intuition” rather than empirical experience, is as fallacious as mounting a philosophical argument that the doubling of the bending of light around the Sun by the Sun’s gravitational field does not mean we should accept relativity, because the general field equation is simply a numerical expression describing a functional relationship and not a causal statement – especially having conceded that: “there are two classes of facts that do not supervene logically on particular physical facts: facts about consciousness and facts about causation”.

 

Fig 101: One of Arthur Eddington's photographs of the 1919 solar eclipse experiment,
confirming relativity presented in his 1920 paper announcing its success
(Dyson, Eddington & Davidson 1920, Earman & Glymour 1980, Coles 2019).

 

And he has one very astute final observation:

 

There is also an eighth common view, which is generally underspecified: viii. Don't-have-a-clue materialism. "I don't have a clue about consciousness. It seems utterly mysterious to me. But it must be physical, as materialism must be true." Such a view is held widely, but rarely in print (although see Fodor 1992).

 

Ultimately we come back to his persistent and, in my view, incorrect contention that the phenomenal component can be subtracted from the causal, when the issue is that volition is both experientially phenomenal and physically efficacious, as we know experientially, so that the phenomenal cannot validly be subtracted from the causal:

 

The deepest reason to reject options (iv) and (vi) is that they ultimately suffer from the same problem as a more standard physics: the phenomenal component can be coherently subtracted from the causal component.

 

It should be noted that Chalmers does seriously acknowledge the potential relevance of panpsychism as a possible solution, as I have:

 

Personally, I am much more confident of naturalistic dualism than I am of panpsychism. The latter issue seems to be very much open. But I hope to have said enough to show that we ought to take the possibility of some sort of panpsychism seriously; there seem to be no knockdown arguments against the view, and there are various positive reasons why one might embrace it.

 

In “The Meta-Problem of Consciousness”, Chalmers (2018) discusses the meta-problem of explaining why we think consciousness poses a hard problem, or in other terms, the problem of explaining why we think consciousness is hard to explain. In this he addresses phenomenal reports: the things we say about consciousness (that is, about phenomenal consciousness). Problem reports are a fact of human behaviour, so the meta-problem of explaining them is, strictly speaking, one of the easy problems of consciousness. Chalmers contrasts illusionism, the view that consciousness is or involves a sort of introspective illusion, with realism, the view that conscious experiences are real, direct phenomena. Chalmers notes that because illusionism is held by a minority, it makes sense to understand the problem as the meta-problem and focus on solving it.

 

This invokes a research program involving (i) experimental philosophy and psychology, linguistics, and anthropology studying subjects’ judgments about consciousness, (ii) work in psychology and neuroscience on the mechanisms that underlie our self-models and bring about problem reports and other phenomenal reports, (iii) work in artificial intelligence and computational cognitive science on computational models of phenomenal reports, yielding computational systems that produce reports like ours, and (iv) philosophical assessment of potential mechanisms, including how well they match up with and explain philosophical judgments about consciousness.

 

Chalmers is principally targeting a complementary problem to the hard problem which can help elucidate these dichotomies, but it applies more generally in the sense that it concedes the role of subjective reports and poses the question of how these can be rationalised in philosophy and particularly in neuroscience, where subjective experience and subjective reports tend to take second place to hard physical data on brain states, in so far as these can be unambiguously elucidated in conscious subjects.

 

Chalmers uses this approach to discuss theories of consciousness such as IIT (Tononi & Koch 2015), which holds that integrated information is the basis of consciousness, noting that there is no obvious link between integration of information and these judgments, since, according to IIT, for every system with high integrated information there will be a computationally isomorphic simulated system with zero integrated information. He applies the same challenge to global workspace theories (Baars 1997), in which the basis of consciousness is a global workspace that makes information available to other systems in the brain. How does the global workspace help to explain our judgments about consciousness? Again, it is not obvious how the workspace explains problem reports involving a sense that consciousness is puzzling.
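The IIT objection can be made concrete in a toy sketch (the dynamics below are hypothetical, not from Chalmers' paper): any finite recurrent system's input-output behaviour, and hence its problem reports, can be reproduced by a purely feed-forward lookup table over input histories, a structure to which IIT assigns zero integrated information.

```python
from itertools import product

# A tiny "integrated" recurrent system: the next state depends on both
# state components and the input (hypothetical toy dynamics).
def recurrent_step(state, inp):
    a, b = state
    return (b ^ inp, a & inp)

def run_recurrent(inputs, state=(0, 0)):
    out = []
    for i in inputs:
        state = recurrent_step(state, i)
        out.append(state)
    return out

# Unroll every possible 4-step input history into a feed-forward lookup
# table: no recurrence, hence (on IIT's account) zero integrated
# information, yet identical behaviour.
table = {seq: run_recurrent(seq) for seq in product((0, 1), repeat=4)}

def run_feedforward(inputs):
    return table[tuple(inputs)]

for seq in product((0, 1), repeat=4):
    assert run_feedforward(seq) == run_recurrent(seq)
print("behaviourally identical, structurally feed-forward")
```

The table is built by unrolling the recurrent system itself, which is exactly the computational isomorphism at issue: the two systems are indistinguishable by their reports.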

 

Higher-order thought theories (Rosenthal, 2002) say that conscious states are those that are the objects of higher-order thoughts. But again it is not clear how mere higher-order thoughts explain why we report mental states as being conscious nor how higher-order thoughts explain why we report conscious states as puzzling.

 

He notes that it can apply to quantum theories (Hameroff and Penrose, 1996; Stapp, 1993) that say there is a strong tie between wave-function collapse and consciousness. Does wave-function collapse play a central role in explaining reports of consciousness? One might worry that the answer is no, since wave-function collapse only selects one of multiple branches of the wave function. If a subject says ‘I am conscious’ in the selected branch, it is arguable that the subject also says ‘I am conscious’ in many unselected branches. If so, it looks as if there may be an explanation of the reports which is prior to wave-function collapse.

 

The challenge also applies to panpsychist theories which hold that human consciousness is some sort of combination of micro-consciousnesses in fundamental entities. The combination problem for panpsychism is to explain how micro-consciousnesses can combine to yield our consciousness, now extended to explain how these combination states play a central role in bringing about reports of consciousness.

 

In considering introspective models which attribute primitive relations to qualities and contents, Chalmers notes that introspection is especially central to Graziano's (2013) AST model, in which “awareness is a model of attention”, and doubts that attention is the right choice for the complex relation being modelled, suggesting instead that it is more generally a model of representation.

 

Chalmers’ own view, with which I am in complete agreement, is best quoted for its sheer lucidity:

 

My own tentative view is that the most promising solution to the meta-problem lies in primitive relation attribution and the sense of acquaintance: our experiences seem to primitively acquaint us with qualities in the environment, and these experiences are themselves objects of acquaintance. I favour a realist theory of consciousness where consciousness does in fact involve acquaintance in this way. This line tends to suggest a combination of a first-order representational view of consciousness (consciousness involves immediate awareness of worldly properties) with a self-representational view of consciousness (consciousness involves immediate awareness of itself).  I do not think this sort of awareness is reducible to brain mechanisms, but one might expect some sort of corresponding structure at the level of brain mechanisms.

 

Uziel Awret (private communication) notes the need to distinguish the ground of subjectivity from the properties:

 

“The intransitive properties of consciousness are those common to all conscious states, systems and creatures, like some rudimentary 'there is something it is like' to be such systems, or to be in such states, and are usually referred to as 'phenomenal character' (some would add privacy and intentionality). The transitive properties of consciousness are those that distinguish between different conscious states, like blue and red or a square and a triangle, and are referred to as 'representational content’. Conscious mental states have both structural and non-structural properties, including aspects of the representational content that are more structural, lending themselves to scientific investigation, and non-structural aspects of phenomenal character that seem less accessible to scientific investigation.

 

The question – What is it about consciousness that is made necessary by the way the brain is? – should be broken in two:

1)     What is it about the structural properties of consciousness that are necessitated by the way the brain is?

2)     What is it about phenomenal character that is necessitated by the way the brain is?

 

In symbiotic existential cosmology I am focusing only on phenomenal properties as intractable to the hard problem; the Darwinian panpsychism likewise refers only to primitive subjectivity in general, with an evolutionary model in which consciousness as we know it is an emergent property induced by the eucaryote endosymbiosis, when the membrane became freed for informational excitability and social signalling via neurotransmitters. The transitive structural properties have to be seen in the context of how the brain operates neurodynamically.

 

Consistent with his view in “The Conscious Mind”, Chalmers and McQueen (2021) have philosophically explored a variety of scenarios in which consciousness could collapse the wave function in realistic circumstances, dealing specifically with the paradoxes arising from superposition of the observer as a quantum system. They explore various options including super-selection rules forcing the elimination of some components of the superimposed state and super-resistance models in which a threshold causes collapse. Chalmers and McQueen adopt IIT as a basis for their analysis, but this introduces abstractions, in which consciousness is associated with a discrete Markov formulation consistent with observed features of conscious existence but not possessing subjectivity as such. This leads to a description where we are really analysing features of consciousness in objective brain dynamics rather than subjective experience to establish causality.

 

Summarising the difference between Chalmers & McQueen’s approach and Symbiotic Existential Cosmology, we compare four philosophical objections they cited and addressed:

 

(a) What is a superposed state of consciousness?  Chalmers & McQueen are stating a functional IIT model of "consciousness", so they state that such a situation is possible, in conflict with our veridical experience. The symbiotic cosmology concurs with the veridical conclusions of subjective consciousness, and with Wigner's position that this is “absurd”, although it doesn’t rule out bodies and brains being quantum objects.

(b) How do quantum effects make a difference to macroscopic brain processes? Chalmers & McQueen do not assume quantum sensitivity in the "warm" brain, stating that "we have treated brain states as superpositions of numerous decoherent eigenstates, which themselves may involve relatively classical processing in neurons". Symbiotic cosmology accepts the need for brain states to have at least some quantum sensitivity and presents evidence for this. Critically it does not require the kind of isolation that current quantum computing methods do, by either isolating themselves from any significant decoherence, or by adiabatic quantum computing at very low temperatures following a series of zero energy configurations. All it requires in symbiotic cosmology are critically poised cellular states that become sensitive to individual quantum fluctuations in critically poised ion channels, initially in individual eucaryote single celled organisms. Later this process can become coupled in animal brains, through critically-poised whole brain states as coherent “excitons” distinguishable from one another through phase coherence discrimination being sensitive to threshold transitions in single neurons and their ion channels.

(c) What about macroscopic superpositions? Chalmers & McQueen hedge their bets, firstly suggesting machines might also be conscious: "For a start, if a correct theory of consciousness associates these devices with some amount of consciousness (as may be the case for IIT), then the devices will collapse wave functions much as humans do." Then following it with a catch-all: "Even if these devices are not conscious, it is likely that typical measuring devices will be entangled with humans and other conscious systems, so that they will typically be in a collapsed state too."
 

(d) What about the first appearance of consciousness in the universe? This is a problem for their particular models. They seek to solve this with an approximate super-resistance model: "For eons, the universe can persist in a wholly unconscious superposed state without any collapses. At some point, a physical correlate of consciousness may emerge in some branch of the wave function, yielding a superposition of consciousness and unconsciousness (or their physical correlates) with low amplitude for consciousness”.

 

The symbiotic cosmological model is panpsychic, so the subjective element is included from the cosmic origin. It would then be possible for the universe to be involved in collapse of its own wave functions and develop a course of history without human observers, which is a key strength of the theory. But in the case of experimental quantum measurements of the type we are dealing with in the cat paradox, there is a specific interaction between human organismic consciousness and the experiment, so collapse could be evoked by the human observer's consciousness. This may apply more to (a) situations in how the brain performs its own phase-front coherence processing between wave voltages and discrete action potentials and (b) unstable tipping points in prisoners’ dilemma paradoxes in open environment situations in which there are real or potential threats to survival, as in fig 91.

 

Penrose (2014) suggested a similar process involving gravitationally induced collapse, in which a quantum state remains in superposition until the difference of space-time curvature attains a significant level. However, all quantum entanglement experiments on Earth take place successfully in an environment where gravitation is present.

 

Chalmers also notes that their general view might provide a causal role for consciousness: “The same might apply to the connection between consciousness and non-conscious processes in the brain: when superposed non-conscious processes threaten to affect consciousness, there will be some sort of selection. In this way, there is a causal role for consciousness in the physical world” (Chalmers 2003, pp. 262-3). This is very close to Stapp’s proposal above and the approach adopted in this cosmology, and to neuroscience notions of peripheral rather than coherent conscious processing in the brain, but it is being applied to collapse of the brain as a quantum superimposed state, not to the subjective mind.

 

While this is provocative of an attempt to confirm a causal basis for volitional will, the difficulty here is that quantum observation depends on the subjective experience of the observer, not just the integrated brain states we might accept as the objective correlates of subjectivity. The explanation of the theory is thus led into dealing with potential paradoxes of physical collapse that are tied to objective brain states rather than subjective experience, which is the veridical reality generating the unique history of the universe rather than superimposed multiverses. In the author’s view Albert’s critique is pivotal: human society remains impeded from exploring the actual nature of unconstrained conscious states, and only with the full exploration of these, and the collection of veridical accounts of visionary states, can we begin to assess the nature and cosmological status of subjectivity.

 

To coin an analogy from the mathematical world, integral transforms such as the Fourier transform convert localisable time-amplitude information into frequency information, creating a mapping from all states into a complementary configuration space. If the subjective basis of experience is a transform of the entire physical universe under the encapsulated constraints of the organismic brain, it may have a form of predictive power without possessing any localisable or separable features of the objective universe. Effectively it would be sampling the entire scope of quantum entanglement throughout the universe and throughout space-time, and through the consciousness of other sentient organisms, echoing Huxley’s notion of organismic consciousness being a filter upon the “mind at large”. Brain processing already appears to use transforms as integral to its wave processing, so the analogy is highly pertinent.
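The non-local character of the transform analogy can be shown in a minimal numerical sketch (illustrative only, assuming NumPy): each Fourier coefficient is a global property of the whole signal, so perturbing a single time sample changes every coefficient, while the dominant frequencies remain recoverable.

```python
import numpy as np

# A signal with two components, 5 Hz and 12 Hz, sampled over one second.
t = np.linspace(0.0, 1.0, 256, endpoint=False)
signal = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 12 * t)

# The Fourier transform maps time-amplitude data into frequency space.
spectrum = np.fft.rfft(signal)
freqs = np.fft.rfftfreq(t.size, d=t[1] - t[0])

# The two component frequencies dominate the spectrum.
peaks = freqs[np.argsort(np.abs(spectrum))[-2:]]
print(sorted(peaks.tolist()))  # -> [5.0, 12.0]

# Yet no coefficient is localisable: perturbing ONE time sample
# changes EVERY frequency coefficient.
perturbed = signal.copy()
perturbed[100] += 1.0
print(bool(np.all(np.abs(np.fft.rfft(perturbed) - spectrum) > 0)))  # -> True
```

The delocalisation is exact: a unit impulse at one sample adds a complex exponential of magnitude one to every coefficient of the transform.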

 

In later interviews, Chalmers (2022) has a much more positive view of the potentiality for subjective consciousness to causally interact upon the physical brain, through collapse of the wave function:

 

The next radical conclusion is that consciousness is separate from the physical world but it also has an effect on the physical world – a causal impact. We need to be cautious here, but science hasn't taught us that physics is a closed system, that the brain is a closed system... There's one route there that I think might be worth pursuing. A hole that's left open for where consciousness can have some impact. In quantum physics, most of the time, the wave function follows the Schrödinger wave equation, but every once in a while a strange thing happens – it collapses – why does the wave function collapse? Quantum mechanics doesn't tell us – it just says it collapses when a measurement occurs. What's a measurement? No one knows. On the face of it what looks like a measurement is a measurement by a conscious observer. If you wanted to have consciousness to affect physics, it looks like physics could not be designed more perfectly.

 

Moreover, in regard to the collapse of the wave function, the idiosyncrasy of single quantum instances displays unmitigated liberty, except in the context of repeated measurements of the same kind, in which the probability distribution is normalised by its asymptotic approach to the real power φ.φ' of the wave function, in which the empirical wave function is an integral representation of entanglement at large. In the biological context no such repeated measurements occur, so there is a close correspondence between quantum idiosyncrasy, the unpredictability of brain states at unstable tipping points, and the uncertain and unpredictable nature of open environment survival crises. As non-IID (not independent and identically distributed) quantum measurement processes do not necessarily converge to the classical, the need to prove the case for subjective interaction is no stronger than the need for materialism to prove its case for causal closure, which remains empirically non-evident.
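The contrast between single-instance idiosyncrasy and ensemble normalisation can be sketched in a toy two-outcome measurement (the amplitudes are hypothetical; the real power φ.φ' is rendered here as the squared amplitude):

```python
import random

# Hypothetical amplitudes for a two-outcome measurement: 0.6^2 + 0.8^2 = 1.
amplitudes = {"up": 0.6, "down": 0.8}
probs = {k: a * a for k, a in amplitudes.items()}  # Born probabilities

random.seed(1)
counts = {"up": 0, "down": 0}
n = 100_000
for _ in range(n):
    # Each single outcome is individually unpredictable ("idiosyncratic")...
    outcome = "up" if random.random() < probs["up"] else "down"
    counts[outcome] += 1

# ...but the relative frequencies over many IID repetitions are normalised
# by their asymptotic approach to 0.36 and 0.64.
print(counts["up"] / n, counts["down"] / n)
```

Only the ensemble of identically prepared repetitions recovers the distribution; no single trial, as in a one-off biological tipping point, is constrained by it.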

 

Autopoiesis and Neurophenomenology

 

Autopoiesis and Robert Rosen's relational biology of (M,R) systems are both models that attempt to explain the behaviour of self-organising far-from-equilibrium thermodynamic systems in the sense of Prigogine (1984). The term autopoiesis refers to a system capable of producing and maintaining itself by creating its own parts (Maturana & Varela 1991), and was coined to define the self-maintaining chemistry of living cells.

 

An autopoietic machine is a machine organized (defined as a unity) as a network of processes of production (transformation and destruction) of components which: (i) through their interactions and transformations continuously regenerate and realize the network of processes (relations) that produced them; and (ii) constitute it (the machine) as a concrete unity in space in which they (the components) exist by specifying the topological domain of its realization as such a network.

 

The aim is to give a formal meaning to biological agency as a self-organising system, but applied to the cell such self-organisation is largely homeostatic, inconsistent with the adventitious nature of mutational evolution and applying less to advanced organisms, where the hard problem of the explanatory gap and the creative demands of conscious intelligence require more than homeostatic prolongation of cellular life. It also places an emphasis on the cell boundary (Luisi 2003) as the critical partition defining agency, but this is not consistent with the fact (see fig 59) that archaea and bacteria have distinct cell wall components and likely separated before cell membranes became established in the primal progenote. Rosen (1993) was particularly concerned with the cyclic causality that he saw distinguishing self-sustaining living systems from machines, including Turing machines. Rosen also claimed there were formal causes in living systems lying beyond physical laws and asserted that cellular organisation must be independent of the material particles which seemingly constitute a living system. This is incorrect, as memory has now been found to have a basis in permanent genetic changes (Jovasevic et al. 2024). He also incorrectly insisted that additional higher-level causes beyond genetic developmental determination must exist to explain protein folding, which now appears to be directly due to a form of spatial quantum computation common to all molecular systems.

 

To address this, Francisco Varela’s (1996) neurophenomenology is an attempt to marry cognitive science with a disciplined approach to experience i.e. phenomenology (Husserl 1913) and thus consists of naturalising phenomenology, as integrated into an explanatory framework admitted by the natural sciences.

 

Pinholt (2012) notes that Varela’s article is written with David Chalmers’ ‘hard problem’ in mind and functions as an alternative to Chalmers’ suggestions of what is needed to solve it; it is therefore also a step towards a possible solution to the ‘hard problem’. Varela agrees with Chalmers that experience is not a candidate for (reductive) eliminativism, but disagrees that in order to close the gap between cognitive science and experience we need to find theoretical principles that will do the job. Varela believes that there could be a “neurophenomenological” circulation, i.e. a co-determination of both accounts: a joint investigation of phenomena involving subjective experience, where phenomenological analysis is responsible for mapping out the structure of experience from a first-personal perspective, while cognitive science is responsible for finding the neurological correlates of that experience. Of course this approach presupposes that there are such things as neurological correlates.

 

A consequence of this approach is that both have equal status and each constrains the other: phenomenological analysis provides a model or explanation of the structure of experience, while the neuroscientist, guided by the phenomenological data, attempts to map out the neural correlates. According to Lutz and Thompson (2003), neurophenomenology is guided by the proposal that the best current candidate for the neurophysiological basis of consciousness is a "flexible repertoire of dynamic large-scale neural assemblies that transiently link multiple brain regions and areas."

 

Tim Bayne (2004) has critiqued neurophenomenology, remaining skeptical towards the claims put forward by neurophenomenologists, especially the claim that phenomenology provides a more reliable methodology than other first-person methodologies, including introspection. He concludes that neurophenomenologists suggest that neurophenomenology involves more than just neuroscience and phenomenology, but doubts the reality of this extension: “Exactly what the something more is, I think, an open question. Varela speaks of changing the “entire framework within which the [hard problem] is discussed”. The idea seems to be that neurophenomenology solves the hard problem by undermining the assumptions that generate it.”

 

The problem neurophenomenology shares with autopoiesis in my view is that phenomenology is treating subjective experience as an objective observable "out there". There is no clear interaction between subjective and objective realities that allows for conscious creativity, insight, innovation and intentional behaviour that characterises both the explanatory gap and the existential condition and our legal responsibility for our conscious actions.

 

Minimalist Physicalism and Scale Free Consciousness

 

Fields, Glazebrook & Levin (2021) take a very different approach from naturalistic dualism, which they call Minimalist Physicalism (MP). It bypasses classical physicalism and seeks to incorporate consciousness as a type of observer-world relationship, based only on principles of quantum information, that is claimed on empirical grounds to be scale-free. It then extends to basal systems, running all the way down the evolutionary tree – not just to single-celled eucaryotes, as Symbiotic Existential Cosmology does for consciousness proper, but to the first prokaryotes and in principle to abstract quantum systems – thus equating with the primitive subjectivity of SEC.

 

Here, we provide a straightforward construction of fundamental, scale-free features of consciousness and cognition within a generic description of system-environment information exchange as bipartite physical interaction. We term this description “minimal physicalism” (MP) as it makes no assumptions about classical computational architecture, in particular, no assumptions about network architecture, and no physical assumptions beyond those of quantum information theory.

 

A well-established literature extends the concepts of consciousness – the capability of having phenomenal experiences, however basic or minimally structured – and cognition to phylogenetically basal systems, including free-living or facultatively communal unicells, whether pro- or eukaryotic, plants, and aneural or lower (than mammals, birds, or cephalopods) complexity neural metazoa, particularly flatworms.

 

Like the extension of these concepts from humans to nonhuman mammals and then to big-brained non-mammals, this extension to more basal organisms was initially motivated by observations of communication, learning, and behavioral flexibility, and by functional similarities between the mechanisms supporting information processing and learning in basal systems and in more complex systems such as mammals. Both molecular and bioelectric mechanisms of cellular information processing, memory, communication, and error correction are, in particular, evolutionarily ancient and conserved across phylogeny.

 

Like the Solms-Friston model of the conscious brain, this utilises Markov blankets, which form a statistical thermodynamic interface across the cell membrane:

 

As the locus of molecular, thermodynamic, and bioelectric exchange with the environment, the cell membrane implements a Markov Blanket (MB) that renders its interior conditionally independent of its exterior; this allows the cell to be described as a Bayesian active inference system. The utility of this Bayesian approach has been demonstrated in simulation models of cell-cell communication driving morphogenesis.

 

The information that transits the cell membrane, and is thereby encoded on the MB implemented by the membrane, is actionable or meaningful to the cell: it makes a difference to what the cell does. When the cell’s interaction with its environment is represented as measurement, what renders the information meaningful becomes clear: meaning requires measurement with respect to some reference frame. Viewed abstractly, a reference frame is a value, or more generally a vector, from which deviation is detectable.
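The screening-off property that makes the membrane a Markov blanket can be verified in a toy chain model, exterior state, blanket (membrane) state, interior state (all probability tables below are hypothetical): conditioning on the blanket renders the interior independent of the exterior.

```python
import itertools

# Hypothetical binary chain: exterior E -> blanket B -> interior I.
p_E = {0: 0.7, 1: 0.3}                                      # exterior state
p_B_given_E = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.2, 1: 0.8}}    # membrane encoding
p_I_given_B = {0: {0: 0.8, 1: 0.2}, 1: {0: 0.3, 1: 0.7}}    # interior response

def joint(e, b, i):
    # Joint probability under the chain factorisation.
    return p_E[e] * p_B_given_E[e][b] * p_I_given_B[b][i]

def p_I_given_BE(i, b, e):
    # Interior distribution conditioned on BOTH blanket and exterior.
    denom = sum(joint(e, b, j) for j in (0, 1))
    return joint(e, b, i) / denom

# Conditioning on the blanket screens off the exterior: the interior
# distribution is the same whatever the exterior state.
for b, i in itertools.product((0, 1), repeat=2):
    assert abs(p_I_given_BE(i, b, e=0) - p_I_given_BE(i, b, e=1)) < 1e-12
print("interior independent of exterior, given the blanket")
```

This is only the conditional-independence skeleton of the MB construction; the paper's active-inference dynamics build on it rather than being reproduced here.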

 

This notion of consciousness is an interactive "consciousness of", rather than subjective consciousness as a cosmological complement to the physical universe, and in that sense it claims to be able to pose the hard problem, for example in single-celled prokaryotic systems where the feedback processes can all be identified. I have some reservations about whether this actually tests the hard problem in its original sense. Prokaryote membranes are dominated by respiratory or photosynthetic free energy production, unlike eucaryote membranes, which are available for perception and social signalling.

 

These cross-scale similarities motivate a hypothesis that consciousness and cognition are scale-free phenomena that characterize all living systems. If consciousness and cognition are scale-free phenomena, we can expect them to be supported by common, scalable mechanisms that can be investigated in whatever systems permit the most straightforward theoretical and experimental approaches. Phylogenetically basal organisms, in vitro preparations, and synthetic constructs provide obvious advantages of manipulability and environmental control. Studies of basal systems are, moreover, especially effective in overcoming the intuitions that give rise to the hard problem, as they allow the mechanisms via which single cells and relatively simple multicellular organisms navigate their environments — mechanisms that they share with most of our cells, and with us as organisms — to be investigated in detail.

 

I nevertheless think the approach is potentially powerful and deeply informative:

 

Our interest here has not been ontological, but rather empirical: to derive as much as possible from the simple assumption that consciousness involves information exchange subject to the constraints of quantum information theory. We have shown that the MP framework that follows from this assumption allows many of the key features of consciousness to be understood as simple, scale-independent consequences of thermodynamics.

 

It has led to a description of neurons as hierarchies of quantum reference frames (Fields, Glazebrook & Levin 2022).

 

It has also led to intriguing conclusions on the metabolic limits on classical information processing by biological cells, implying quantum processing in the cell interior (Fields & Levin 2021):

 

Biological information processing is generally assumed to be classical. Measured cellular energy budgets of both prokaryotes and eukaryotes, however, fall orders of magnitude short of the power required to maintain classical states of protein conformation and localization at the Å, fs scales predicted by single-molecule decoherence calculations and assumed by classical molecular dynamics models. We suggest that decoherence is limited to the immediate surroundings of the cell membrane and of inter-compartmental boundaries within the cell, and that bulk cellular biochemistry implements quantum information processing.

 

Defence of the real world from the Case Against Reality

 

I have said that while consciousness is primary, the universe is necessary. Thus we know the universe only through our conscious experiences of it, but its stability and structure are necessary for the existence of conscious life.

 

Don Hoffman, in "The Case Against Reality" (2020), makes the evolutionary case that perception is not veridical in the sense of optimally truthful, but evolved by natural selection:

 

The classic argument is that those of our ancestors who saw more accurately had a competitive advantage over those who saw less accurately and thus were more likely to pass on their genes that coded for those more accurate perceptions, so after thousands of generations we can be quite confident that we’re the offspring of those who saw accurately, and so we see accurately. That sounds very plausible. But I think it is utterly false. It misunderstands the fundamental fact about evolution, which is that it’s about fitness functions—mathematical functions that describe how well a given strategy achieves the goals of survival and reproduction.

 

We’ve been shaped to have perceptions that keep us alive, so we have to take them seriously. If I see something that I think of as a snake, I don’t pick it up. If I see a train, I don’t step in front of it. I’ve evolved these symbols to keep me alive, so I have to take them seriously. But it’s a logical flaw to think that if we have to take it seriously, we also have to take it literally. ...Snakes and trains, like the particles of physics, have no objective, observer-independent features. The snake I see is a description created by my sensory system to inform me of the fitness consequences of my actions. Evolution shapes acceptable solutions, not optimal ones. A snake is an acceptable solution to the problem of telling me how to act in a situation. My snakes and trains are my mental representations; your snakes and trains are your mental representations.

 

Yes, cats' eyes are designed to hunt, with low colour specificity and reflecting retinas for hunting at night behind those almond-shaped pupils, and insect vision may be even more prosaic. But although human perception has evolved by natural selection, it has been evolving towards the most generalised adaptable attributes, because the human niche is strategically omnivorous of reality. Human perception has thus been consciously naturally selected to be veridical. Visual reality out there is a chaotic jumble of photons that have no colour, only wavelength and particulate energy. Human perception has evolved to give us the most socially and environmentally discerning visual theatre of 3-D, size-conserved, seamlessly integrated experience. Yes, consciousness is also a type of internal model of reality constructed by the brain through evolution, but it is a veridical masterpiece, and not just a model but an outstanding manifestation of the ground of conscious being in subjective cosmology. There is no better way of looking at so-called "physical reality" that we can possibly imagine!

 

Hoffman's perspective on existence is very confluent with this cosmology. He supports the primacy of the quantum description over the false classical description of "mainstream" neuroscience and advocates what he calls conscious realism. Here follows an excerpt from an Atlantic interview (Gefter 2016) via Quanta magazine:

 

My intuition was, there are conscious experiences. I have pains, tastes, smells, all my sensory experiences, moods, emotions and so forth. So I’m just going to say: One part of this consciousness structure is a set of all possible experiences. When I’m having an experience, based on that experience I may want to change what I’m doing. So I need to have a collection of possible actions I can take and a decision strategy that, given my experiences, allows me to change how I’m acting. That’s the basic idea of the whole thing. I have a space X of experiences, a space G of actions, and an algorithm D that lets me choose a new action given my experiences. Then I posited a W for a world, which is also a probability space. Somehow the world affects my perceptions, so there’s a perception map P from the world to my experiences, and when I act, I change the world, so there’s a map A from the space of actions to the world. That’s the entire structure. Six elements. The claim is: This is the structure of consciousness. I put that out there so people have something to shoot at.  … I call it conscious realism: Objective reality is just conscious agents, just points of view. Interestingly, I can take two conscious agents and have them interact, and the mathematical structure of that interaction also satisfies the definition of a conscious agent.
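Hoffman's six-element structure is concrete enough to transcribe directly. The following toy finite rendering is for illustration only and is not Hoffman's own formalism: the particular states in the spaces X, G, W and the probabilities in the kernels P, D, A are invented:

```python
import random

# Toy finite rendering of Hoffman's conscious-agent sextet (X, G, D, W, P, A).
# All particular states and probabilities here are invented for illustration.
X = ["pain", "pleasure"]          # space of experiences
G = ["withdraw", "approach"]      # space of actions
W = ["hot", "mild"]               # "world", also a probability space

P = {"hot": {"pain": 0.9, "pleasure": 0.1},      # perception map P: W -> X
     "mild": {"pain": 0.2, "pleasure": 0.8}}
D = {"pain": "withdraw", "pleasure": "approach"}  # decision map D: X -> G
A = {"withdraw": {"hot": 0.1, "mild": 0.9},       # action map A: G -> W
     "approach": {"hot": 0.6, "mild": 0.4}}

def sample(dist):
    """Draw one outcome from a discrete probability distribution."""
    r, acc = random.random(), 0.0
    for outcome, p in dist.items():
        acc += p
        if r < acc:
            return outcome
    return outcome

random.seed(0)
w = "hot"
for _ in range(5):                 # one perceive-decide-act cycle per step
    x = sample(P[w])               # the world affects my experiences
    g = D[x]                       # given my experiences, I choose an action
    w = sample(A[g])               # when I act, I change the world
    print(x, g, w)
```

Iterating the loop is exactly the perceive-decide-act dynamics of a single agent; composing two such agents, as Hoffman notes, again yields an object of the same form.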

 

The idea that what we’re doing is measuring publicly accessible objects, the idea that objectivity results from the fact that you and I can measure the same object in the exact same situation and get the same results — it’s very clear from quantum mechanics that that idea has to go. Physics tells us that there are no public physical objects. So what’s going on? Here’s how I think about it. I can talk to you about my headache and believe that I am communicating effectively with you, because you’ve had your own headaches. The same thing is true as apples and the moon and the sun and the universe. Just like you have your own headache, you have your own moon. But I assume it’s relevantly similar to mine. That’s an assumption that could be false, but that’s the source of my communication, and that’s the best we can do in terms of public physical objects and objective science.

 

Not only are they [neuroscientists] ignoring the progress in fundamental physics, they are often explicit about it. They’ll say openly that quantum physics is not relevant to the aspects of brain function that are causally involved in consciousness. They are certain that it’s got to be classical properties of neural activity, which exist independent of any observers—spiking rates, connection strengths at synapses, perhaps dynamical properties as well. These are all very classical notions under Newtonian physics, where time is absolute and objects exist absolutely. And then [neuroscientists] are mystified as to why they don’t make progress. They don’t avail themselves of the incredible insights and breakthroughs that physics has made. Those insights are out there for us to use, and yet my field says, “We’ll stick with Newton, thank you. We’ll stay 300 years behind in our physics.”

 

The neuroscientists are saying, “We don’t need to invoke those kind of quantum processes, we don’t need quantum wave functions collapsing inside neurons, we can just use classical physics to describe processes in the brain.” I’m emphasizing the larger lesson of quantum mechanics: Neurons, brains, space … these are just symbols we use, they’re not real. It’s not that there’s a classical brain that does some quantum magic. It’s that there’s no brain! Quantum mechanics says that classical objects—including brains—don’t exist. So this is a far more radical claim about the nature of reality and does not involve the brain pulling off some tricky quantum computation.

 

The formal theory of conscious agents I’ve been developing is computationally universal—in that sense, it’s a machine theory. And it’s because the theory is computationally universal that I can get all of cognitive science and neural networks back out of it. Nevertheless, for now I don’t think we are machines—in part because I distinguish between the mathematical representation and the thing being represented. As a conscious realist, I am postulating conscious experiences as ontological primitives, the most basic ingredients of the world. I’m claiming that experiences are the real coin of the realm. The experiences of everyday life—my real feeling of a headache, my real taste of chocolate—that really is the ultimate nature of reality.

 

But there is another lesson lurking here! The case is not just against veridical perception itself, but against the notion of a "real world" that is independent of our perceptions of it, rather than an elusive quantum reality in which the universe is manifest through our evolved consciousness of it (Mark et al. 2010, Hoffman et al. 2015, Fields et al. 2017, Prakash et al. 2020). The interface theory of perception (ITP) is a filter theory, like Huxley's "Doors of Perception", of how the brain constructs our internal model of reality.

 

I agree with the central point that our perception is a conscious construction and we need to understand it as such, because subjective consciousness IS primary, as Don says. But if the real world doesn't in any sense stably exist, then genes aren't real, and neither is natural selection. The whole evolutionary ball game over billions of years depends on the stability of the real-world quantum universe over these same time scales, or we wouldn't even have a fitness function to select towards. When we are dreaming, things do change like that, so we can have some understanding of the implications, and of how impossible everything would be without the real world being real.

 

In interview, Hoffman roundly dismisses the purely materialist idea that the hard problem can be solved by physicalist versions of the easy problems and, having gained tenure as an expert in perception, sets out to formulate a universal theory starting from subjective consciousness, where the field is less crowded with similar unfruitful ideas:

 

What will happen if we start from the other direction, as a purely scientific and rigorous approach? I'm not talking about mysticism or anything like that. I'm talking about: can we get a mathematically precise model of consciousness on its own terms? So then we have to put down mathematical structures, not because they're right but so that we're precise, so that we can then find out why we're precisely wrong. Now you start to make predictions; you get dynamics of consciousness. The test will be: can you derive quantum physics from it? Can you get quantum field theory out of it? Because at least if you can, you have a mathematically precise solution of the mind-body problem starting in the other direction.

 

But is it really possible to deduce quantum field theory from subjective consciousness, by taking the same objective mathematical modelling that has proven to work in objective systems outside space-time?

 

Hoffman, Prakash & Prentner (2023) address this question, with a searing critique of theories of consciousness that incorporate assumed and often occluded physical assumptions in physicalist and phenomenalistic accounts alike, targeting in particular, the notion of space-time as doomed:

 

Most scientific theories of consciousness posit physical systems, or functional properties of physical systems. Some propose that these give rise to consciousness; for instance, global workspace, integrated information, and orchestrated objective reduction. Others propose that these give rise to the illusion of consciousness; for instance, illusionism and attention schema. All of the above falsely assume that spacetime physics is fundamental.

 

But they base their alternative view of conscious agency on two assumptions:

 

We start with two posits about consciousness: (1) there are conscious experiences; and (2) there are probabilistic relations among conscious experiences. These posits lead us to the notion of a conscious agent.

 

While (1) is essential, (2) is a pre-emptive regression into Markov chains, sacrificing subjective experience to the same fate as probabilistic degeneracies of the Born interpretation and Friston’s free-energy notion of homeostatic agency:

 

We make this precise in a mathematical theory of conscious agents, whose dynamics are described by Markov chains. We show how (1) agents combine into more complex agents, (2) agents fuse into simpler agents, and (3) qualia fuse to create new qualia.

 

This leads to a sterile quasi-probabilistic mathematical theory that in no way has the capacity to represent, let alone manifest, subjective experience.

 

Morris (2023) notes, however, that Hoffman's ideas of the subjective are, to say the least, expressionist, although they do have a ring of experiential truth, and echo the ideas of Jung's archetypes and Joseph Campbell's masks of the divine:

 

The alternative theory Hoffman proposes is that conscious entities are fundamental entities that exist beyond spacetime. These entities are us. And we are also avatars of a single conscious entity that Hoffman calls the “conscious aleph infinity agent.” We interact with each other via an interface whose format is spacetime. For Hoffman, what’s really going on outside of conscious awareness is so complex, involving non-spacetime dimensions numbering in the trillions or quadrillions. Our simple human minds created an ultra-compressed version of reality stripped of details that would break our brains—if we actually thought with our brains, which Hoffman sees no convincing evidence for. Hoffman’s math leads him to conclude that we are avatars of a superconscious or arch-conscious agent, that puts us avatars through the paces of an infinite number of experiences, no matter how joyous or horrific, so that the arch-conscious agent can experience everything. Hoffman also warns against overidentifying with our self, because the self is an avatar. “You are not any particular experience. You are the potential in which those experiences arise and disappear. That's what you really are in your essence. You transcend any particular experience because you are that potential.”

 

Consciousness and the Quantum: Putting it all Back Together

 

All such purely materialistic descriptions of consciousness fail the manifestation test. Subjective consciousness is primary and incontrovertibly manifest, while physical reality is inferred through it. Thus for an objective description to claim to sweep aside the manifest nature of subjective reality is an artifice. It illustrates why attempting to form models of consciousness without addressing the root cosmological status of consciousness itself can frustrate the discovery process. By attempting to form a model which lays claim to the properties of subjective consciousness while denying its existential status, a root fallacy is created. Yes, it then does seem plausible to opt for a form of brain-mind identity over any seemingly implausible notion of externalism (cosmic consciousness interacting with the brain).

 

That is why I immediately, in June last year, came to the necessity of a cosmology in which subjective and objective reality are complements within a single whole, exemplified by nature itself as a climax process, and to elaborate the description in full empirical detail from there. We all start from conscious experience as a universal reality and infer the physical universe and its physical descriptions from our consensus consciousness of it. Once we accept subjective consciousness as cosmological in 'nature', the very existence of the subjective mind IS cosmological in nature and assumes its natural place without physical contrivance.

 

We know that empiricism is established both by verifiable empirical observation and by affirmative empirical experience; the semantic notion of empiricism arose this way because it is naturally evoked in the human mind's world view. Hence the proposition that subjective conscious volition has efficacy over the physical universe is a natural, immediately verifiable statement of our existential condition, manifest as I consciously write this passage.

 

The Central Thesis makes clear that continuous brain waves, like radio waves and lasers, constitute entangled quantum states. It explains the interaction of subjective consciousness with the physical brain states moving in and out of wave coherence in terms of a neurodynamic phase transition at the edge of chaos, coupled to a quantum phase transition between entangled states and projectively reduced quantum states, minimising destructive quantum measurement to address the uncertain existential crises the organism faces. This leads to a unique form of quantum entangled "anticipation", which complements Friston's model of the brain as a prediction machine minimising top-down surprise coming from the sensory bottom-up, and Graziano's attention schemas, by assigning a role to subjective consciousness in anticipating uncomputable environmental uncertainty through quantum uncertainty itself. In this complementarity, the deduced aspects arising from top-down computation and previous experience in memory form the boundary conditions for a critical phase transition, in which wave function collapse is conceived to enable a form of conscious quantum transactional anticipation of future states through space-time entanglement.

 

David Peat (2013) has a good description of these non-local potentialities in relation to living systems at the edge of chaos and non-local forms of entanglement:

 

[L]ocality has served physics well, which implies that our hypothetical underlying non-local effects must be extremely subtle. So where should we look for them? Supposing that true non-locality is present quite generally in the universe: how would it manifest itself? I suggest that the most promising place to look is in these extraordinarily sensitive non-linear systems that are termed "chaotic". Systems which suffer constant iteration and in which every tiny region is contingent upon all others are known to behave in extremely complicated ways. Their dynamics are characterized by fractal behavior--in which endless levels of detail are found at finer and finer scales. But "chaos" may be a poor term to describe such systems for it is not so much that they are "random", "anarchic", or "orderless" as that they have an extremely complex and subtle order.

 

These systems are so extraordinarily sensitive that the smallest perturbation has an uncontrollable and unpredictable effect upon their dynamics. These are the ideal systems in which to study non-local effects for their behavior could well be controlled by the global correlation of vanishingly small effects. Attempting to control such systems locally proves useless: on the other hand global, or non-local description and control may be entirely appropriate. The global co-ordination of boundary conditions, for example, could act to guide, in a very general way, the overall form of the system.

 

Information which is distributed globally as boundary conditions or very delicate non-local perturbations would be hidden within the apparent chaos of a sensitive system. Attention to individual regions of space would not be sufficient to display this non-local influence. What is required is some new, global description of these systems.

 

A simple image will illustrate this point. Local disturbances propagate through a system and are normally assumed to dissipate themselves, becoming lost in the random fluctuations of the medium. But in a world in which events are correlated non-locally what may appear to be a vanishingly small random fluctuation may in fact be the manifestation of a global order. Extremely small perturbations, when correlated non-locally could, for example, have the effect of initiating an inwardly moving wave of disturbance which then interferes co-operatively and gives rise to a large local disturbance. This is the reversal of what is normally assumed, in which an effect spreads out and is dissipated. I am therefore suggesting that local events in such systems can have a non-local origin. More generally I propose that sensitive dynamical systems may be guided by non-local effects and are best understood using a non-local description. The possibility of non-local control in chaotic and sensitive systems is far-reaching.

 

There are a wide variety of systems in nature that have the characteristics of "chaos", or rather of complex, non-local order. In such systems rather than causes producing effects of a similar magnitude it may be more a matter of very tiny, but globally coordinated causes having a wide range of global effects. The size of the cause is no longer related to the magnitude of the effect, what is important is the overall form of that cause. Moreover what appear as "random fluctuations" may be highly significant aspects of global behavior. Life forms a particularly important subset of these globally complex systems. It may be valuable to consider its processes from a non-local perspective. Life "rides the wave" and lives on a delicate knife edge between order and chaos. Some form of non-local correlation of dynamics is, I would suggest, characteristic of living systems.

 

 

Fig 102: Hippocampal non-rhythmic synchronisation and phase precession (Eliav et al. 2018),

see also (Qasim et al. 2021) fig 78(4).

 

This monograph provides a good understanding of the relationship between conscious and sub- or un-conscious processing that arises from brain dynamics. This is not to put a physicalist gloss on it at all, since I accept subjective consciousness as primary, and without it I don't think the universe can become manifest. Conscious brain states consist of global phase angle modulated excitations interacting with more diverse excitations that are phase decoherent to a sufficient extent that these processes remain local and don't necessarily participate in the central "theatre" of conscious experience, but remain a resource that can and will do so if either the global dynamic changes or the local one becomes more coherent with the global. I see this view as productively useful, as it explains consciousness as a central arena having ready access to subconscious levels of processing. It allows edge-of-chaos dynamics to be phase tuned as well as harmonic oscillations, as exemplified in the bat hippocampus (Eliav et al. 2018) and in cortical phase relationships in psychedelic studies (Muthukumaraswamy et al. 2013, Tagliazucchi et al. 2016, Toker et al. 2022).

 

We now come to the scientific description of the super-conscious state. There is obviously a physical dimension to it: if I enter a meditative or entheogenic-contemplative moksha, I can happily understand and describe this neurophysiologically in terms of quietening of the default mode network and the relinquishing of the subject-environment polarity in favour of a unified state of ego annihilation. I can see it as a natural biophysical state that has become sensitively dependent on the brain's own inner dynamics, in a biologically natural way, by seeing the notion of the cosmic mind as a shared organismic state that emerges only in such dedicated conditions from the biota, meaning that the cosmic mind becomes manifest biologically. I don't need to articulate externalistic physical force fields to achieve this, because subjectivity is already built into the unified cosmology at the root and can fully utilise the known processes of cortical activity as a contextual boundary condition filter to realise qualia neurodynamically.

 

This kind of complementarity may be akin to a universal integral transform of the state of the entire quantum-entangled universe, or even the AdS/CFT correspondence, so that organismic consciousness is an encapsulated manifestation of the cosmic mind. Such a complementarity doesn't have to be conveyed by hypothetical externalistic constructs to support it, because it is cosmologically complementary to physical force fields.

 

We can then embark on the other half of the empirical discovery of reality with an open mind and discover for ourselves by mutual empirical experience the inner nature of cosmic reality, without placing outer restrictions upon a discovery process that we can all engage transformatively from within, because, in bringing forth what we have within us – i.e. foundational cosmic consciousness encapsulated in our mammalian biological brains – this can be experienced empirically in a discovery process without a priori theoretical or conceptual preconditions.

 

The approach of Poznanski and co-workers flushes away the baby of subjective conscious volition over the physical universe we all experience, in favour of the rule of a hidden teleology, no more convincing than Terry Deacon's approach.

 

In summary, here is my tentative position on how subjective consciousness interacts with the universe and causality in detail. This is a working hypothesis, not a proven conclusion, but I think it has a counter-intuitive twist that may explain everything.

 

Firstly in quantum mechanics, we have two apparent processes:

(a) von Neumann type 1: the causality-violating collapse of the wave function on quantum measurement.

(b) von Neumann type 2: the evolution of the wave function under a Schrödinger wave equation.

 

There are various versions of QM, from the Everett interpretation, where no collapse takes place (but this creates probability multiverses that we don't experience), to Wigner-type interpretations, where the conscious observer collapses the wave function, to decoherence, where collapse results from many interactions. In any event, conscious observers experience Schrödinger's cat either alive or dead in the real world, or so it seems, although we do witness superposition in laser light. Napoleon didn't win Waterloo, and we experience a line of history taking place partly as a result of our own actions, which I call historicity.

 

Symbiotic existential cosmology is agnostic about these QM differences, because it imputes primitive subjectivity to quanta, which could then also in principle act as 'observers'. But let us leave this aside and return to the established physics.

 

When we do a simple two-slit interference experiment one photon at a time, as Feynman marvelled at, the individual photons can end up literally anywhere the wave function amplitude is non-zero throughout space-time. If we make a phase space in which the amplitude is normalised, they can literally end up anywhere in this space with equal probability, just like a random variable. This is the "free will" of the individual quantum, which Conway & Kochen (2009) used to imply conscious free will.

 

However, when we repeat the experiment multiple times we discover a pattern: the photographic plate begins to develop bands where the photons end up, varying sinusoidally with the superimposed wave functions. This is an example of IID (independent and identically distributed) measurements.
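This convergence, individually "anywhere", collectively banded, is easy to simulate: sample single-photon landing positions from the normalised cos² fringe intensity and watch the histogram form bands. The slit geometry (fringe count d = 4, a 40-bin screen) is an arbitrary illustrative choice:

```python
import math, random

# Single photons land "anywhere" with probability given by the interference
# intensity I(x) proportional to cos^2(pi * d * x); repeated IID trials
# build up the fringe pattern. d (the fringe count) is arbitrary here.

def photon_position(d=4, rng=random):
    """Rejection-sample one landing position x in [0, 1) from cos^2 fringes."""
    while True:
        x = rng.random()
        if rng.random() < math.cos(math.pi * d * x) ** 2:
            return x

random.seed(42)
hits = [photon_position() for _ in range(20000)]

bins = [0] * 40
for x in hits:
    bins[int(x * 40)] += 1

# Bright bands appear where cos^2 peaks; dark bands where it vanishes.
for i, n in enumerate(bins):
    print(f"{i/40:.3f} {'#' * (n // 40)}")
```

Any single photon is unconstrained beyond the wave function's support, yet after twenty thousand trials the sinusoidal banding is unmistakable.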

 

This is how the quantum process converges to the classical in the Born interpretation at large quantum numbers, but IID is explicitly violated both in evolution, where each mutation that becomes fixed by selection induces a new context for a subsequent mutation, and in brain dynamics, where any quantum event fixed by altering subsequent neuronal activity also changes the context. Again, let's leave aside whether this is possible in the brain, although I will claim that the butterfly effect, stochastic resonance and global tipping points at critical decisions are ideal ground for it.

 

Let's look at a single quantum 'instance' again. We have already seen that, in the phase space, the quantum can end up anywhere at all. This is the purest form of free will we can possibly imagine, except that it is shaped by the wave function, so it has context, just like our personal conscious decisions.

 

What's the flip side of this? It's that the whole process was being guided by the wave function. Now we get to entanglement. In the ideal interference experiment we have prepared a pristine setup in which we have defined the wave function simply and precisely by the macroscopic apparatus, but in the real world there are a multitude of third-party interactions, and in the absence of collapse each of these introduces entanglements that modulate the wave function. In special relativistic quantum mechanics the wave function extends into the future as well as the past when a particle is emitted, as the transactional interpretation shows, so it has entangled "information" about the future which causal computation can't access through local Einsteinian causality, but which entangled conscious brain states must have, because evolution has conserved this edge-of-chaos consciousness from the eucaryote emergence on.

 

Indeed in the Everett interpretation all wave functions are part of the universal wave function forming an entangled history from alpha to Omega. So the flip side of quantum free will is the 'consciousness' the wave function possesses globally as a representation of potentially the entire history of reality encoded in the subtleties of the wave function. Thus we have a primitive model of quantum conscious free will.

 

Now to entanglement in detail. The results of Einstein, Podolsky, Rosen, Bell and Alain Aspect demonstrate that when two particles become entangled in a single wave function and we make independent measurements of both, we find that Bell's inequality is violated, ruling out local Einsteinian causality between the separated particles but remaining consistent with quantum mechanics.

 

We can do this e.g. with an excited calcium atom: because it has two outer electrons, it can transition to the ground state emitting a green and a yellow photon in opposite directions with net zero spin, and thus with complementary polarisations. And Aspect did this with time-varying analysers, which showed that the correlations persisted over space-like intervals that would appear to require signals travelling between the detectors faster than the speed of light.

 

The conclusion of the Bell results is that: (a) the polarisation statistics of each photon appear individually random, (b) when we match them up, whenever we measure one, the other instantaneously adopts the complementary polarisation, but (c) this entangled relationship can't be used to send information between the separated particles.
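The correlation statistics in (a)-(c) can be illustrated with a small Monte-Carlo sketch (a hedged illustration, not from the text): it assumes the standard quantum joint probability P(same outcome) = cos²(a − b) for polarisation-entangled photon pairs and estimates the CHSH correlator, which exceeds the classical local-causal bound of 2.

```python
# Sketch: CHSH correlator for polarisation-entangled photon pairs,
# assuming the quantum prediction P(same outcome) = cos^2(a - b).
# Angles and sample sizes are illustrative.
import math, random

random.seed(1)

def correlate(a, b, n=100_000):
    """Empirical correlation E(a, b) for analyser angles a, b (radians)."""
    p_same = math.cos(a - b) ** 2          # quantum joint statistics
    same = sum(1 for _ in range(n) if random.random() < p_same)
    return (2 * same - n) / n              # E = P(same) - P(different)

# Analyser angles (radians) that maximise the quantum CHSH value 2*sqrt(2)
a, a2, b, b2 = 0.0, math.pi / 4, math.pi / 8, 3 * math.pi / 8

S = correlate(a, b) - correlate(a, b2) + correlate(a2, b) + correlate(a2, b2)
print(f"CHSH S = {S:.3f} (classical bound 2, quantum limit {2*math.sqrt(2):.3f})")
```

Each individual detector stream is still a 50/50 coin toss, reflecting point (a): the violation only appears when the two records are matched pair by pair.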

 

There is another point here. Both the separate particles look like they are behaving randomly but there is a hidden process going on which is completely masked unless we sample the two together.

 

Now back to the supposedly causal universe. QM has two interposed processes in causal reality as we experience it:

(a) The Hamiltonian progression of the wave function under the Schrödinger equation.

(b) The apparently random projection of a superposition onto one of its eigenvectors in collapse.

 

This is what I call 'punctuated' causality, after 'punctuated equilibrium' in evolution. So we now have to ask: what is the cosmological source of this randomness? We already know that lots of deterministic processes can end up with pseudo- or quasi-random distributions. Computers use these to do random simulations, and classical deterministic chaotic processes have the property of ergodicity, converging to a space-filling 'thermodynamic' trajectory characteristic of stochastic systems. So there is a huge swathe of complex processes that could underlie quantum uncertainty, distributing in the limit to be quasi-random and masking any underlying 'hidden variable' theory.
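A minimal sketch of the point about deterministic quasi-randomness: the logistic map x → 4x(1 − x) is fully deterministic yet ergodic, and a 'coin flip' read off its orbit is statistically indistinguishable from a fair coin (the seed and iteration count are illustrative choices, not from the text):

```python
# Sketch: a deterministic chaotic map whose orbit is statistically
# quasi-random. The logistic map at r = 4 is ergodic, and the partition
# x < 0.5 carries exactly half the invariant measure.
x = 0.123456789        # arbitrary illustrative seed
heads = 0
N = 100_000
for _ in range(N):
    x = 4.0 * x * (1.0 - x)   # deterministic update, no randomness anywhere
    heads += x < 0.5          # symbolic "coin flip" read off the orbit
print(f"fraction of heads: {heads / N:.4f}")
```

The printed fraction hovers near 0.5, just as for a genuinely stochastic coin, even though every step is exactly computable, which is the sense in which quasi-random processes can mask a hidden deterministic substrate.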

 

Now let's go back to the supposed causal closure of the universe in the brain context. In the quantum universe, notwithstanding the ebb and flow over Libet’s claim and its later refutations, it is not possible to make any practical empirical test to confirm causal closure of the brain, so the onus, which lies on science to demonstrate it, cannot be met.

 

Causal closure might appear to make superficial sense because, unless the brain IS causally closed, we could have both mind and brain dynamics affecting future brain states, resulting in causal conflict. But if mind's influence is confined only to circumstances where the brain dynamics is critically poised in uncertainty, because it is representing an existential crisis in which the causally-induced alternatives are finely balanced and the global brain state (in mutual phase-coherence feedback with action potentials modulated by stochastic resonance) is at a sensitively dependent unstable tipping point, then this corresponds quite closely to a quantum measurement of the brain's own instability.

 

So we ask this question: "What if the only thing subjective consciousness has the capacity to do is to perform quantum measurements on its own brain's unstable states? Does this invoke any form of useful mind-matter interactionism, or is it just futile randomness?"

 

Here I think that it has a huge capacity to be exactly what we are all looking for, because the universe as we know it is paradoxically 'punctuated causality' – literally only half causal and half uncertain in structural terms. So although we don't expect quantum measurement to force any outcomes, to cause the cat to be alive rather than dead, it does have a profound effect on the evolution of the universe, turning it from a multiverse into one in which Napoleon didn't win the Battle of Waterloo, and in which Nelson turned his blind eye to the Danish fleet in the Battle of Copenhagen.

 

Of course some of these events come down to hidden physically causal factors, like the positioning of troops, or the silly height of the Spanish ships in the Armada's defeat, but the key role of conscious experience, as Michael Graziano's attention schema theory points out, is in split-second intuitive life-or-death decisions, when the shark strikes. We know the brain is also a predictive perceptual computer, as shown in the "flash-lag" illusion, where it sometimes makes predictive errors, but all this comes down in the end to an intensely conscious split-second real-time decision which is going to alter the course of history in exactly the way a quantum measurement does.

 

Note also that this process emerged in pretty complete form with the founding universal eucaryote ancestor, complete with membrane action potentials, edge of chaos excitability on a global basis, the genes to enable synapse formation and the neurotransmitters as social signalling molecules, and as I noted, the brain is an intimately coupled society of such cells (King 1978, Edwards 2003) operating in essentially the same way.

 

Human neurons, and by extrapolation neuroglia, are extremely sophisticated single-celled animals in their own right, nurtured by their social environment. If you look at a pyramidal neuron spanning the cortical layers (fig 122), it is modulated by all the principal neurotransmitters, from GABA through serotonin, noradrenaline and even dopamine, and by the inputs of interneurons, as well as fast ionotropic receptor fluctuations of NMDA and AMPA. Its axonal output is thus an expression of multiple dynamic forces. Neurons are not simply additive McCulloch-Pitts units, and that is why cerebral neurodynamics is the edge-of-chaos phase-tuned dynamic it is.

 

This means that the brain is conscious at the cellular level, and the binding problem is all about how the neuronal and glial excitations are bound together in the overall brain dynamic, which is in phase-coupled feedback between graded cortical potentials in the local EEG and individual action potentials. So we come back to meaning, and this means the meaning is in the purview of the conscious experience each cell has in its neurosystem context. I see this as a true conscious meaning, but it is a wildly different meaning from the meaning we experience, because neurosystem meaning is a coherently bound dynamic of cellular meanings, and cellular meaning is how the conscious excitability of an individual neuron interprets its intimately bound social environment. This is a cellular version of Freeman dynamics, and it's very hard for us to make meaning of it at our coherently bound neurosystem level, but in it lies the solution to the entire dilemma of what consciousness is, the binding problem, and how meaning arises in the multi-celled conscious organism.

 

Taken to a rather poetic conclusion, we are walking inflated quanta exercising our free will in just the way each individual photon does, except we are doing it in a supremely entangled way that brings the conscious moment into focus in every detail of our neurodynamic internal model of reality representing the mysterious quantum world around us and the extreme vagaries and computational intractability of the open environment problem, which is worse than the travelling salesman problem because there isn't necessarily only one outcome, but multiple threads, all or any of which may result in annihilation or ultimate survival. You could even speculate that our entire incarnation, from birth to death, is a single quantum measurement, particularly when looked at sideways on in space-time, as the Feynman representation tends to do!

 

How the Mind and Brain Influence One Another

 

To summarise and complete, here is a short discussion with the psychophysicist Stanley Klein:

 

Stan: There has been a lot of evidence that mind is being done in the brain. There are indeed a number of folks that think that mind is separate from the brain. Of course the brain is getting input from neurons outside of the brain like from our eyes and ears, etc.  I look forward to hearing evidence that points to mind coming from outside of our brain.

 

Chris: To suggest that mind is outside the brain, if you are a physicalist, means you are asking for the subjective mind to physically exist somewhere outside the brain, i.e. in the toes, or round the corner somewhere. You know that’s not how it works. Mind is inside and the brain is outside. Both are happening, depending on how we choose the discourse, i.e. whether we talk about our experiences, or about the physical world we are perceiving.

 

And Stan, just what is the extensive evidence that mind is being done in the brain? There is no escaping that any brain research on a conscious individual mind happens when an experimental recording is made of brain dynamics, e.g. Freeman dynamics. But this isn’t mind being done by the brain – just accompanying brain states associated with conscious activities.

 

Stan Klein:  However, I don't fully understand what you said about the mind-brain connection. Could you provide more details of what you have in mind?

 

Chris: The trick is in how the brain uses dynamical instability computationally. It’s attempting to form a causal model, but the open environment is computationally intractable and indeterminate. The single-celled eucaryote learned to use edge-of-chaos membrane excitation for predictive sentience, by being sensitively dependent on external quantum modes to generate a physical awareness of its environment. As the amoebo-flagellates evolved, they became genetically adapted to take advantage of this predictively, gaining perpetual survival opportunities. I see this as being non-IID at the cellular level, so it's a genuine quantum process at the edge of chaos that aids survival, because the membrane excitation feedback process under genetic control becomes predictive in ways that resemble weak quantum measurements and expand instantaneous time into a quantum of the present.

 

In a complex brain, each of the neurons is carrying out an exceedingly complex social version of this, which grew out of neurotransmitters as social signalling molecules, so the social signalling has become a wired form of synaptic signalling driven at high speed through axonal and dendritic connections. The result is a Freeman-dynamics brain at the edge of chaos that seamlessly uses a combination of self-organised criticality to resolve uncertainties and quantum measurement through the wave-phase coherence of neuronally networked populations. Because it evolved from adaptive unstable cellular consciousness, it seamlessly integrates computational and quantum predictive dynamics.

 

We experience this as "the present". This is just an on-the-fly description of neurodynamics that passes far more sophistication onto the neurons themselves, as highly evolved human cells with extremely delicate social connections, simultaneously using all the social signalling molecules from glutamate through GABA, serotonin and so on. But the key to understanding it is not brain dynamics alone but brain development, and the roles the neurons of each neurotransmitter type have in mutually organising the social network of synapses.

 

Stan:  Many thanks for your detailed discussion of many details of how the brain does its thing. You seem to be agreeing with me that the brain is where the action is for producing the mind. As I've said many times, present science doesn't yet fully understand how the brain does it. Do you agree with the above, or do you think the brain isn't producing mind?

 

Chris: No I don’t agree Stan!  The brain and mind are producing one another. The brain is not causally closed and the mind is transcausally volitional. By transcausal I mean the result is a product of a transactional process between past and future underlying quantum reality and wave function collapse.

 

The brain, as a developmentally and dynamically structured set of conscious neurons, is providing a partially open dynamical computational context in which the mind fills in the unstable parts of the quantum dynamic. This is being driven by tightly-coupled cellular sentience on the part of neurons and their complementary neuroglia. This means that, from the mind's point of view, the brain is a boundary condition acting as a filter on the way the mind can volitionally act. That’s why we look out at the conscious experiential world we perceive as individuals. By act I do not mean causally interact in the sense of interactional dualism, where we can define functional mechanisms on both sides. Nor is it simply dual-aspect monism, because the complementary processes are very different, with the subjective able to be experienced but neither observed objectively nor decomposed.

 

The nature of the mind is related to and complementary to the underlying “volitional” process that determines wave function collapse. This means again that mind is complementary to the universe as a whole. We appear to have our own mind, not because there are many minds, but because the many brains impose varying boundary constraints, as encapsulations on the subjective condition.

 

Stan: Chris, you said: "By transcausal I mean the result is a product of a transactional process between past and future underlying quantum reality and wave function collapse." Why did you include "future"? Present science doesn't allow information to go backwards in time or faster than light. But strangely enough some influences can go faster than light. Wonderfully crazy.

 

Chris: The Feynman description and the transactional interpretation both involve hand-shaking between past and future, through the special relativistic wave propagator and the offer and confirmation waves. All quantum processes are time reversible. Weak quantum measurement is an averaged-out process which involves retrodiction post hoc. It’s still partly IID because of the repeated measurements needed to establish the 'Bohmian' trajectories. But cellular sentience is a feedback loop at the quantum level that isn't trying to make a classical prediction, just to exert anticipatory volition, escaping the snake strike, so it's basically an inflated version of context-dependent wave function collapse, including advanced influences from absorbers. I’m not being too precise about this because SEC is agnostic as to quantum interpretations.

 

You are happy with entanglement involving instantaneous definition of the state of the remote particle, when the nearby one has its polarisation measured because this doesn't involve a classical signal exchanged faster than light. It simply reveals a hidden correlation that now has become determinate. But this actually occurs because the advanced waves from both detectors arrive at the source of the entangled photons and the changed result is referred back in the retarded wave to instantaneously define the remote particle’s orientation. So that is the way it all works.

 

Real positive energy determines the arrow of time and the direction of classical causality, but collapse of the wave function punctuates this. So if you are talking about real positive energy particles, forward causality is true, but if you are talking about collapse of the wave function it isn’t. For example virtual particles don't conserve mass-energy and don’t respect the arrow of time because they are not real positive energy so they interconnect past and future symmetrically in precisely determining the electromagnetic force in QED.

 

Because mind is complementary to quantum uncertainty, it is complementary to the transactional milieu. This doesn't mean it can send classical signals from the future to the past, but it means that its volition can and does reflect these ‘virtual’ influences. That’s why we intuitively feel we have active agency over the universe, veridically perceive this to be the case, and perceive that we are acting predictively through the conscious presence of mind, while also, where possible, plotting out the next move cognitively. It may seem counter-intuitive that standing in for the apparent randomness of uncertainty is all there is to volition, but that’s enough to completely determine, and to utterly change, the course of history, as we know.

 

Stan: Chris you said: "As you know I also have a relationship with Psi. It’s just a crazy knack that I seem to possess"

If you can demonstrate your ability to do Psi, you can get a Nobel prize! Can you give an example of your Psi ability?

 

Chris: If we don’t own our own consciousness, I don’t think it right to say we possess Psi, but we are responsive to it and share its innate capacity. I simply said it was a knack I possessed. That knack is open-mindedness to uncanny correspondences in the flow of ongoing experience.

 

I am happy with Radin, Bancel & Delorme (2021) producing research supporting an influence of mind on entanglement, and with other Psi results that may have statistical significance. But these are attempts to produce verifiable results, indicating an effect over repeated instances, which makes them statistically significant. This tends to be like an IID (independent identically distributed) quantum experiment that converges to the Born probability interpretation, but it depends on the exact nature of the experimental process undergone.
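The IID convergence to the Born probability mentioned here can be sketched in a few lines (an illustrative simulation with an assumed qubit state, not from the text): repeated identical measurements of cos(θ)|0⟩ + sin(θ)|1⟩ yield frequencies that settle onto the Born value cos²θ.

```python
# Sketch: repeated IID measurements converge to the Born probability.
# The qubit state cos(theta)|0> + sin(theta)|1> and angle are illustrative.
import math, random

random.seed(2)
theta = math.pi / 5
p_born = math.cos(theta) ** 2            # Born probability of outcome |0>

for n in (10, 1000, 100_000):
    hits = sum(1 for _ in range(n) if random.random() < p_born)
    print(f"n={n:>6}  frequency={hits / n:.4f}  Born={p_born:.4f}")
```

The small-n runs scatter widely, which is why singular, unrepeatable events carry no such statistical guarantee: the convergence is a property of the repeated identical trials, not of any one instance.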

 

I really like the simple concise treatment of Gallego & Dakić (2021), showing that, in non-IID processes, the quantum description can prevail in macroscopic situations. So let me try to explain where I see what people call Psi coming from. I am particularly interested in causality-violating conscious experiences that involve time and implicit anticipation, particularly under shifting contexts in real life, where each event changes the context, so there is no IID. This strikes them all out of the realm of scientific proof, because we can't apply statistical analysis to non-IID events, as they are adventitious, as we know from evolutionary mutation, but there is a reason why this is likely to be critical.

 

We accept that the brain has evolved to be a predictor of environmental crises of survival and opportunities for food and reproduction. We know the perceptual brain has evolved to anticipate emerging situations, evidenced by anomalies such as the flash-lag illusion and mainstream ideas such as Graziano’s attention schema theory (AST). This makes excellent evolutionary sense and explains how the brain ensures the survival of the organism, through massively parallel computing that doesn’t stall in an exponential runaway, like serial computers facing the travelling salesman problem. So, like Anil Seth, we can describe the brain as a kind of prediction machine, using consciousness efficiently to do its work in real time, even if this is a kind of hallucination for efficiency's sake.

 

Now we come to the hard part. Evolution has not only selected massively parallel real-time predictive machines, but subjectively conscious ones. So the same argument has to apply to subjective consciousness. So we have to address the question of why evolution has retained subjective consciousness all the way to mammals, primates and humanity, apparently universally across the metazoa, if subjective consciousness in itself has no predictive power over and above objective brain function.

 

The hard problem exists because (1) all our experience, including all knowledge of the objective physical universe, is derived subjectively, (2) subjectivity is categorically inconsistent with pure objectivity and cannot be finessed into it by any Zeno's-paradox easy-problem approach, and (3) edge-of-chaos brain dynamics, combined with phase-coherence processing, means the physical brain is a self-critically unstable system, uncertainly poised in the very conscious states we are considering, making proof of causal closure impossible.

 

So subjective consciousness must have a critical predictive advantage for it to be universally retained as central to brain function sitting right in the centre of the cyclone of edge of chaos.

 

When we walk down the street, we can consider that we are highly determined by our circumstances, going to the supermarket for example 50 m down our side street from here. We may be thinking some very obvious thoughts like worrying about Ukraine, but when we turn the corner just about anything can occur, including nearly getting run over.

 

So about half our active lives are spent dealing with defined causal aspects, and the other half with accidental things that come in from left field. This is living in the quantum uncertain conscious universe. The causal circumstances are the Schrödinger equation part, and the accident waiting to happen is cat-paradox collapse of the wave function. Because our brains transcend IID, both processes are playing out seamlessly together. Brain computational predictivity is there to deal with the Schrödinger part, and subjective consciousness to deal with the causality-violating quantum collapse part of daily existence.

 

To be able to be predictive, subjective consciousness needs to be reflected in the sort of interactions described in the transactional interpretation and the special relativistic Feynman formulation, where, tellingly, even the exactly determined values of the electron magnetic moment are found with stunning accuracy by integrating over both past and future components of the wave functions. The transactional approach, which deals with real particle exchanges, gives an intuitive picture of this implicit predictivity, but it all comes down to how a many-to-many transaction collapses into a single real particle exchange, and whichever way we look at this, it is a collapse from potential past and future states to a single set of pairs of these, so it is not moving in either direction in space-time but across it transversely. As I noted above, I don't have the final answer on this, and suspect that no causal description is possible (although I call it transcausality), because to give one would be to commit a causality-violating space-time process to causal explanation, so it may forever be hidden in the entanglement, which extends to the wave function of the universe as a whole (Hartle & Hawking 1983). However, in this picture, we as subjectively conscious beings are INSIDE this entangled phenomenon and are thus intuitively conscious of it in our changing circumstances.

 

This means that our circumstances, which the Eastern wisdom traditions call karma (although with a more moral tone), are a product of cosmological entanglement – in the mysterious hidden variable problem. That's all well and good, but we know about Bayes’ theorem, so many people discount subjective reports of coincidence, or synchronicity, as being false predictions resulting from selecting only the verifying cases and ignoring the contradicting ones. The trouble with this argument is essentially that it only works with IID processes, which we know converge to the Born probability. When we are dealing with sheer idiosyncrasy, we are dealing with non-IID quantum "science fiction", but subjective conscious volition is anything BUT science fiction. If anything, it is neuroscience that is fiction, in denying volition. But the key point here is that we can't use Bayes’ theorem estimates on singular non-IID events.

 

So what is the answer to your Nobel prize-winning question? Well it's this. If we allow ourselves to entertain the primacy of consciousness, we enable our minds (and brains) to enter into a heightened form of intuitive awareness, where some of our attentive, perceptual and cognitive efforts go into actually looking at, and entering into, the flux of experience. Put in a very clunky way by meditators, who are far too disciplined in their one-pointed focus to appreciate the full dimensions, unless they completely abandon themselves to the abyss, this is called mindfulness. That is, allowing our mind to simply resonate with nature in its vast space-time ramifications around us, as animals do, to make sure they can actually hear the hiss of the snake strike over the swishing of the long grasses and the chirping cicadas.

 

So what about my Psi? Sometimes I have really striking qualitative experiences which have the character not just of everyday coincidence, but of something else at the bottom of the billabong, as the Australians say. Precognitive dreams that are registered and then come veridically true are examples. I hated writing a song that later proved to have tragic precognitive echoes of specific details of 9-11, which can also be explained to some degree given the subject matter, but the qualitative details remain uncanny to this day, although similar precognitive results are cited by Bem (2012) and Bem et al. (2016). These results have been analysed by Rabeyron (2020).

 

But why would we spend time speculating on this? Given that subjective consciousness is transformative over the physical universe, through the efficacy of volition, we don't have time to dwell on magic tricks if we are to save the world from a perilous fate. We need to do the good thing for life as a whole, while we are here and time is short. But Brahman accompanies me as I labour, and the great virtue of entheogens is that the whack they give under the right circumstances can last a lifetime, or at least a seven-year fast, because, when you fall outside the bubble of perception, you are never completely in the closed causality box any longer.

 

But there is more to this. The circumstances of the world may look like hugely deterministic laws of mass action, leaving us helpless. Empires rise and fall, and their huge armies with them, in clashes of the titans, as if everything is brute force, of tectonic fire and tsunamic flood, but consciousness and the human conscious world view are both transformative and critically unstable. When Nelson turned his blind eye to the telescope in the Battle of Copenhagen, an alternative history was created. Perceived realities are in flux. The Weltanschauung of Immortality is an infectious concept with a pandemic R0 very delicately poised at 1, just reproducing itself without extinction. One tiny shift and the entire flux of history can be transformed back to immortal Paradise. So the entire stakes are caught in a single cat paradox experiment that encompasses all of us.

 

Stan Klein: I’m pleased that we are in general agreement with standard science. One topic where there might be differences is on the topic of psychic phenomena. Do you think that telepathy might be possible, in violation of the presently known 18 wavicles?

 

Chris: My response is sheer speculation, off the cuff. If Bob tries to run a telepathy experiment and thinks a specific planned thought and ‘sends' it to Alice, and something enters Alice’s head out of the blue, this is very hard to distinguish from a quantum uncertain 'cat paradox’ event, and Alice's brain would have had to be in a highly uncertain dynamical state, because otherwise the causal circumstances of her brain function are forcing her brain to see Bob’s thought. If that were the case, it would take a strong wavicle interaction we could probably measure or falsify. But I don't think that’s the case, even if Psi experimenters can get a sigma out of their experiments.

 

But that’s not how I think this works. I think the possibility is that conscious brains use quantum entanglement and can sometimes share entangled states. This again is off the cuff, but I would see brain states as dynamical quasi-particles corresponding to phase-coupled global excitations. Now the actual more common context is not an experiment, but two people who know one another – say it’s me and my mum, because this happened a lot when she was alive. Because we know one another (except she’s gone), there are certain dynamical brain states that can become ‘engrammed’ in memory, like my sense of my mother’s presence and her sense of me, her son, that form a kind of familial collective consciousness.

 

So then I suddenly get the idea to call my mum, because I haven’t done so for a while, and that's this quasi-exciton emerging in a non-IID way out of the ‘engram’ milieu, and I call her, and her phone rings, and that sets off the other entangled part of her quasi-exciton that got entangled with mine last time we spoke, and of course a call from me is on the cards too, from times before that. So I say “Hi” and she says “I was just thinking of you!” This is also the Aboriginal dream time speaking. They (and twins) suddenly notice something is amiss in their psyche and realise their uncle, or sibling, has passed away somewhere far off – although Bayes' theorem might claim she was always thinking of me!

 

Modern culture is very bad at this kind of thing because we either think flat stick in a mechanistic way or we meditate in a controlled mindful way and never really let the winds of uncertainty pass through our consciousness any more. So both spiritual practices and practical realities can block our sensibilities. The key point here is how are we going to disprove this using the standard model? It’s just as bad as the hard problem. If quantum anticipation is possible and conscious volition is real, all these other possibilities are on the scientific table top of reality, as Carlos Castaneda put it, at the far edge of brute certainty!

 

Consciousness and Wave Function Collapse

 

An important feature of our description of both the Schrödinger wave equation determining particle probabilities and the collapse process resolving them into eigenvalues is how particle states are created and annihilated. For example, a Geiger counter registers particle annihilation in a scintillation count, and in a quantum interference experiment the photon is created in a hot filament, or from a voltage drop in a light-emitting diode, traverses the slits as a wave, and is then annihilated as a particle when an atom on the photo-plate becomes excited. These are described effectively as discrete transitions in multi-particle states and quantum field theories.

 

If we start with the Schrödinger equation for the potential plus kinetic energy of the one-dimensional time-independent quantum harmonic oscillator:

 

$-\dfrac{\hbar^{2}}{2m}\dfrac{d^{2}\psi}{dx^{2}}+\dfrac{1}{2}m\omega^{2}x^{2}\psi=E\psi$

 

and change variables, by letting $\xi=\sqrt{m\omega/\hbar}\,x$, we have  $\dfrac{\hbar\omega}{2}\left(\xi^{2}-\dfrac{d^{2}}{d\xi^{2}}\right)\psi=E\psi$.

But we can confirm that  $\xi^{2}-\dfrac{d^{2}}{d\xi^{2}}=\left(\xi-\dfrac{d}{d\xi}\right)\left(\xi+\dfrac{d}{d\xi}\right)+1$,  so if we let

 

$a=\dfrac{1}{\sqrt{2}}\left(\xi+\dfrac{d}{d\xi}\right)$  and  $a^{\dagger}=\dfrac{1}{\sqrt{2}}\left(\xi-\dfrac{d}{d\xi}\right)$

we have  $H=\hbar\omega\left(a^{\dagger}a+\tfrac{1}{2}\right)$.  So  $E_{n}=\hbar\omega\left(n+\tfrac{1}{2}\right),\ n=0,1,2,\ldots$

 

In quantum field theories and many-body problems, one works with the creation and annihilation operators $a^{\dagger}$ and $a$ of quantum states, for both bosons and fermions. In second quantisation, with occupation numbers for indistinguishable states, the occupation number $n_{i}$ for fermions can only be 0 or 1, due to the Pauli exclusion principle, while for bosons it can be any non-negative integer. This gives rise to the bosonic photon fields of quantum electrodynamics and the fermionic Dirac fields of electrons and other fermions.

 

These operators become:

Bosonic:  $[a_{i},a_{j}^{\dagger}]=\delta_{ij},\quad [a_{i},a_{j}]=[a_{i}^{\dagger},a_{j}^{\dagger}]=0$

Fermionic:  $\{a_{i},a_{j}^{\dagger}\}=\delta_{ij},\quad \{a_{i},a_{j}\}=\{a_{i}^{\dagger},a_{j}^{\dagger}\}=0$

 

where $\delta_{ij}$ is the Kronecker delta, $[g,h]=gh-hg$ is the commutator and $\{g,h\}=gh+hg$ is the anti-commutator.
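As a numerical sanity check on the bosonic relation (a sketch in a truncated Fock basis, not part of the original text), the standard matrix representation a|n⟩ = √n |n−1⟩ reproduces [a, a†] = 1 everywhere except at the truncation edge of the finite basis:

```python
# Sketch: bosonic ladder operators in a truncated N-level Fock basis.
# a|n> = sqrt(n)|n-1>, a_dagger is its transpose (real entries).
N = 8
a = [[0.0] * N for _ in range(N)]
for n in range(1, N):
    a[n - 1][n] = n ** 0.5

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(N)) for j in range(N)]
            for i in range(N)]

ad = [list(row) for row in zip(*a)]          # creation operator a_dagger
aad, ada = matmul(a, ad), matmul(ad, a)
comm = [[aad[i][j] - ada[i][j] for j in range(N)] for i in range(N)]
print([round(comm[i][i]) for i in range(N)])  # identity, bar the last level
```

The diagonal reads 1 on the first N−1 levels; the final entry −(N−1) is purely an artifact of cutting the infinite Fock space off at N states.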

 

In quantum electrodynamics, fig 71(e), the field is described as a fluctuating sequence of virtual particle states bounded by the incident and emergent real particles. Applying energy to the field can result in some of these virtual particles becoming real, as in radio emissions from transmission towers. In this description the field and its network of real and virtual particles are clear, but the wave functions are hidden in the Green’s function propagators, which determine the probabilities governing the virtual particle interactions, as described through creation and annihilation operators.

One way to do quantum mechanics is to calculate a wave function and operate on it with quantum operators. Another way is to directly consider amplitudes for a given process, such as ‘the amplitude that my particle starts at point $x$ at a time $t$ and ends up at point $x'$ at time $t'$’. This amplitude is known as a propagator. Propagators represent an alternative to wave functions that enables us to extract all of the same information and more.

 

The time-dependent equation that governs the change of a wave function $\psi$ is the Schrödinger equation $i\hbar\,\partial\psi/\partial t = \hat{H}\psi$. Each Green’s function takes a wave function at some time and place $(x, t)$ and evolves it to another time and place $(x', t')$. We integrate as follows to take into account the superimposed path-integral effects of the Green’s functions: $\psi(x', t') = \int G(x', t'; x, t)\,\psi(x, t)\,dx$. This means that the wave functions are still in the process, but being used to calculate the ongoing particle amplitudes at all space-time points.
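This propagator picture can be mimicked numerically (an illustrative sketch, not from the text; ℏ = m = 1 and the grid parameters are arbitrary choices): on a spatial grid the Green's function becomes a matrix $U = e^{-iH\,dt/\hbar}$, and the evolution integral becomes a matrix–vector product applied step by step:

```python
import numpy as np

# Discretised free-particle propagator: evolving ψ by one step is the
# integral ψ(x', t+dt) = Σ_x G(x', x) ψ(x, t) dx, i.e. a matrix product.
hbar, m, dx, dt, n = 1.0, 1.0, 0.1, 0.01, 400
x = (np.arange(n) - n / 2) * dx

# kinetic Hamiltonian via the standard second-difference Laplacian
lap = (np.diag(np.full(n - 1, 1.0), -1) - 2 * np.eye(n)
       + np.diag(np.full(n - 1, 1.0), 1)) / dx**2
H = -hbar**2 / (2 * m) * lap

# one unitary step U = exp(-iH dt/ħ), built from the eigenbasis of H
E, V = np.linalg.eigh(H)
U = V @ np.diag(np.exp(-1j * E * dt / hbar)) @ V.conj().T

psi = np.exp(-x**2).astype(complex)          # Gaussian wave packet
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)  # normalise

width0 = np.sum(x**2 * np.abs(psi)**2) * dx
for _ in range(200):                         # apply the propagator repeatedly
    psi = U @ psi
width = np.sum(x**2 * np.abs(psi)**2) * dx

print(round(np.sum(np.abs(psi)**2) * dx, 6))  # norm preserved → 1.0
print(width > width0)                          # wave spreading → True
```

The norm stays fixed because $U$ is unitary, while the growing width is exactly the "wave spreading" of position uncertainty discussed below.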

 

Interactions, such as the absorption of a photon, e.g. by an atom on the photographic plate, are described as discrete transformations, and the linear Schrödinger wave equations are just rules for calculating probabilities of particle interactions. Thus there is no actual wave function description of the entire quantum interaction in the field description, as in the transactional collapse in fig 73, where the offer wave is linear and spreading with respect to the emitter and the confirmation wave is linear with respect to the absorber.

 

A central question in the universe is just how much collapse takes place independent of conscious observation of it. The standard way of dealing with collapse, without requiring consciousness, is to cite decoherence through interaction with third-party quanta. Zurek (1991) initially proposed a simple exponential decay of the off-diagonal elements as a quasi-explanation for the loss of coherence into classical eigenfunctions, and has since elaborated this into quantum Darwinism via more robust states (Zurek 2009).

 

The trouble with this is that, as described in quantum computing scenarios and in the Feynman notation (via the Green’s function propagator), third-party interactions are still treated as effective wave function entanglements, which can deepen rather than solve the problem. In Feynman notation, for example, particle scattering is the same diagram as electron–positron creation and annihilation, and annihilation IS treated as wave function collapse via the discrete operator involved. This means that any interaction with third-party quanta can be interpreted either as (1) collapsing the wave function or (2) introducing multi-particle entanglements, as in GHZ and W-states.

 

If we take a decoherence view of the cat experiment, a single radioactive particle quantum tunnels as a wave out of the nucleus and then triggers a Geiger counter by interacting as a particle to induce charges that can be picked up and used to classically break a flask of HCN, killing the cat, so that when we open the box it is dead. This is the chain that von Neumann is contemplating in his sweeping spread of possibilities of where “collapse” could take place – anywhere from the first particle interaction to the consciousness of the observer opening the box and seeing the cat.

 

The nub of the problem is that it is only in our conscious view of the cat experiment, or any other quantum measurement experiment, that we see the projection onto eigenstates occurring in our particle-dependent “classical” view of the conscious universe. We can imagine that the photons in the Sun's light come from collapsed wave functions, and we can imagine that cat-paradox collapse happens the moment the radioactive particle causes a charge shower in the Geiger chamber, but we don't actually know. Speculation is not physical science! So before intelligent life evolved, the entire universe could have remained wildly entangled without there being any contradiction. That’s what the Everett interpretation and its seamless undetectable branching confirms. The same with the Geiger counter states.

 

Now we can get around this by asserting that decoherence is a primary source of wave function collapse OR multi-entanglement, but we really can't tell the difference until we open the box. If we take Zurek's decaying off-diagonal elements (fig 71c) and examine this in terms of multi-particle entanglement, the original two-particle entanglement does become degraded by further entanglements, due to extraneous added correlations confounding the simplicity of the original entangled state towards classicality, so the entangled conclusions also look like collapse to an eigenstate.
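This decay of off-diagonal elements can be made concrete in a Zurek-style toy model (an illustrative sketch, not from the text; the angle θ and qubit counts are arbitrary choices): a system qubit entangles with k environment qubits whose "records" of it overlap imperfectly, and its reduced density matrix coherence shrinks as $\cos^k\theta$, looking ever more like collapse to an eigenstate:

```python
import numpy as np

def kron_all(states):
    out = np.array([1.0 + 0j])
    for s in states:
        out = np.kron(out, s)
    return out

theta = 0.6
e0 = np.array([1.0, 0.0])                       # environment record of |0>
e1 = np.array([np.cos(theta), np.sin(theta)])   # imperfect record of |1>, <e0|e1> = cos θ

for k in range(5):
    # |Ψ> = (|0>|e0...e0> + |1>|e1...e1>)/√2 on 1 system + k environment qubits
    branch0 = np.kron(np.array([1, 0]), kron_all([e0] * k))
    branch1 = np.kron(np.array([0, 1]), kron_all([e1] * k))
    psi = (branch0 + branch1) / np.sqrt(2)

    # reduced density matrix of the system: trace out the environment
    rho = np.outer(psi, psi.conj()).reshape(2, 2**k, 2, 2**k)
    rho_sys = np.einsum('aibi->ab', rho)
    print(k, round(abs(rho_sys[0, 1]), 4))  # coherence decays as cos(θ)^k / 2
```

No collapse is put in by hand; the entanglement alone makes the system look classical, which is exactly the ambiguity the text describes.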

 

Again this doesn't mean that consciousness is the only thing that can lead to collapse, because decoherence is setting it up smoothly, if in fits and starts, all the time through multi-particle “entanglements”. Hence other quanta and macroscopic systems can act as “observers” too. This tallies very neatly with macroscopic instruments precipitating the process, but it leaves open an undecidable question – just how collapsed is the physical universe independent of our perceptions of it? The only thing we are certain of is that the cat, to our conscious perception, looks vivaciously alive or dead as a doornail, as do the bushes waving in the wind as we walk down the street and the unexpected rain shower breaking around us. That’s right where the cookie crumbles for physics.

 

All Randomness arises ultimately from Quantum Uncertainty

 

Randomness has been described as having two sources: one epistemic, concerning our state of knowledge of a system, and the other ontic, concerning the actual physics of reality. The ontic form is often supposed to consist of two kinds – quantum uncertainty and the assumed randomness of molecular kinetic processes – but these are actually all sourced in quantum uncertainty.

  

If we think of a chamber filled with helium atoms and consider one atom in the chamber, viewed classically this is 3-D billiards, and we know multi-body billiards is chaotic because small differences in the position of any ball colliding with another cause larger deviations in the outgoing trajectories. If we view this quantum mechanically, it is a 3-D interference experiment in which the apparatus is all the other atoms and the chamber itself. Suppose we release a single helium atom through a very small aperture at time zero. As it proceeds through the chamber, its position becomes indeterminate through wave spreading in the same way a photon's does.

 

Fig: Molecular interference demonstrated for three large molecules (Gerlich et al. 2011).

 

This is interesting and requires further thought because it is a complex system with a lot of bound particles, but the experiments on large molecules show clearly that this is taking place. This means that successive collisions result in exponentially increasing uncertainty of position and indeterminacy of the trajectories, so the entire concept of the atoms as “particles” having some other kind of randomness is derived from chaotic amplification of the uncertainty of position of each of the atoms in the chamber. This effectively means that all the perceived randomness in the kinetic process was derived originally from positional quantum uncertainty, amplified by the chaotic boundary conditions of the interacting atoms.
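A minimal numerical caricature of this chaotic amplification (illustrative only; the logistic map stands in for molecular billiards, and the 10⁻¹⁵ offset stands in for a quantum-scale positional uncertainty): two trajectories that start microscopically close diverge to macroscopic differences within a few dozen iterations:

```python
import numpy as np

# Chaotic amplification: two trajectories of the logistic map x → 4x(1−x),
# separated initially by a "quantum-scale" uncertainty of 1e-15, diverge
# to order-one differences after a few dozen iterations.
x, y = 0.3, 0.3 + 1e-15
sep = []
for _ in range(60):
    x, y = 4 * x * (1 - x), 4 * y * (1 - y)
    sep.append(abs(x - y))

print(sep[0] < 1e-13)    # still microscopic at first → True
print(max(sep) > 0.1)    # macroscopic divergence later → True
```

The separation roughly doubles each step (Lyapunov exponent ln 2), so a 10⁻¹⁵ uncertainty saturates to order one in about 50 steps, the same inflation the text attributes to successive atomic collisions.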

 

As far as I can see, this process extrapolates all the way into real life, where we walk around the corner and nearly get run over on the way to the supermarket, because all these uncertainties in life, although we think the universe looks “classical”, are just larger, more complicated instances of unmitigated amplified quantum uncertainty – obviously including mutation, ageing, cancer and mortality itself, as well as winning the lottery!  So I think we have just two sources of “randomness”: the epistemic form, which is just partial observation through sampling subpopulations and observational uncertainty amid the vagaries of the ontic form; and quantum uncertainty itself as the ontic form. Notice also that this means butterfly-effect systems are really using quantum uncertainty to generate their ergodicity, so tornadoes are inflated quantum systems that might also be "conscious" if subjectivity also occurs at the quantum level through wave function sensitivity throughout space-time and wave function collapse.

 

To assume this is all random is an extremely dangerous assumption if we don't actually know. Lots of classical processes can appear quasi-random and are used as random number generators. Quantum measurement also has features of ergodicity that make us use the notion of randomness normalised by the wave function amplitude to explain the probability interpretation, but that’s a gross simplification. Schrödinger's cat is either very much alive or stone dead when we view it. It’s not an alive/dead superposition, nor is it in a random state. Entanglement gives us a hint that more is going on, because the detector statistics at either end appear random, but when we do Bell-theorem sampling we find they are powerfully correlated – even ‘deterministically’ complementary – when we sample at arbitrary relative orientations. Clearly uncertainty can and does handle multi-quantum entanglement, which is called decoherence, and it probably pervades the entire universe and its compound wave function.
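The Bell-sampling point can be sketched quantitatively (a textbook result, coded here as an illustration, not from the text): for the singlet state each detector alone sees 50/50 outcomes, yet the joint correlations $E(a,b) = -\cos(a-b)$ violate the classical CHSH bound $|S| \le 2$ at the standard angle choices:

```python
import numpy as np

# Quantum prediction for singlet-state pair correlations: each detector's
# own statistics are 50/50 "random", but the joint correlation depends
# only on the relative orientation of the two analysers.
def E(a, b):
    return -np.cos(a - b)

a1, a2 = 0.0, np.pi / 2            # Alice's two analyser angles
b1, b2 = np.pi / 4, 3 * np.pi / 4  # Bob's two analyser angles

# CHSH combination: any local hidden-variable model obeys |S| <= 2
S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
print(round(abs(S), 3))  # → 2.828 = 2√2, the Tsirelson bound
```

The individual "randomness" and the strong pairwise correlation coexist, which is exactly the hint the paragraph above draws on.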

 

Hidden variable theories assert that there is a process underlying quantum uncertainty, which is by default assumed to be "random", but the onus of scientific proof lies as much with establishing such a source of "pure" randomness in the universe as it does with finding a succinct hidden variable theory, transcending those like the pilot wave theory, in a full explanation. The transcausal aspects of transactional quantum collapse may make such a naked theory impossible to establish, meaning that both the hidden TOE and assumed randomness become undecidable propositions which intuition can penetrate empirically, but logical proof cannot.

 

So then we come to karma. It’s not moral, but the ultimate manifestation of quantum uncertainty in why things just happen to us.  Until it is proved otherwise, I take the position as an 'avatar' that the living present is a unique karmic instance and we are treading on thin ice every step of the way. So the answer about enlightenment is not to vacate your karma, but to look very carefully long and hard into it, because the bell tolls for us.

 

We can’t SEE Schrödinger Cats, but we can FEEL them!

 

Fig 102c: The BEC juggling and Sapphire jiggling experimental results.

 

Here is a possible denouement to the cat paradox … this is a discovery process in the completion.

 

A BEC is an unlimited number of bosons (or integer-spin atoms) in the same wave function, like a laser, and as we know, giving lectures by laser pointer can last as long as we push the button. But we can prepare a system in a superposition of states, e.g. a quantum dot or BEC, and hold it as long as we want in this entangled state until we choose to measure it in some way through a particle absorption. So what IS an experiment using temporal separations falling within Heisenberg uncertainty limits? We’ll see in a paragraph or two as the cat collapses and the penny drops!

 

Kovachy et al. (2015) throw a Bose Einstein condensate up like a juggling ball so two cusps end up in a spatially separated superimposed state:

 

We achieve long free-fall times by launching a Bose–Einstein condensed cloud of ~10⁵ ultracold ⁸⁷Rb atoms into a 10 m atomic fountain using a chirped optical lattice. After the lattice launch, we use a sequence of optical pulses to apply a beam splitter that places each atom into a superposition of two wave packets with different momenta.

 

The sapphire experiment (Bild et al. 2023) poses yet another situation, where we have an acoustically oscillated atomic lattice that can end up superimposed between two oscillatory states with 10¹⁷ atoms involved.

 

Schrödinger's original thought experiment was designed not just so that two macroscopic states were superimposed but, diabolically, so that the two states were biologically impossible, since it is “unlawful" for the cat to exist in a live-cat / dead-cat superimposed state. Contrast that with the sapphire, where the two states are two very slightly different arrangements of a molecular lattice, where there is no more inconsistency than in an ordinary interference experiment.

 

Now in a cat paradox experiment, we simply have a Geiger counter and a weak source emitting an average of 1 particle per second, and we leave the cat in the box for 0.5 seconds and open it. But the cat is also a conscious organism, so what has actually happened is that the cat starts to smell cyanide, and at that point its fate is sealed. The cat made the first conscious observation, by detecting HCN molecules through smell, and it has nothing to do with the experimenter opening the box.

 

John Kinemann suggested that a form of autopoiesis might prevent an organism becoming superimposed:

 

But it says, "That means a true Schrödinger’s cat is still far from being realised."  Meaning it has not been demonstrated for a living organism.  Nor do I think it will be because life preserves quantum superposition for decision making, but also closes the causal loop to preserve life and identity. That decision has already been made by the organism. If every aspect of the organism were a real time choice life could not be sustained. So an actual cat is much more complex than any of these lab experiments that demonstrate only one necessary principle.

 

I replied that free will has to collapse the wave function because it’s the only way to affect the universe without causal conflict with brain function. It doesn’t need further causal loops except as an indirect effect of this constraint.

 

Now, on reflection, I can effectively take John's argument and turn it inside out. We are witnessing superpositions of states all the time when our brains reach an unstable butterfly-effect-sensitive dynamical state through just the uncertainty window CN is pointing to. This is what we call making an uncertain intuitive decision, where the unconsciously competing factors interfere. Pair-splitting EPR experiments are a distraction, because they are designed to demonstrate entanglement Bell violations, but superposition is manifest in all uncertain situations. So when we have an “Aha!” moment, or when we simply make an arbitrary, idiosyncratic or creative choice, we are collapsing a superposition we have actually perceived internally through our very sense of mounting “uncertainty”.

 

It is this transition that we are experiencing all the time, each of which IS a Schrödinger cat before during and after collapse. We can't see these Schrödinger cats because they are hiding in "plain sight" in our sense of decision making, just as I came to this conclusion, formulating to myself what the hell is going on with these goddamn cats and why we can't see them!!! So the answer is that we can't SEE a Schrödinger cat but we FEEL them all the time and that's what intuition IS! So the real world is not as classical as it appears, but as quantum uncertain as our inner feelings indicate.

  

TOEs, Space-time, Timelessness and Conscious Agency

  

In the 20th century, two theoretical frameworks emerged for formulating the laws of physics. The first is Albert Einstein's general theory of relativity, which explains the force of gravity and the structure of spacetime at the macro level. The other is quantum mechanics, which uses wave–particle complementarity, a Hamiltonian wave equation and probability principles to describe physical phenomena at the micro level.

 

Quantum field theory is the application of quantum mechanics to particles and forces, such as the electromagnetic field, which are extended in space and time and modelled as excitations in the fundamental fields. One computes the probabilities of various physical events through perturbative quantum field theory, using Feynman diagrams (fig 71e) depicting the paths of point-like particles and their interactions.

 

Einstein's general theory of relativity treats time as a dimension on par with the three spatial dimensions; in general relativity, space and time are not modelled as separate entities but are instead unified to four-dimensional spacetime. In this framework, the phenomenon of gravity is viewed as a consequence of the geometry of spacetime.

 

Canonical quantum gravity is the attempt to quantise the canonical formulation of general relativity (or canonical gravity) as a Hamiltonian formulation of Einstein's general theory of relativity. All canonical theories of general relativity have to deal with the problem of time. In quantum gravity, the problem of time is a conceptual conflict between general relativity and quantum mechanics. In canonical general relativity, time is just another coordinate as a result of general covariance. In quantum field theories, especially in the Hamiltonian formulation, the formulation is split between three dimensions of space, and one dimension of time. Roughly speaking, the problem of time is that there is none in general relativity. This is because in general relativity the Hamiltonian is a constraint that must vanish. However, in any canonical theory, the Hamiltonian generates time translations. Therefore, we arrive at the conclusion that "nothing moves" ("there is no time") in general relativity. Since "there is no time", the usual interpretation of quantum mechanics measurements at given moments of time breaks down. This problem of time is the broad basis for all interpretational problems of the formalism.
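The constraint statement above can be written compactly. In ordinary quantum mechanics the Hamiltonian generates time translations, while in canonical quantum gravity the Wheeler–DeWitt equation turns it into a constraint annihilating the wave functional of 3-geometries $h_{ij}$, leaving no external time parameter (a standard formulation, sketched here for illustration):

```latex
\underbrace{i\hbar\,\frac{\partial}{\partial t}\Psi = \hat{H}\,\Psi}_{\text{ordinary QM: } \hat{H} \text{ generates time translations}}
\qquad\longrightarrow\qquad
\underbrace{\hat{H}\,\Psi[h_{ij}] = 0}_{\text{canonical GR: the Hamiltonian constraint}}
```

The right-hand equation contains no $t$ at all, which is the precise sense in which "nothing moves" in canonical general relativity.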

 

Fig 103: Feynman diagram vertices become 2D string surfaces.

 

String theory is the idea that the point-like particles of quantum field theory can also be modelled as one-dimensional strings. The interaction of strings is defined by generalising the perturbation theory used in ordinary quantum field theory. At the level of Feynman diagrams, this means replacing the one-dimensional diagram representing the path of a point particle by a 2D-surface. Unlike in quantum field theory, string theory does not have a full non-perturbative definition, so many of the theoretical questions remain out of reach.

 

The original version of string theory was bosonic string theory, but this version described only bosons, a class of particles that transmit forces between the matter particles, or fermions. Bosonic string theory was eventually superseded by superstring theories which describe both bosons and fermions, using supersymmetry in which each boson has a counterpart fermion, and vice versa, but a symmetry-broken asymmetric complementarity may be required. A heterotic string is a closed string (or loop) which is a hybrid ('heterotic') of a superstring and a bosonic string. String theories require extra dimensions of spacetime for their mathematical consistency. In bosonic string theory, spacetime is 26-dimensional, while in superstring theory it is 10-dimensional, and in M-theory it's 11-dimensional. In compactification, some of the extra dimensions are assumed to "close up" on themselves to form circles. In the limit where these curled up dimensions become very small, one obtains a theory in which spacetime has effectively a lower number of dimensions and the compactified dimensions represent internal states such as colour, flavour and charge.

 

A brane generalises the notion of a point particle to higher dimensions. For instance, a point particle is a brane of dimension zero, while a string is a brane of dimension one. In string theory, D-branes are an important class of branes that arise when one considers open strings. As an open string propagates through spacetime, its endpoints are required to lie on a D-brane. The study of D-branes in string theory has led to important results such as the AdS/CFT correspondence (fig 106), which has shed light on many problems in quantum field theory.

 

Fig 104: Two papers (arXiv:1806.08362, 1806.09718) suggest the overwhelming majority of multiverses in this landscape are assigned to the swampland of unviable universes, where dark energy is unstable, also reviving the popularity of time-varying dark energy models such as quintessence. The Calabi-Yau manifold illustrated is just one compactification, which shows a local 2D cross-section of the real 6D manifold known in string theory as the Calabi-Yau quintic.

 

One popular way of deriving realistic physics from string theory is to start with the heterotic theory in ten dimensions and assume that the six extra dimensions of spacetime are shaped like a six-dimensional Calabi–Yau manifold. Such compactifications offer many ways of extracting realistic physics from string theory. Each of these different shapes corresponds to a different possible universe, or "vacuum state", with a different collection of particles and forces. String theory thus has an enormous number of vacuum states, typically estimated to be around 10⁵⁰⁰, sufficiently diverse to accommodate almost any phenomenon that might be observed at low energies.

 

Laura Mersini-Houghton (Holman, Mersini-Houghton & Takahashi 2008, Hooper 2022) notes how she discovered her multiverse theory that resolves this problem:

 

I began thinking about all this in the early 2000s, around the time that string theory was the leading candidate for a “theory of everything” that unifies gravity with the other three quantum forces to explain our universe. String theory is the idea that nature at a fundamental level is 11 dimensional and particles are actually just the bit we can see of tiny loops of vibrating strings. With string theory, after curling up the extra spatial dimensions to make them sufficiently small to be invisible, you end up with a whole landscape of possible initial energy states, or potential big bang energies, that could start a whole family of different universes. At the time, string theorists thought this was really bad because they were looking to end up with only one universe – one that looked like ours – described by one theory, and they were ending up with a nearly infinite number of universes. But to me it was great news because I needed a fundamental theory to provide that pool of energies that would allow me to ask the question, “why did I start with this one rather than something else?”.

 

I had realised something that seems obvious in hindsight. We know for sure that our universe was very small in its first moments of existence. Therefore, it obeys the laws of quantum physics. What dawned on me specifically was that, based on the wave-particle duality of quantum mechanics, I could think of the universe as a wave function instead of as an object. The wave function is the mathematical entity that encodes quantum probabilities. But you can imagine it as a tree made up of many branches, each of which can produce a universe, and it spreads through the energy valleys of the string theory landscape, from where it takes its big bang energy.

 

You get these branches, these many worlds, but you need to decouple them from each other – you need to break that quantum entanglement. Think about when we separate gold from ore. We put the ore mixture into a bath of a compound called borax, and since borax interacts differently with different minerals, they start separating from each other. In my hypothesis, the string theory landscape was our borax – it broke the entanglement and separated out the many worlds. Somehow, early on, our universe went through a quantum-to-classical transition. It became a classical object where each event is determined with certainty. This could not have been the case unless the branches of the wave function of the universe completely decoupled. All the branches decouple as they are going through cosmic inflation. This is the phase, shortly after the big bang, when the universe went through a period of exponential expansion in size. My proposal was that, if this decoupling did happen, we would be able to see the remnants of it in the cosmic microwave background [or CMB], the radiation left over from those first moments of our infant universe. The idea was that, as the branches decoupled, traces of the entanglement would have been left behind.

 

I made a series of predictions with Richard Holman and Tomo Takahashi in 2005 and 2006. We said we would be able to see signatures of this early entanglement. Our present universe is just a rescaled version of its infant self, with all its “birthmarks” still there. If you think of all these quantum universes as tiny quantum particles, they were all interacting with each other – gravitationally they were pulling on each other, and that left scars in our sky. One prediction was the existence of a giant void or cold spot in the CMB. And such a void [about 900 million light years wide] was found in the observations of the Wilkinson Microwave Anisotropy Probe, a space-based observatory. It was confirmed by the Planck satellite, which also observed the CMB. We were the first to show how you can actually test the multiverse and that you don’t need to go beyond the universe’s observable horizon – you can just see it in our sky.

 

Fig 105: Loop quantum gravity is an alternative to superstring theory. Right: Braided space-time gives an underlying basis for unifying the fundamental particles.

 

Loop quantum gravity (LQG) aims to merge quantum mechanics and general relativity, incorporating matter of the Standard Model into the framework established for the pure quantum gravity case. It is an attempt to develop a quantum theory of gravity based directly on Einstein's geometric formulation rather than the treatment of gravity as a force. The quantum states in the theory do not live inside the space-time. Rather they themselves define spacetime. As a theory, LQG postulates that the structure of space and time is composed of finite loops woven into an extremely fine fabric called spin networks. The evolution of a spin network, or spin foam, has a scale above the order of a Planck length, approximately 10⁻³⁵ metres, and smaller scales are meaningless. Consequently, not just matter, but space-time itself, adopts an atomic structure. A spin network represents a "quantum state" of the gravitational field on a 3-dimensional hyper-surface. A spin foam is a topological structure made out of two-dimensional faces that represent one of the configurations that must be summed to obtain a Feynman description of quantum gravity.

 

When the spin network is tied in a braid, it forms a representation of a particle, which can have electric charge and handedness. Some of the different braids match known particles, where a complete twist corresponds to +1/3 or -1/3 unit of electric charge depending on the direction of the twist. Heavier particles are conceived as more complex braids in space-time. The configuration can be stabilised against space-time quantum fluctuations by considering each quantum of space as a bit of quantum information, resulting in a kind of quantum computation. There are fundamental issues reconciling LQG with special relativity.

 

Each of these theories leads to serious questions and contradictions about the nature of time itself. String theory finesses space and time into a higher-dimensional manifold which is then compactified; canonical quantum gravity demonstrates that there is no time in general relativity, just stillness; loop quantum gravity fragments space-time into a spin foam. Even special relativity, via the Lorentz transformations, gives both advanced and retarded solutions, leading to the hand-shaking between past and future of the transactional interpretation (fig 72).

 

We thus have to come to terms with the way our conscious experience interacts with the physical universe and whether agency can still have meaning in a timeless cosmos.

 

To break time down philosophically into more fundamental constituents as a single series, we can think of three stages of abstraction. An A-series ordering orders times in terms of whether they are objectively past, present or future; it is a dynamic ordering, constantly updating with the passage of time. A B-series ordering orders times or events in terms of the relations earlier-than, later-than and simultaneous-with, and so is unchanging. A C-series ordering, by contrast, is temporally ordered but has no temporal direction within the series, i.e. it is a betweenness ordering: we say only that b is between a and c.

 

Many complex interactions, particularly integral transforms involving a convolution integral of multiple components, are likely to induce pseudo-random statistics. Indeed two entangled particles are able to display correlations violating Bell's inequality while the statistics of each appears random to an observer measuring only one. An example of a many-to-many correspondence suggested by Uziel Awret (2022), as shown in fig 106, is the holographic principle in M-theory, or AdS/CFT correspondence (Maldacena 1998), which gives rise to a duality between a quantum field theory on the "boundary" surface enclosing a region of spacetime and the spacetime geometry of the interior "bulk" anti-de Sitter space, corresponding to a general relativity theory with a negative cosmological constant. When the CFT has intractably strong interactions the dual AdS has weaker, more tractable solutions, and vice versa.

 


Fig 106: (a) An illustration of the holographic principle in which physics on the 3D interior of a region, involving gravitational forces represented as strings, is determined by a 2D holographic representation on the boundary in terms of the QFT physics of particle interactions. (b) Einstein's field equations can be represented on anti-de Sitter space, a space similar to hyperbolic geometry, where there is an infinite distance from any point to the boundary. Maldacena (1998) discovered a 1-1 correspondence between the gravitational tensor geometry in this space and a conformal quantum field theory, like standard particle field theories, on the boundary. (c) Entanglement plays a pivotal role because when the entanglement between two regions on the boundary is reduced to zero, the bulk space pinches off and separates into two regions. (d) In an application to cosmology, entanglement on the horizon of black holes may occur if and only if a wormhole in space-time connects their interiors. Einstein and Rosen addressed both wormholes and the pair-splitting EPR experiment. Juan Maldacena sent colleague Leonard Susskind the cryptic message ER=EPR, outlining the root idea that entanglement and wormholes were different views of the same phenomenon (Maldacena and Susskind 2013). (e) Time may itself be an emergent property of quantum entanglement (Moreva et al. 2013). An external observer (1) sees a fixed correlated state, while an internal observer using one particle of a correlated pair as a clock (2) sees the quantum state evolving through two time measurements using polarization-rotating quartz plates and two beam splitters PBS1 and PBS2.

 

This is an abstraction of the duality we see in optical holograms, between the interference fringes on the 2-D hologram and the reconstitutable 3-D image it was derived from by additive coherent light-ray tracing, encoded by the wave phase, forming a tantalising bijective integral transform. Attention has been drawn to this duality as an oracle to discuss the assumed “binding problem” of how brain processes generate the coherence of subjective experience (Elliot 2019), the panpsychism structural mismatch problem (Chalmers 2016) – why macrophysical structure, e.g. in the brain, seems entirely different from the macrophenomenal structure we experience – and Chalmers’ meta-problem of consciousness. However, like any strict two-component dual-aspect theory, Awret’s is implicitly epiphenomenalistic and passes off the mismatch from the quantum field theory of the standard model to a dual higher-dimensional abstract gravitational theory, no more like phenomenal subjective experience than the quantum physics we know. In effect it is coopting a purely physical theory invoking an abstract duality to solve intractable string theory problems, without explaining why such a dual displays any manifest subjective existence.

 

In the author’s view the underlying complementarity supporting conscious subjectivity in the physical universe is not such a bijective duality, as it provides complementary roles for subjective consciousness to seamlessly resolve uncertainties in the unstable dynamics of edge-of-chaos processes in brain dynamics. These complementary inputs to the ongoing physical state through volition interleaving with brain states in the neural correlate of consciousness are not possible in a bijective duality.

 

Other research also brings this correspondence into focus. When an ice cube melts and attains equilibrium with the liquid, physicists usually say the evolution of the system has ended. But it hasn’t — there is life after heat death. Weird and wonderful things continue to happen at the quantum level. A quantum circuit acts on its basic units of information, qubits, using a standardised repertoire of gates. Some gates perform familiar operations such as addition, while others are quintessentially quantum. A “controlled NOT” gate, for example, can bind together two or more qubits into an indivisible whole, known as an entangled state. Any system composed of discrete units can be recast as a circuit, even a system that looks nothing like a computer. Leonard Susskind and co-workers (Brown & Susskind 2018, Bouland et al. 2019) applied this concept to the hot plasmas that, through the AdS/CFT duality, are equivalent to black holes. They suggested that, even after the plasma reaches a condition of thermal equilibrium, its quantum state does not stop evolving (Musser 2023). It becomes ever more complex.
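The gate formalism above can be made concrete with a minimal numpy sketch (an illustration, not from the source): applying a Hadamard gate to one qubit and then a controlled-NOT produces the maximally entangled Bell state, a two-qubit whole that cannot be factored into independent single-qubit states.

```python
import numpy as np

# Single-qubit basis states |0> and |1>
zero = np.array([1.0, 0.0])
one = np.array([0.0, 1.0])

# Hadamard gate: puts a qubit into an equal superposition
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

# Controlled-NOT on two qubits: flips the target iff the control is |1>
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

# Start in |00>, put the control in superposition, then entangle with CNOT
state = np.kron(H @ zero, zero)   # (|00> + |10>) / sqrt(2), still a product state
bell = CNOT @ state               # (|00> + |11>) / sqrt(2) -- an entangled state

print(bell)  # ~ [0.707, 0, 0, 0.707]
```

The resulting amplitudes cannot be written as an outer product of two single-qubit states, which is the "indivisible whole" the text refers to.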

 

Ovchinnikov (2012) has also shown that dynamical systems in path integral formulation can be represented as topological field theories. As a result, all (equilibrium) dynamical models divide into three major categories: Markovian models with unbroken Q-symmetry; chaotic models with Q-symmetry spontaneously broken on the mean-field level by, e.g., fractal invariant sets (such as strange attractors); and intermittent or self-organized critical (SOC) models with Q-symmetry dynamically broken by the condensation of instanton–anti-instanton configurations (earthquakes, avalanches etc.). Hal Cox has speculated this could help explain field theories of brain dynamics.
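The chaotic category singled out here can at least be illustrated concretely, though the sketch below is not part of Ovchinnikov's formalism: a system on a strange attractor has a positive Lyapunov exponent, i.e. nearby trajectories diverge exponentially. For the logistic map at r = 4 the exponent is known analytically to be ln 2.

```python
import math

def lyapunov_logistic(r=4.0, x0=0.2, n_transient=100, n_iter=10000):
    """Estimate the Lyapunov exponent of the map x -> r x (1 - x)."""
    x = x0
    # Discard a transient so the orbit settles onto the attractor
    for _ in range(n_transient):
        x = r * x * (1 - x)
    total = 0.0
    for _ in range(n_iter):
        # log of the local stretching rate |f'(x)| = |r (1 - 2x)|
        total += math.log(abs(r * (1 - 2 * x)))
        x = r * x * (1 - x)
    return total / n_iter

lam = lyapunov_logistic()
print(f"estimated Lyapunov exponent: {lam:.3f} (exact value ln 2 = {math.log(2):.3f})")
```

A positive value of the averaged log-stretching rate is the numerical signature of the chaotic regime the classification refers to.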

 


Fig 106b: (a) A photon may undergo mixing to a pseudoscalar particle – an axion – in an external magnetic field. (b) The magnetic field is a pseudovector field because, when one axis is reflected, as shown, reversing parity, the magnetic field is not reflected but reversed, because the currents are reversed. The position of the wire and its current are vectors, but the magnetic field B is a pseudovector, as is any vector cross product p = a × b. Any scalar product between a pseudovector and an ordinary vector is a pseudoscalar. A pseudoscalar particle corresponds to a scalar field which is likewise inverted under a change of parity. (d) Many extensions of the Standard Model predict additional massive bosons, beyond the W, Z and Higgs bosons of the Standard Model. These arise naturally from the CP (charge-parity) violations seen, for example, in the weak force. They might be scalar (even under both P and CP transformations), pseudoscalar (odd under both P and CP), or vector particles. The prototype for a pseudoscalar boson is the axion, which is ultralight, with mass m ≪ 1 eV.
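The parity behaviour in the caption is easy to verify numerically (an illustrative sketch, not from the source): under parity inversion every ordinary (polar) vector flips sign, but the cross product of two flipped vectors does not, which is what makes B a pseudovector, and contracting a pseudovector with a polar vector yields a sign-flipping pseudoscalar.

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])   # an ordinary (polar) vector, e.g. a current element
b = np.array([-2.0, 0.5, 1.0])  # another polar vector
c = np.array([0.3, -1.0, 2.0])

# Parity inversion P sends every polar vector v to -v
Pa, Pb, Pc = -a, -b, -c

# The cross product is a pseudovector: unchanged under P, since (-a) x (-b) = a x b
assert np.allclose(np.cross(Pa, Pb), np.cross(a, b))

# The scalar product of a polar vector with a pseudovector is a pseudoscalar:
# the triple product c . (a x b) changes sign under P
assert np.isclose(np.dot(Pc, np.cross(Pa, Pb)), -np.dot(c, np.cross(a, b)))
print("parity checks passed")
```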

 

Dark matter, evidenced only in the clumping and fast rotation rates of galaxies, and dark energy, associated with the increasing rate of expansion of the universe, are likewise poorly understood. There are four basic dark matter candidates: axions; MACHOs (massive compact halo objects – non-luminous small stars, black holes etc.); WIMPs (weakly interacting massive particles, which might emerge from extensions of the standard model); and complex dark matter experiencing strong self-interactions while interacting with normal matter only through gravity. Axions have become particularly interesting candidates as other types are eliminated by detection limits (Chadha-Day, Ellis & Marsh 2022, Semertzidis & Youn 2022, Blinov et al. 2022). Interaction with gravitational forces is deemed essential to explain the some 10¹²⁰-fold discrepancy between the cosmological constant and the standard model vacuum energy contribution.
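The famous discrepancy can be reproduced as a back-of-envelope estimate: naively cutting off vacuum modes at the Planck scale gives an energy density of roughly one Planck energy per Planck volume, which is then compared with the observed dark-energy density. The sketch below is orders-of-magnitude only (not a derivation from the source); depending on cutoff conventions it lands within a few powers of ten of the commonly quoted 10¹²⁰.

```python
import math

hbar = 1.054571817e-34   # reduced Planck constant, J s
c = 2.99792458e8         # speed of light, m/s
G = 6.67430e-11          # gravitational constant, m^3 kg^-1 s^-2

# Planck energy and Planck length
E_planck = math.sqrt(hbar * c**5 / G)   # ~2.0e9 J
l_planck = math.sqrt(hbar * G / c**3)   # ~1.6e-35 m

# Naive vacuum energy density: one Planck energy per Planck volume
rho_vacuum = E_planck / l_planck**3     # ~4.6e113 J/m^3

# Observed dark-energy density (approximate value)
rho_lambda = 6e-10                      # J/m^3

print(f"discrepancy ~ 10^{math.log10(rho_vacuum / rho_lambda):.0f}")
```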

 

In philosophy, to say that a statement is truth-apt is to say that it could be uttered in some context and would then express a true or false proposition. Truth-apt sentences are capable of being true or false, unlike questions or commands. Whether paradoxical sentences, prescriptions (especially moral claims), or attitudes are truth-apt is debated. Temporal error theory is analogous to moral error theories, which deny the objective reality of moral facts and, on that basis, deny that any moral claims are true. Temporal error theory, then, is the view that temporal thought and discourse is truth-apt, and is false (or at least, is not true).

 

Can Cosmic Consciousness, or God Consciousness, Generate the Entire Physical Universe?

 

By contrast with abstract TOEs, or theories of everything, from superstrings to quantum loop gravity, the Vedantic tradition asserts that consciousness itself, in a transcendent cosmic form, is at the very source, generating the physical universe. Similarly, theism asserts that God created the universe. Is this a realisable perspective, or does it introduce even more inconsistency and counterfactuality into the cosmic conundrum, inconsistent with Occam’s razor?

 

Fig 106c: Both God and Brahman, the Vedantic cosmic mind as ultimate reality, carry aspects of the human persona and psyche in their tacit assumptions, meaning they are projections of human organismic experience onto the cosmos, not absolute cosmic realities.

 

An example is Aurobindo’s viewpoint, where he accepts the scientific view of biological evolution but claims a complementary process of conscious involution creating the physical substance of the universe:

 

According to Sri Aurobindo, in conformity with the oldest Indian scriptures, matter was created by the ultimate or supreme consciousness seeking manifestation. In order to manifest, supreme or absolute consciousness, starting from the full fluidity of the spirit, created graded planes of being, till it reached the state of manifesting the solid density of matter. Sri Aurobindo calls this the process of Involution. The involution of consciousness is the process in which the Supreme or Absolute conceals itself more and more by creating planes of consciousness of increased density, in order to create the density needed for physical manifestation. In each of the planes that were created during the involution, (called typal planes or graded worlds) the absolute or divine consciousness veiled itself increasingly till it was able to produce and manifest the various forms of solid matter.

 

We have arrived at this question by facing up to the hard problem of consciousness: that no objective arrangement of material waves or particles, no matter how complex or integrated, can of itself generate subjective experience, which accounts for 100% of our knowledge of reality, nor explain how we have efficacy of conscious volition over the universe in our survival-oriented and creative behaviour. But can we reverse the explanatory gap and explain the universe as some woven or overlaid form of a consciousness which has no underlying physical process at all? Just as particulate interaction, no matter how complex, cannot give rise to the holistic nature of subjective experience, how can consciousness, which we conceive of as the “observer” of holistic experience, contort this holistic realm into the particulate universe we perceive and know from detailed experiments to be governed by physical laws of nature, as specific and critical to life as the standard model, without introducing even more intractable paradoxes? Neither is positing a neutral monistic entity, with neither subjective nor objective manifestation, any form of solution to these profound differences, which have to be interactive for subjectively aware organisms to have efficacy of physical volition.

 

For all the seemingly impenetrable difficulties of the explanatory gap, we at least have a good idea of how conscious life emerges in the physical universe, long after the cosmic origin, in a process that requires generation of the chemical elements of life in supernovae and then biogenesis and the evolution of life, reentry of the archaea and bacteria in the eucaryote endosymbiosis, generating full subjective consciousness and then the rise of multicellular animals with biological brains. The very notion of consciousness arises from the alert waking state of a human or animal organism, in notions from "conscious of" our surroundings, consciousness, as opposed to sleep and coma, and self-consciousness as self-aware organismic agency. Many aspects of our subjective experience, including colour vision and spectral sound are derived from brain processing via biological receptor organs for physical modes, rather than the universe itself and hence are evolutionarily adapted and organismic. Even our ideas of self and other are processed in regions like the precuneus and temporo-parietal junction, so that in meditation and psychedelic states, we can experience union with ultimate reality and hold an ongoing notion of God consciousness. But we have no direct experience of the consciousness in others except by their vivacious behaviour and no validly meaningful knowledge of consciousness, as an absolute actuality generating the entire physical universe, except perhaps through our first person visionary experience.

 

The peak experience on psychedelic mushrooms that was the source of this cosmology, was astounding in the way it conveyed disembodied cosmic consciousness in the persona that the Vedantic tradition likens to Brahman, in the merging of individual atman or conscious self with universal cosmic consciousness, in a near-death type experience. But at the same time, it was warning me that Brahman was as vulnerable as I and the biosphere, because without the survival of life, this state of cosmic reunion, outside our biological incarnations, was as vulnerable as life itself, without surviving climax living beings to experience this transcendent reality.

 

This brings us to a new and different world view, or Weltanschauung, via a consciousness filter theory, in which moksha is a living biological state, asymptotic to a form of cosmic consciousness, when the doors of perception are opened wide, so that the constraining filters of perception cease to be confined to the physical awareness of the world and descend deeper into the abyss of what unconditioned climax consciousness can be and become in the unfolding of life. But it doesn't assert primal transcendence of this state of reunion over the physical universe, from which it has arisen as a climax phenomenon. Both the notions of theistic deity in God, founded in intentional conscious agency, and of Vedantic doctrine in Brahman, as the ultimate cosmic reality founded in consciousness itself, are thus projections of human organismic agency and conscious existence onto the entire universe and its very origin.

 

While God is seen as transcendent, in being both omniscient and omnipotent, the very notion carries incontrovertible archetypes of anthropomorphic agency. God has all the attributes of human emotions, from compassion, to jealousy, and genocidal anger and violence, as well as all the notions of intentionality we associate with human manufacture, in which living beings are created de novo by God’s shaping in clay, or breathing the breath of life we possess into life as a whole, “And the LORD God formed man of the dust of the ground, and breathed into his nostrils the breath of life; and man became a living soul” or in rational thought as the doctrinal logos “In the beginning was the Word, and the Word was with God, and the Word was God”, all imbued with the reinforcement of moral cosmologies characteristic of urban societies in the pursuit of ultimate order, construed as in obedience to God’s divine will. This leads in turn to false notions of creationism, where life is construed to be merely a clockwork theistic technology in which "creatures" are created de novo by God, in which Homo sapiens rules over life as a whole:

 

And God said, Let us make man in our image, after our likeness: and let them have dominion

over the fish of the sea, and over the fowl of the air, and over the cattle,

and over all the earth, and over every creeping thing that creepeth upon the earth.

 

But we have also discovered that the conscious complexity and sentience of life arises, not from a rule of ultimate order, but through critical phase transitions at the edge of chaos, giving rise to the evolution of increasing complexity and diversity and the very dynamical brain states characterising organismic consciousness.

 

When we turn to the Vedantic tradition, we can see the same process occurring, in which the rishis, in a state of deep samadhi, conceived that the entire universe was made out of aggregations of divine consciousness, making the material universe through the demi-god Brahma, emerging from the dream of Vishnu out of his navel, overlooked by faithful Lakshmi. This was then made more abstract in Upanishadic notions of Brahman as ultimate reality in the abstract, standing outside life, as a cosmological absolute form of consciousness generating the entire universe. This consciousness is also set apart from experience, as the illusory world of subjective form, or maya, particularly when confined to our organismic perception of the physical world and our egotistical survival in the mortal coil of the round of birth and death. Is it reasonable to separate pure “transcendent” consciousness from all immanent experience, when consciousness is the very ability to have subjective experience and even the most basic physical experiences, let alone dreams, memories and reflections, are essential to the continuity of life as a whole, and are thus NOT illusion?

 

This is where Symbiotic Existential Cosmology remains counterposed to the original Vedantic tradition, as a remedy just as it is a remedy to theism by returning cosmic agency back to us all as subjects, because it sees cosmic consciousness just as it manifests biologically, as a nascent emergent property of the eucaryote endosymbiosis becoming organismic and human consciousness, realised for example in deep meditation, and the mystical visionary state, that is a climax manifestation of the living universe, whose very existence and ability to manifest is dependent on the existence of life as a whole.

 

Symbiotic Existential Cosmology thus postulates only primal subjectivity as a complementary aspect to quantum reality at the cosmic origin, in a process that remains entirely consistent with physical cosmology and the likely inflationary scenario, in which both the universe and subjectivity emerge as a quantum fluctuation, rather than the Vedantic view of a transcendent super-cosmic consciousness that generates the entire universe, which, like all theistic cosmologies, runs down into chaos because of defection against the presumed ultimate order of moral good.

 

Primal subjectivity is conceived of as optimally compliant with the environmental boundary conditions of the filters the biological brain imposes, while giving subjective consciousness a privileged active agency, during quantum uncertain phases in neurodynamic phase transitions central to the survival of the organism and in turn our ability to experience ultimate reality as conscious agents. This transforms from a physicalist view that conscious experience is just a shadow manifestation, a mere internal model of reality generated by physical brain processes, into a filter theory of conscious experience, as being essential to organismic life, as a holistic manifestation of subjective efficacy of volition over the physical universe, to aid and ensure survival of the sentient being, not as maya or illusion, but the biological focus that consciousness must be compliant with, for the organism to survive and reproduce and for life to flower as a whole.

 

We see in later Indian thought a complementarity between Shiva as the timeless cosmic conscious witness of Shakti as the dynamic force of physicality begetting both life and death in creation and destruction, then leading to universal maya or illusion, as the primal pair retreat from intimate coitus, leading to all of us sentient beings lost in the egotism of organismic consciousness, ending ultimately in the conflagration of the Kali Yuga, due to moral and spiritual decline.

 

By contrast, Symbiotic Existential Cosmology doesn’t seek transcendence over nature, in spiritual supremacy, but immanent symbiotic integration with life as a whole, at the edge of chaos, in the natural consummation of the conscious universe, in self-becoming that IS transcendence, realised in the unfolding of life.

 

The Reality of Time and the Notion of Agency

 

In "Out of Time", Baron et al. (2022) explore both the potentially non-existent status of time in current physical cosmology and the folk notions of time that human cultures consciously and socially invoke:

 

The idea that time does not exist is, for many, unthinkable: time must exist. Our goal is to make the absence of time thinkable. Time might not exist. This chapter lays the groundwork for our investigation. We begin by clarifying the central target of our investigation, the folk concept of time and then motivating the idea that it is this folk concept whose investigation matters.  … Our primary focus in this book is on the folk concept, or concepts, of time. … For now, we can think of the folk concept of time as something like the naïve view of time – the unreflective notion of time that individuals use in their everyday lives. … When we talk of the folk concept of time we don’t simply mean the way the folk think about, or conceive of time. We don’t simply mean what the folk think time is like. We mean something like what the folk think (almost certainly implicitly) it would take for there to be time in a world. … Why should we care about the folk concept of time? The short answer is that we should care about time in the folk sense, because it appears to be implicated in normative concepts and practices in which we are deeply invested; concepts and practices like moral and practical responsibility.

 

They note a number of versions of TOEs (theories of everything) attempting to unify gravity and the standard model of physics:

 

A number of approaches to quantum gravity (QG) have been proposed. The best-known approach to QG is string theory, according to which the fundamental components of reality are tiny one-dimensional strings that vibrate in up to eleven dimensions. The chief alternative to string theory is Loop Quantum Gravity (LQG), according to which reality is fundamentally a lattice-like structure, constituted by discrete ‘chunks’ that are ‘woven’ together. Another approach to QG is the Canonical Quantum Gravity program (CQG), which applies standard quantisation techniques from quantum field theory to the gravitational field, in order to produce a quantum account of gravity (roughly in the mould of the standard model of particle physics).  Other approaches include causal set theory, the asymptotic safety approach, the causal triangulation approach and the emergent gravity program.

 

They consider how our philosophical notion of agency can survive the sacrifice of time itself by using causality in a local sense to create the framework for reasoning agency.

 

In what follows we cannot hope to spell out a full-blown realist account of agency without time. That would be a massive undertaking. Instead, we will focus on showing how the realist can respond to the two arguments we mounted earlier: the conceptual and cognitive arguments.

 

Let us start with the conceptual argument. ... The concept of agency at issue is one that focuses on deliberation. As already discussed, we believe it is plausible that for one to be an agent, one must be able to impose a deliberative structure. We are also inclined to accept that in order to do that, one must believe that there is some supportive structure. We nonetheless think the conceptual argument fails. It’s not the case that there must be a C-series for there to be a supportive structure. Thus, one can reasonably believe that there is a supportive structure even if one accepts temporal error theory. There is thus no bar to the imposition of the kind of deliberative structure needed for agency.

 

Consider the idea that we need to take some things to be fixed, and unchangeable, and to be the things on the basis of which we deliberate. Consider also the idea that we take other things to be open, and malleable, and to be things about which we deliberate. Causal structure seems well suited to support both aspects of deliberative structure. There being a causal structure allows us to take as fixed, and unchangeable, those things that are causally prior to our location, and as open and malleable, only those things that are causally downstream from us. We cannot causally intervene on events that are causally prior to us, so they are excellent candidates to be held fixed and unchangeable.

 

By contrast, and at least in principle, we can causally intervene in events that are causally downstream from us, so these are also excellent candidates to be held open and malleable. Indeed, with respect to those things that are causally downstream from us we can distinguish between those that we take ourselves to be able to causally affect, and those we take ourselves not to be able to causally affect. Only those in the former category are those about which we can deliberate. Those events that are causally prior to us, in the causal ordering, or those that are causally downstream from us, but which we take ourselves to be unable to causally affect, are those things about which we cannot deliberate. So, the presence of causal structure provides a supportive structure for at least these aspects of deliberative structure.

 

This brings us to the cognitive argument. Recall the basic argument: some if not all of our agentive thoughts involve temporal thoughts. Those thoughts are therefore true only if the relevant temporal thoughts are true as well. If temporal error theory is true, then our temporal thoughts are false and so at least some agentive thoughts are false too. Our response to this argument is just to concede the conclusion: yes, some of our agentive thoughts are false. Be that as it may, the falsity of the relevant agentive thoughts does not undermine agency. The reason for this is that there are other nearby agentive thoughts that are true; thoughts that invoke only causal notions.
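Baron et al.'s causal substitute for temporal structure can be made concrete: given a directed causal graph, the events an agent must hold fixed are its causal ancestors, the candidates for deliberation are its causal descendants, and everything else is causally disconnected. A small illustrative sketch (the graph and its labels are invented for illustration, not taken from the source):

```python
def reachable(graph, start):
    """All nodes reachable from start by following directed edges."""
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

# A toy causal graph: edges point from cause to effect
causes = {
    "big bang": ["star forms"],
    "star forms": ["agent is born"],
    "agent is born": ["agent acts"],
    "agent acts": ["tree planted"],
    "tree planted": ["forest grows"],
    "distant supernova": [],           # causally disconnected from the agent
}

# Invert the edges to walk from effects back to causes
reverse = {}
for cause, effects in causes.items():
    for e in effects:
        reverse.setdefault(e, []).append(cause)

me = "agent acts"
fixed = reachable(reverse, me)      # causally prior: held fixed, unchangeable
open_ = reachable(causes, me)       # causally downstream: open to deliberation

print("fixed:", sorted(fixed))      # fixed: ['agent is born', 'big bang', 'star forms']
print("open:", sorted(open_))       # open: ['forest grows', 'tree planted']
```

The partition into fixed ancestors and open descendants is exactly the "supportive structure" the quoted argument claims can replace a temporal ordering.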

 

However, we have come to see that conscious experience has depths and heights that take it far beyond the confines of rational thought and mental cognition, including emotional states, transcendent mystical states and real world states of acute anticipation as a key feature of consciously surviving immediate existential threats. Baron et al. (2022) thus also make an attempt to discuss temporal non-cognitivism in this context:

 

Temporal non-cognitivism is the view according to which our temporal thought and talk does not involve truth-apt beliefs, but rather, involves some other kind of attitude. So, even if our folk concepts of time are not satisfied, it does not follow that such talk is false because a fortiori it is neither true nor false. … We find temporal non-cognitivism tantalising. We confess, though, that we are not sure how to spell out the view in a plausible manner. Given this, what we say about temporal non-cognitivism will be brief. Our aim is not to argue that there is no way to develop a version of temporal non-cognitivism. Rather, we will gesture towards what we take to be some powerful problems that the non-cognitivist must face if she is to develop such a view.

 

The central problem we see for temporal non-cognitivism is that it is difficult to see what our temporal thoughts might be, if not beliefs, and hence which mental states temporal talk might express, if they are not reporting beliefs. Indeed, we can only really see one way to develop this idea, and that is to take temporal thought and talk to express non-cognitive attitudes of some kind. Call such a view: temporal expressivism. The question then becomes which non-cognitive states the expressivist might identify with our temporal thoughts.

 

There are roughly two kinds of attitudes that one might appeal to: emotive attitudes such as regret, anticipation, nostalgia, fear, anxiety; or evaluative attitudes such as desire or preference. These are natural options to pursue, because they all display a ‘temporal’ asymmetry  and so bear some connection to time already. … Any plausible temporal non-cognitivism will need to provide felicity conditions for temporal thoughts such as ‘it rained yesterday’ and ‘it rained 5 minutes ago’. After all, even the non-cognitivist about temporal thought presumably doesn’t want to say that any temporal thought is as apt as any other. … Even if felicity conditions for some temporal attitudes can be provided along these lines, there is a deeper problem for temporal expressivism. The problem is a paucity of attitudes. Whichever attitudes one chooses—emotive or evaluative—there do not appear to be sufficiently many, or sufficiently fine- grained, non-cognitive attitudes to capture all of the different temporal thoughts we have.

 

In principle, one can have temporal thoughts that are extremely fine-grained, down to the scale of seconds, or nanoseconds. It is implausible, however, that we can place each such fine-grained temporal thought into a one-to-one correspondence with some similarly fine-grained attitude. The main problem is that we can’t differentiate degrees of anticipation to the same extent. Can one really have a greater degree of anticipation for an event in a second, as opposed to one in two seconds? What about nanoseconds? It seems doubtful. Attitudes are just not that finely structured.

 

This position states clear roles for both emotion and anticipation as non-cognitive modes of subjective conscious experience. However, the conclusion that anticipation exists only by degree, and is too coarse-grained to function, is inconsistent with the exceedingly fine-grained role of anticipation in existential threats to survival, which occur in the conscious moment without any assumption about the overall nature of time, as a global or cosmological entity, having any relevance to the immediate survival threat.

 

In this sense, the conscious moment, in both acute existential crisis and in deep mystical states IS timeless and consciousness is itself more generally an ongoing quantum of the present, which does not itself change but rather local conditions change as they pass in and out of our experience. This leads potentially to a deep correspondence between conscious timelessness and cosmological timelessness as complements of one another.

 

Other theories of agency treat it rather as a systems theoretic complementation to the physical universe.

 

Karen Barad (2007) in “Meeting the Universe Halfway: Quantum Physics and the Entanglement of Matter and Meaning”, describes agential realism as a theory in which the universe comprises phenomena which are "the ontological inseparability of intra-acting agencies". "Intra-action", coined by Barad expresses an important challenge to individualist metaphysics. Phenomena or objects do not precede their interaction, rather, 'objects' emerge through particular intra-actions. Thus, apparatuses, which produce phenomena, are not assemblages of humans and nonhumans (as in actor-network theory). Rather, they are the condition of possibility of 'humans' and 'non-humans', not merely as ideational concepts, but in their materiality.

 

Her publication brief expresses it thus:

 

Intra-activity is an inexhaustible dynamism that configures and reconfigures relations of space-time-matter. In explaining intra-activity, Barad reveals questions about how nature and culture interact and change over time to be fundamentally misguided. And she reframes understanding of the nature of scientific and political practices and their “interrelationship.” Thus she pays particular attention to the responsible practice of science, and she emphasizes changes in the understanding of political practices, critically reworking Judith Butler’s influential theory of performativity.

 

I am concerned that this is a philosophical/metaphysical argument which poses a complementarity between quantum physics as an interactive process and human social and political discourse, marginalising the individual subject as a systems cipher. What meaning is there to subjective consciousness as a complement to the objective universe – individual insight, enlightenment, moksha or the cosmic mind – if the complement is based on political discourse rather than empirical experience? On the other hand, this is precisely what Symbiotic Existential Cosmology does in invoking an ‘animistic’ process of mutual social affirmation to empirically establish that subjective conscious volition has efficacy over the physical universe, before investigating the role of the physical brain in this process.

 

A second work by Scott Kelso and David Engstrøm (2006) “The Complementary Nature” provides a complementary account of what they describe as “coordination dynamics” (Kelso 2013), within which complementarity is rooted.

 

The division of our world (natural and social) into distinct contraries or opposites has become almost universal practice in most fields of human endeavor and inquiry, including science. Undeniably, imposing such divisions on space and time, wave and particle, order and chaos, action and perception, or organism and environment have enabled significant progress in our scientific understanding of these separate domains. However, understanding based on separation or contraries is fundamentally limited, as is powerfully demonstrated by modern physics, biology and neuroscience. Coordination dynamics describes how coordinated patterns form and transform within and between parts of a given system. Core concepts, developed at length in the book, include self-organization, pattern dynamics, multifunctionality and functional equivalence, and information flow.

 

In the 1960s, general systems theory sought the goal of a common pattern and process, consonant with Ilya Prigogine’s non-equilibrium thermodynamics of living systems, Hermann Haken’s synergetics of self-organized coherence, Francisco Varela’s (1972) autopoiesis, and Stuart Kauffman’s (1986, 1993) autocatalysis. This means that coordination dynamics is attempting to treat physical, biological and even social systems on the same systems footing, invoking a transformative continuity between processes which are isolated from one another and those in which coordinated interaction is derived from more generalised laws. However, in doing so, coordination dynamics remains an objective description of reality that does not seek to explain the hard problem of the fundamental nature of subjective consciousness, just the interactive properties of neuronal self-organisation as functional information:

 

Neuronal connections in the brain are an effective medium that nature has provided through the mechanisms of self-organization and natural selection. But the important concept here, as these simple experiments on people reveal, is biologically relevant information. Such functional information can be conveyed by local connections and between distant areas, both by two-way interactions. In this capacity, local~global and integration~segregation are two of the key complementary pairs of coordination dynamics.

 

So how does coordination dynamics explain all these phenomena? And what are the implications of this different view of the brain for the complementary nature? The empirical data suggest that integration (pure coordination) and segregation (no coordination) in both brain and behavior may be viewed as polar, idealized extremes and that the key to understanding lies in the dynamical interplay of integration~segregation tendencies. As a candidate science of the complementary nature, coordination dynamics must minimally be able to explain both polarized complementary aspects and all that falls in between them.

 

The book cites a series of complements and a general relation:

 

Heterogeneity~Homogeneity, Oscillation~Rhythm and Coupling~Uncoupling, and summarises them in a general process equation, d(cv)/dt = f(cv, cp, F), where the rate of change of the coordination variable cv is a function f of three basic factors: the momentary value of the coordination variable cv itself; one or more control parameters cp; and noisy fluctuations F. Recall that coordination variables capture the functional nature of behavioral patterns. Such coordination variables can be quite abstract, in that they can characterize coordination among the same and different kinds of elements and processes (homogeneous~heterogeneous).
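The general process equation described above can be made concrete with a minimal numerical sketch (not from the book itself): the Haken–Kelso–Bunz (HKB) relative-phase model, the canonical instance of Kelso's coordination dynamics, supplies a particular f, with cv the relative phase φ between two coordinated components, cp the ratio b/a (which falls as movement rate rises), and F additive Gaussian noise:

```python
import math
import random

def hkb_step(phi, a, b, noise=0.0, dt=0.01):
    """One Euler step of the HKB equation: dphi/dt = -a*sin(phi) - 2b*sin(2*phi) + F."""
    dphi = -a * math.sin(phi) - 2 * b * math.sin(2 * phi) + noise * random.gauss(0, 1)
    return phi + dphi * dt

def relax(phi0, a, b, steps=20000):
    """Let the relative phase settle into an attractor (noise switched off)."""
    phi = phi0
    for _ in range(steps):
        phi = hkb_step(phi, a, b)
    return phi

# b/a = 1: both in-phase (phi = 0) and anti-phase (phi = pi) patterns are stable.
print(round(relax(0.5, a=1.0, b=1.0), 3))   # settles near 0
print(round(relax(2.8, a=1.0, b=1.0), 3))   # settles near pi
# b/a = 0.1 < 1/4: anti-phase loses stability and the pattern
# spontaneously switches to in-phase coordination.
print(round(relax(2.8, a=1.0, b=0.1), 3))   # settles near 0
```

Sweeping the control parameter b/a below the critical value 1/4 reproduces the abrupt anti-phase to in-phase transition observed in Kelso's finger-movement experiments, illustrating how one coordination variable and one control parameter capture pattern formation and change.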

 

Key here is the fact that these capture only the functional nature of behavioural patterns, not subjective consciousness. This approach is similar to that of autopoiesis (Maturana & Varela 1972) in that it provides a complementary objective description of physical and mental phenomena as complementary views of reality described by systems analysis without explaining the hard problem of subjective conscious experience.

 

Scott Kelso’s work has also become the basis of more wide-ranging explorations. Brian Josephson (2019), in “The Physics of Mind and Thought”, utilises biosemiotics to provide a possible explanation of the quantum measurement problem:

 

Regular physics is unsatisfactory in that it fails to take into consideration phenomena relating to mind and meaning, whereas on the other side of the cultural divide such constructs have been studied in detail. … On the other side of the cultural divide, there is the fundamental idea of a sign, the study of which originated in the semiotic concepts developed in the nineteenth century by Charles Sanders Peirce, more recently taken up by biologists, thereby founding the subject of biosemiotics (Hoffmeyer 2008 a, b). Semiotics emphasises the role of interpretation, a process connecting signs with corresponding objects, or more generally mediation, a process involving situations where a third entity influences the relationship between two others. Such mechanisms play a key role in biology, and one that is essential for effective biological function.

 

Sign use, as originally argued by Peirce and subsequently developed by Deacon (1998) and by Favareau (2015), is of three types, iconic, indexical, and symbolic. The first two types of sign involve entities in the immediate environment, but symbolic use, which use appears to be confined to human beings, can involve manipulations concerning entities absent from the immediate environment, which faculty is attributed by Deacon to a human ability to avoid being too involved in the current situation during mental activity. This can be understood in terms of memory mechanisms that can, as it were, hold on to signs so as to be able to act systematically with them, and thereby develop ‘games’ such as mathematics.

 

The basic problem with quantum mechanics is that a person’s decision as to what aspect of nature to observe can have real consequences, and it is unclear how such mental activity can be integrated with traditional physics; we cannot simply leave out the observer. A thesis in the above has been that semiotics (sign theory) will play a central role in such a future integrated physics, a basic task for such a future physics being that of bridging the gap between signs and the phenomena addressed by current physics, thereby arriving at an integrated point of view. A similar situation arises in conventional science, where a gap of this kind exists between fundamental physics and biology, one that can be bridged taking due account of a succession of levels, utilising a range of specialised approaches to deal with these.

 

The abstract of the same paper summarises this integration of physical cosmology with mind and meaning:

 

Regular physics is unsatisfactory in that it fails to take into consideration phenomena relating to mind and meaning, whereas on the other side of the cultural divide such constructs have been studied in detail. This paper discusses a possible synthesis of the two perspectives. Crucial is the way systems realising mental function can develop step by step on the basis of the scaffolding mechanisms of Hoffmeyer (2008 a, b), in a way that can be clarified by consideration of the phenomenon of language. Taking into account such constructs, aspects of which are apparent even with simple systems such as acoustically excited water, as with cymatics, potentially opens up a window into a world of mentality excluded from conventional physics as a result of the primary focus of the latter on the matter-like aspect of reality.

 

Josephson (2021) seeks to extend Barad’s complementation of quantum physics with socio-political systems, inverting the usual “theory of everything” approach to particle cosmology by treating it as an expression of goal-seeking agents. He notes that both Bohm’s notion of the implicate order and mind-like processes have been invoked:

 

In the following a more explicit picture is proposed, based on the existence of parallels between spontaneously fluctuating equilibrium states and life processes. Focus on the processes of natural language suggests a picture involving an evolving ensemble of experts, each with its own goals but nevertheless acting in harmony with each other. The details of how such an ensemble might function and evolve can translate into aspects of the world of fundamental physics such as symmetry and symmetry breaking, and can be expected to be the source of explicit models. This picture differs from that of regular physics in that goal-directedness has an important role to play, contrasting with that of the conventional view which implies a meaningless universe.

 

Josephson & Majumdar (2021) extend this, invoking Wheeler’s notion that physical reality may be made manifest through quantum measurement by participating conscious observers, and use nonlinear dynamics that takes biological factors into account:

 

The constraint that structures that develop should have biological value is shown to be able to account naturally for many features of the quantum domain, thus providing an alternative paradigm to the conventional ‘theory of everything’ one, which has over time become problematic. For the future, detailed investigation of nonlinear dynamics along the lines discussed here is likely to be more fruitful in regard to the problem of understanding nature than continuing current attempts to tweak 'theories of everything' to fit. 

 

Majumdar & Josephson (2020) formulate physics using system dynamics of self-selected fluctuations and correlations in a fundamental field:

 

Instead of the traditional reductionist method of looking at phenomena in nature, we look at how the interplay of symmetry breaking and entanglement of subsystems within this unified field leads to entropic complexification – which appears as the fundamental interactions and particles. This complexification optimizes system configurations to facilitate energy dissipation. The fluctuations in the background field encode laws for phenomena based on the stability, recurrence and patterns of these fluctuations.

 

Chris Nunn:  Go down the ‘timelessness’ route and you end up with some picture like Julian Barbour’s. The answer to this paradox could perhaps be that temporal extension is (proto)consciousness.

 

Chris King: I have two complementary views of conscious experience that overlap, just like the relativistic and quantum descriptions.

 

One is temporal and is our reckless journey through the multiverse, looking through our blind eye, as Nelson did at the signal to retreat in the Battle of Copenhagen, creating history for good or ill as we speak. That’s the “quantum consciousness IS uncertainty” vision.

 

The other is eternal relativistic timelessness, in which everything that we have been and will be is laid out, looking from sideways on in space-time from alpha to Omega, “as a thing already achieved”, as Maria Sabina said of her mushroom experiences. This doesn't mean everything is determined or teleological, but it's the view after all the transactional handshaking has taken place, of the great trip we and everyone took, so we can look on it from far off, perceiving as the compassionate, eternal consciousness we actually are. But Omega, as I have said, is not the end point. The consummation of existence is not Omega but the interactive Sigma – Paradise on the cosmic equator in space-time.

 

Chris Nunn:  But isn’t ‘consciousness’ a correlate at least of what removes uncertainty? The two concepts seem to lie at opposite poles of the same truth and it looks like we need some happy mean. And an idea of temporo-spatial evolution can provide one – ‘threads’ of consciousness-associated time getting woven into ever more elaborate forms.

 

Chris King: I do like your woven threads of consciousness a lot and yes, interactive consciousness is just like that.  Consciousness is here to mitigate, mediate, or utilise uncertainty as the need may be. However, uncertainty is not the opposite of consciousness, but an inside out view of the same thing.

 

It also depends on our frame of reference. If we are looking at brain states, if there is no uncertainty in our brain state then we are a zombie, or a robot. There is no role for subjective consciousness at all. So from the physical perspective the brain state needs to enter an uncertain state for subjective consciousness to be able to intervene and affect outcomes. From the subjective perspective this is the opposite, because we cease being zombies and become conscious agents, removing uncertainty from the world as we act by creating real history.

 

But again centrally the uncertain states that are seriously going to threaten our survival in the wild are things like a snake strike. This is all caught up in the uncertainty of the way circumstances play out. We can try to do “good hunting” and take the path we think the snake is less likely to be on, but the snake is going to play likewise to hunt me and take the path that I would least expect. These environmental problems are (a) computationally intractable and (b) quantum idiosyncratic, motivated by live volition and sheer coincidence. The same is true in modern society with other humans and traffic and hurricanes and diseases all playing wildcard elements.

 

So there is a deep correspondence between quantum uncertainty on all scales and conscious anticipation. Nelson knew he was looking through his blind eye and that it would change the course of history. It did. But the terrible acid test is that this looks like consciousness is staring right into the quanglement and fishing for a collapse state by some kind of transactional hand-shaking between past and future to anticipate the cat remaining alive.

 

Yes there are situations where we execute a well-designed plan and things go like clockwork, but even in these situations there are multiple sequential non-IID contingencies in just how we handle a process, which inevitably lead to uncertainty. The brain has worked out how to do this seamlessly because operating at the edge of chaos allows the uncertain regime and the determinate boundary conditions of consciousness to coexist in the dynamical system and for bifurcations into and out of uncertainty.

 

The real evolutionary victory of the conscious brain happened at the eucaryote endosymbiosis when the excitable membrane became a sensory and social communication dynamic in self-feedback. This is where we can look for the secret of this existential anticipation dynamic, because the brain is just a very advanced society of social amoebo-flagellates locked in “psychic” symbiosis through brain resonances, so to speak. The result is that we do not understand uncertainty or what its deep link to us is. We think of it as randomness, but all randomness ultimately is a secondary product of quantum uncertainty and it's obvious that a multiplicity of entanglements leads to a vast network of correlations. The randomness only appears as we approach the classical Born rule situation by travelling down an IID sequence.  So the paradox becomes uncertainty = free will.
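The claim that randomness only appears along an IID sequence can be made concrete with a small simulation (an illustrative sketch, not a model from the text): for many identically prepared qubits, each measurement outcome is individually uncertain, yet the relative frequency of outcomes converges to the Born probability |α|².

```python
import math
import random

def born_frequency(alpha, n, seed=1):
    """Simulate n IID measurements of the state alpha|0> + beta|1> (real alpha)
    and return the relative frequency of outcome |0>."""
    random.seed(seed)
    p0 = alpha ** 2  # Born rule: P(0) = |alpha|^2
    hits = sum(1 for _ in range(n) if random.random() < p0)
    return hits / n

alpha = math.sqrt(0.7)  # prepare with |alpha|^2 = 0.7
print(born_frequency(alpha, 10))       # short runs fluctuate widely
print(born_frequency(alpha, 100000))   # long IID runs approach 0.7
```

A non-IID process, by contrast, keeps changing the preparation between trials, so no such limiting frequency need exist – the sense in which continually changing contexts evade the classical Born limit.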

 

Chris Nunn:  There are two separate, albeit inter-related, issues here. One is neural deterministic chaos with its pseudo-random outcomes and unpredictability in anything other than the very short term, plus its capacity for harbouring strange attractors (usually termed ‘memories’!). The other is the huge question, albeit one that’s often ignored, as to whether the entire ‘quanglement’ realm is similarly constituted. There are major empirical problems with supposing that quantum EM fields don’t embody pseudo-randomness and associated universal structure, though ones constrained by the Higgs field may be more localised (the relevant experiments haven’t been done yet).

 

Chris King:  This is the absolute nub. We can argue about all the other details, but here is the jewel in the lotus. Tuneable chaotic neurodynamics, as Freeman emphasised, is not classical deterministic chaos, because it is the dynamics of quantum structures, so the use of edge-of-chaos in neurodynamics is not deterministic dynamical chaos, as in a mathematical system like the logistic iteration or the Lorenz flow. These are classical ideals and are not realistic.
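For contrast, the classical ideal named here is easy to exhibit (a standard illustration, not an argument from the text): the logistic iteration x → r·x·(1−x) is completely deterministic, yet at r = 4 two orbits differing by one part in a billion decorrelate within a few dozen steps, so long-range prediction fails even without any quantum uncertainty.

```python
def logistic_orbit(x0, r=4.0, steps=50):
    """Iterate the logistic map x -> r*x*(1-x), the textbook deterministic-chaos ideal."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_orbit(0.2)
b = logistic_orbit(0.2 + 1e-9)  # perturb the initial condition by one part in a billion
print(abs(a[5] - b[5]))    # still tiny: the orbits track each other at first
print(abs(a[50] - b[50]))  # after ~40 doublings of the error, the orbits are unrelated
```

The pseudo-randomness here is entirely an amplification of the initial condition, which is precisely what distinguishes this classical ideal from the quantum uncertainty discussed in the dialogue.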

 

The key is the point you are making about whether, for example, the phase coherence distinguishing conscious states from subconscious ones, which Pribram emphasised, is more than just an analogy with quantum measurement and is something fundamental about quantum measurement and entanglement. I think it is, because the superficial evidence is that the brain is deriving an anticipatory advantage in environmental uncertainty from using phase angle coherence sampling at the edge of dynamical chaos. The only way this makes sense is that it is advantageous to survival as a form of quantum anticipation, despite trading off potentially noisy disruption of classical causality.

 

Neurodynamics is a non-IID quantum process whose contexts are continually changing, so no approach to the Born interpretation occurs, and the whole thing needs to be treated quantum theoretically, which is beyond the scope of current science. The neuroscience area is still stuck in the classical paradigm, so you refer to deterministic chaos, but it isn’t classical. We simply haven't caught up conceptually with the entangled quantum universe we are consciously existing in. The fallacy lies in looking for isolated phenomena like the ion channel, where we can be convinced we are at the quantum level, accepting only those as quantum effects and treating all other aspects of neurodynamics as classical by default.

 

My take is that Pribram’s analogy is an actuality. This idea of quantum coherence measurement is reinforced by the fact that individual action potentials are phase correlated with the local continuous field potentials in a hand-shaking resonance, so the EEG is not just an artefact of action potential averages, but a genuine wave function. Thus the brain can seriously be looked at as a massively parallel quantum measurement process, where EEG modes are analogous to, or actually are, quantum “excitons”.

 

Chris Nunn:  I like to think that the conversion process can be viewed as the endowment of abstractions with temporal extension. You might object that one could equally well say ‘spatial extension’, but I don’t think that would be correct, because spatial extension (i.e. position) is already present in the ‘abstraction’ (i.e. the wave function), whereas temporal position isn’t.

 

Chris King: I agree about temporal extension, but I would add that the wave function, if you are referring to the quantum wave function, is both spatially and temporally extended, as in the Feynman and transactional descriptions, and vastly temporally extended into both the past and future to put it mildly.

 

Chris Nunn: There’s much confusion about free will. It may best be regarded as the capacity of volitional consciousness to influence behaviour and its own future content, leaving out questions about the determinants of volitional consciousness. I agree that it is not fully determined by purely neural constraints, though social constraints on it are very significant. To a large extent true ‘freedom’ may depend on an interplay between neural and social constraints, perhaps with some input from both (pseudo) randomness and universal ‘quanglement’ structure.

 

Chris King: Absolutely!

 

Chris Nunn: I think of Freeman neurodynamics in terms of ‘landscapes’ in classical dynamic state space. True, Freeman (and Vitiello) tried to apply the maths of QFT to these but, so far as I know, this was mainly because of the need to cope with almost infinite dimensionality. The advantage of staying classical is that it provides a ready explanation of a wide range of properties, especially sleep, which is all about smoothing out excessive landscape ‘rutting’ along with fitting new attractors into existing landscapes.

 

Chris King: You are absolutely right here. This is the way he did it in great detail using differential equations modelling excitatory and inhibitory feedback, generating phenomena of classical chaos and it is beautifully informative of transitions from high energy chaos into existing, or even new attractors from learning.

 

The complications start to set in when we go from the classical model to actual EEG potentials, where we are measuring a classical signal of discretely sampled voltages in a time series, which we conceive of as continuous potential variations in the scalp or, more deeply, on the cortical surface. Qasim et al. (2021) have shown that action potentials are phase responsive to the local continuous field potential, which people in turn believe is a tissue average of the effects of the same action potentials, so we have a particular kind of discrete-continuous feedback here which has the complementary characteristics we see in wave function reduction to the particle. I’m not saying they are the same, but that they both have a discrete-continuous complementarity based on phase, which is exactly what the uncertainty principle says in terms of counting wave beats to determine the time uncertainty of the energy.
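Spike–field phase coupling of the kind Qasim et al. report is quantified with a standard circular statistic, the mean resultant length of spike phases. The sketch below uses invented spike phases (an illustrative assumption, not their data or analysis) to show how phase-locked firing is distinguished from unlocked firing:

```python
import cmath
import math
import random

def phase_locking_value(spike_phases):
    """Mean resultant length of the spike phases on the unit circle:
    ~1 means spikes lock to one LFP phase, ~0 means no phase preference."""
    return abs(sum(cmath.exp(1j * p) for p in spike_phases) / len(spike_phases))

random.seed(0)
# Hypothetical neuron firing near the LFP trough (phase pi) with jitter,
# versus one firing at phases unrelated to the field.
locked = [math.pi + random.gauss(0, 0.5) for _ in range(500)]
unlocked = [random.uniform(0, 2 * math.pi) for _ in range(500)]
print(round(phase_locking_value(locked), 2))    # high: strong phase coupling
print(round(phase_locking_value(unlocked), 2))  # near zero: no coupling
```

Whether such phase coupling is merely a classical resonance statistic or something closer to quantum phase measurement is precisely the question at issue in the dialogue.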

 

And without saying ion channels are the only quantum process, individual action potentials are only a step of scale away from the ion channel, because threshold tuning in the neuron is a form of self-organised criticality that, near threshold, can make the action potential sensitive to the ion channel.

 

Chris Nunn: Any influence of quantum coherence on these landscape features is likely to depend on structured modification of classical dynamic timings, according to this picture.

 

Chris King: So this is where I think the classical description you are advancing starts to break down. We don't actually have a classical picture; we have a classical model and a biological picture, and the biological picture is a fractal process with scale handshaking. Moreover, isolating the ion channel as the only plausible quantum level is wrong, because the whole picture is one fractal reverberating process which is as far from IID-convergence to the classical as living thermodynamics is from equilibrium.

 

So the answer to the above is that it is wrong to think of the process on one level as classical in a way we can conceive of, e.g. thinking of the neuron in terms of classical differential equations, and then assume it’s classical and say any quantum effects are going to have to prove themselves and possibly complicate the classical process. The reality is that the whole thing evolved as a reverberating quantum system, and things on a larger scale that we think of as classical are actually ‘inflated’ quantum processes throughout, particularly given the butterfly effect, because the failure to converge to the classical means everything is not causally localised in the way we are anticipating.

 

This is a very hard call for experimental and theoretical science because, if non-IID processes do formally fail to converge to the classical overall, we haven't even started to embark on the scientific journey of how to make sense of this. This is why I think proving classical causal brain closure is scientifically impossible, so that the simple affirmation of subjective conscious physical volition carries far more analytical power and evidential weight and overturns physical materialism at the outset.

 

Chris Nunn: The way I’d see this is that phase coherence, whether classical or quantum, mediates attachment of memory-dependent meaning to information. The meaning is represented in terms of ever evolving topological structures, ones that can usefully be thought of as woven tapestries, which become conscious if temporal looping allows memories of the occurrence of the meanings. Any ‘anticipation’ (‘Bayesian brain’ manifestations) would usually be a function of indirect, long term classical loopings usually referred to as ‘learning’. ‘Quantum anticipations’ would have a secondary role, dependent on effects of ‘entanglement structure’ on classical neural coherence timings, that could be responsible for a range of ‘psi’ phenomena.

 

Chris King: All these things may be true as well, but the difference with edge-of-chaos quantum dynamics is that all these processes you are describing are now boundary condition filters on root uncertainty, rather than root causalities, so depending on how robust they are in a given context they can dominate overall in some situations and are manifestly incomplete in others.

 

Chris Nunn: Agreed, though we do need to retain conceptual distinctions for many purposes. Like the weather out there, lots of neural dynamics are mainly classical. They may be a lot more sensitive than the weather to any influences of quanglement structure, but it’s still a limited influence.

 

Chris King: I think we need to replace “mainly classical” with “formative classical boundary conditions on uncertainty”.

 

Chris Nunn: Agreed there’s a lot to be said for musical analogies, but it’s a music that we experience synaesthetically – as Scriabin, for instance, saw shapes in sounds. Your view of EEG modes has a lot in common with Johnjoe McFadden’s ‘cemi-field’ theory (McFadden 2020). I prefer to think of them as creating form in the structure of time – and time, of course, is a constraint on quantum particles – not itself a wave-function-derivable ‘particle’.

 

Chris King: That’s very complicated. The way I enter the nierika on mushrooms is a synaesthetic resonance between rushing chirping sounds and the visual patterns before me, and then as I’m “listening” to it all, it expands into vistas that I can see in the corner of my eye with exquisite observational detail, and then if I let go, I fall into the synaesthesia and then that’s down the moksha rabbit hole until I come catapulting out at some later point and realise “what the hell just happened to me?” and recollect the full impact of the journey I have taken. While this IS a musical analogy it’s also a cosmological reality, just like the particle trails in the LHC are. Whatever time is in this picture is still in the process of being elucidated, as the paradoxes of quantum gravity intimate.

 

Chris Nunn: Wouldn’t it be more correct to say that wave functions don’t own inherent spatio-temporal extension, except universally, and via incorporation of a potentiality for defining spatial localisations?

 

Chris King: This is really deep and complicated. I really wanted to finish with the transactional interpretation, so here it is. Both the Feynman description of virtual particle exchanges in QED and the transactional interpretation of real particle exchanges involve special relativistic hand-shaking between past emitters and future absorbers. Looked at from the transactional viewpoint, before we open the cat’s box there is a web of past-future handshaking transactions. Somehow, as we open the box, this web is decomposed in some kind of non-linear interaction and as we look inside, there is one emitter and one absorber, and if the latter was in the geiger counter, the cat is no more. But this process doesn’t go from past to present to future. It’s transversal to elapsing time, which is why I called it “trans-causality” in 1985 when I wrote “Dual-time Supercausality”.

 

This is the jewel in the lotus that is yet to teach us where quasi-randomness and randomness both meet their match in the partially collapsing entanglement. The key point is it spans space-time as we know it and the process has paradoxical constraints. It has to look quasi-random in terms of the Born interpretation at one end, but it has to manifest precise correlations between each of the entangled components at the other. That is the second lesson after we sing the musical hymns to reality and take another long journey into the unknown!

 

Chris Nunn: Looks like a very nice concept resonance! The only additional point I’d like to make is that Qasim et al’s finding probably relates to the largest scale of a (pseudo) fractal structure of brain dynamics, in which the smallest scales relate to neuronal contributions to dendritic plexi, intermediate scales to ‘astrocytic domains’ and the largest scales to EEG fields generally.  Anyhow, I suppose the next step is how to test all this in a way that has some chance of convincing neuropsychologists generally. The fate of Daryl Bem and others’ ‘pre-sponse’ (‘pre-sentiment’) findings shows this isn’t easy. An example of something that might work better is that our ideas (along with John Jay Kineman’s ‘holonic’ concept) would seem to predict something very like Rupert Sheldrake’s (1981) formative causation, at least in relation to learning by brains and possibly generally. Shouldn’t be too hard to convincingly test that if someone would provide the resources. What do you think? Any other suggestions?

 

Chris King: I have an issue with Sheldrake's morphic resonance (formative causation). This to me is causally what RNA would be like if the polymerisation were exothermic, i.e. it would become an irreversible clunker. I had a conversation with Rupert in the 1980s and tried to point out that the leading edge issue is creative insight, not morphic resonance. It’s a space-time problem to which anticipation is key, not just spatial replication. It’s getting the first instance of an idea that is formative, not replication ad nauseam. The idea that this replicates afterwards is not explaining the creative source and it leads to a teleological fait accompli. Thus I think the notion is a hopeful evolutionary monster.

 

Chris Nunn: That’s one way of looking at it. I’d see ‘formative causation’ as analogous to neural learning but embodied in a quanglement landscape, not a neural dynamic state space one. Just as with neural learning, the relevant quanglement feature can be expected to ‘deepen’ with repetition. The assumption here is that there is feedback from classical dynamics to ‘quantum field structure’, but this is implicit in all our models (yours, John’s, mine and possibly Penrose’s).

 

Chris King: That’s a good insight! I agree basically. I spent the day going over Penrose and Hameroff, which I do think is off the beam, so that’s how I stated my summary, but it made me think again about Penrose's space-time blistering idea, and yes, he is also saying uncertainty is not random, at least in his context confined to the microtubules, so it does fit with a kind of quanglement learning process, in particular the idea that at least some forms of entanglement are non-random and might be connected to mind states.

 

The Diverse States of Subjective Consciousness

 

A key issue in this discussion, particularly in regard to the materialist emphasis of neuroscience, is the fact that there are diverse states of conscious experience that extend far beyond and deeper into our awareness than the states of everyday experience of the world around us. Those who spend virtually all their living existence only relating to the everyday waking state can easily slip into identifying subjective consciousness with merely being an internal model of physical reality around them and then succumb to mind-brain identity, or exclusively materialistic conceptions of subjective consciousness as a brain mechanism.

 

This belies the complexity and depth of the problem of consciousness, because there are a variety of states of non-ordinary consciousness which are neither imaginary, nor simply hallucinatory, nor just random uncoordinated phenomena, but are perceived as veridical experiences having the same, or a qualitatively similar, reality value to our experiences of the world around us.

 

Fig 107: The sheer diversity of conscious mental states needs exploration in its own right, without a priori assumptions.

 

These include:

 

(a) Dreams, particularly the richer end of the REM spectrum of states, including lucid and prescient dreams.

 

(b) Psychedelic experiences, which ‘reveal’ internal realities, from kaleidoscopic geometric visions to whole scenes and encounters, generally of a radically different nature to those of dreams, meditation on its own and ordinary consciousness. These constitute the traditional sacramental routes to entheogenic realisation, or moksha.

 

(c)  Dissociative experiences, such as those induced in different ways by salvinorin-A and the anaesthetic ketamine, which differ from psychedelics in both their mechanism and their experiential character.

 

(d) Meditative and contemplative states of sensory withdrawal and ego loss. These are the traditional non-sacramental routes to mystical union, moksha and satori, which complement the sacramental routes in (b).

 

(e) Psychic and parapsychological experiences, including (1) precognition, (2)  prescience & deja vu, (3) uncanny unexplained coincidences, (4) telepathy, and (5) psychokinesis.

 

(f)  Near-death experiences (NDEs) (Greyson & Flynn 1984, Grof 1988, Borjigin et al 2013, Parnia et al. 2023), which may accompany cardiac arrest, anaesthesia and non-life-threatening states; and out-of-body experiences (OBEs) (Monroe 1989), where a person on the border of sleep, in an NDE, or in a non-ordinary conscious state observes their own body from without, in a disembodied state.

 

Moksha [29] means release from the cycle of birth and death in this life, in the sense of experiencing and realising the conscious cosmic unity that transcends the physically mortal condition.

 

Dreaming states tend to be the most fully fledged experiences of non-ordinary reality human beings experience, although ephemeral in that they can only be recalled on waking. They are also veridical sentient experiences that are highly unpredictable and creative and, although they may reflect recent experiences, may also hint at future experiences that come true later, hinting at a deep time-spanning reality underlying existential consciousness. This appears to go far beyond the cited functional roles of REM and deep sleep in memory. Lucid dreaming states, although difficult to initiate and unstable – prone either to immediate awakening through activation of the reticular activating system, or to a false awakening and reversion to the normal REM state – can enable the conscious observer to experience an often very exotic mental state in real time by recognising they are dreaming during the dream state.

 

Psychedelic and dissociative experiences are non-ordinary waking states that, by contrast with dreaming sleep, can be experienced and interrogated consciously. We have already discussed psychedelic experiences extensively throughout this work. Psychedelic states have particularly striking attributes, especially when combined with meditative withdrawal, characterised by “ultimate reality” and types of abstract perceptual experience having intense reality value radically different from any other kind of experience. These are not hallucinations, and their nature needs a great deal of further subjective exploration to begin to fathom fully.

 

Meditative experiences, although more controlled and tending more to mindfulness and compassion, likewise stand in the Eastern tradition as a central route to union with cosmic reality, exemplified in the union of Brahman and the atman.

 

Parapsychological experiences, the orphans of materialistic science, are perceived as commonplace by many individuals, and surveys indicate a degree of statistical support (Cardeña 2018, Wardell 2019, Giroldini 1991). Some of these, such as (1) and (2), may be associated with deep properties of the quantum universe, hinted at in the Feynman formalism, weak quantum measurement and the transactional interpretation.

 

These diverse forms of subjective experience present a deep and complex domain of subjective discovery which has barely been tapped, and they emphasise the inadequacy of a physically materialistic view, which reduces the authenticity of everyday experience to a merely epiphenomenal internal brain model and assigns all forms of non-ordinary conscious experience to delusory, imaginary, confabulated, stochastic or hallucinatory status.

 

The psychedelic psychotherapist Stanislav Grof (1988) gives an insight into this wider “geography” of non-ordinary conscious states surrounding the realms of traditional perennial philosophy, in his notion of the territories of holotropy surrounding those of traditional religious cosmologies:

 

The hylotropic, or matter-oriented, mode of consciousness is the term I am using for the normal, everyday experience of consensus reality. The holotropic mode of consciousness, or consciousness aiming toward  wholeness and totality of existence, characterizes certain nonordinary psychological states, such as meditative, mystical, or psychedelic experiences. It can also be observed in many spontaneously occurring episodes referred to as psychotic by contemporary psychiatry.

 

In the hylotropic mode of consciousness, we experience only a limited and specific segment of the phenomenal world or consensus reality from one moment to another. The nature and scope of this experiential fragment of reality is quite unambiguously defined by our spatial and temporal coordinates in the phenomenal world, the anatomical and physiological limitations of our sensory organs, and the physical characteristics of the environment.

 

In the holotropic mode of consciousness, it is possible to reach, in addition, all the remaining aspects of existence. These include not only access to one's biological, psychological, social, racial, and spiritual history and the past, present, and future of the entire phenomenal world, but access to many other levels and domains of reality described by the great mystical traditions of the world. Comparative study of mystical literature shows that most of these systems seem to agree on a complex, layered, and hierarchical model of reality that includes phenomenal as well as transphenomenal aspects of existence.

 

The gross experiential realm reflects the world of ordinary waking consciousness and consensus reality based on the evidence of the sensory organs. The corresponding worldview and way of being in the world is limited to information derived from the physical body and the material world, to linear causality as the only connecting principle, and to Newtonian understanding of space and time. Many systems of perennial philosophy have identified and explored, in addition, several transphenomenal levels or realms of existence, usually referred to as subtle, causal, and ultimate or absolute.

 

Both the subtle and the causal levels can be further subdivided into lower and higher. The lower subtle, or astral-psychic, level contains traditionally out-of-body experiences, astral travel, occult and psychic phenomena (precognition, clairvoyance, psychokinesis), auras, and similar experiences. The higher subtle level comprises archetypal forms of deities, supreme presences and spiritual guides, experiences of divine inspiration, visions of light, and audible illuminations.

 

The lower causal level is the realm of savikalpa samadhi, the final God, creator of all the realms, the audible light of bija mantra, the source of all individual deities. The higher causal realm is characterized by ultimate transcendence and release into boundless radiance, or nirvikalpa samadhi. On this level, there is no subject or object, no self or god, only formless consciousness as such. On the level of the Absolute, consciousness awakens to its original condition and suchness, which is also suchness of all of existence gross, subtle, and causal.

 

Savikalpa samadhi is sometimes described as the state in which the yogi's consciousness temporarily dissolves into Brahman. Depending on the yogic tradition, savikalpa is either the second or third highest level of samadhi.

 

The observations from modern consciousness research with or without psychedelic drugs bring, in general, strong supportive evidence for this understanding of reality. However, in specific details, the cartography of consciousness found in perennial philosophy would have to be extended and modified to fit the findings of experimental psychiatry and the new experiential psychotherapies. In the following text, I will attempt to outline a classification of transpersonal experiences that is based on the scheme of perennial philosophy, but incorporates, at the same time, the findings of modern scientific research.

 

To create a transpersonal taxonomy that would reflect in an accurate and comprehensive way the introspective data and objective observations from modern consciousness research is not an easy task. The spectrum of transpersonal experiences is not only extremely rich, ramified, and variegated, but includes levels of reality governed by laws and principles that are different from those that rule ordinary reality. Many transpersonal experiences, being ineffable, elude adequate verbal description and occur on levels of reality where those very aspects that could ordinarily serve as principia divisionis, such as time, space, duality, and polarity, or linear causality, are transcended. The problem is further complicated by the holographic nature of consciousness and mutual interpenetration of its different levels and domains.

 

This realm extends to both a unification with all life and with the ultimate reality of the Cosmos:

 

Transpersonal experiences which involve transcendence of spatial barriers suggest that the boundaries between the individual and the rest of the universe are not fixed and absolute. Under special circumstances, it is possible to identify experientially with anything in the universe, including the entire cosmos itself. Here belong the experiences of merging with another person into a state of dual unity or assuming another person's identity, of tuning into the consciousness of a specific group of people, or of expansion of one's consciousness to such an extent that it seems to encompass all of humanity. In a similar way, one can transcend the limits of the specifically human experience and identify with the consciousness of animals, plants, or even inorganic objects and processes. In the extremes, it is possible to experience the consciousness of the entire biosphere, of our planet, or of the entire material universe.

 

In some rare instances, an individual in the holotropic mode can have the experience of consciousness expanding to such an extent that it encompasses the totality of life on this planet, including all of humanity and the entire fauna and flora, from viruses and unicellular organisms to highly differentiated species. Instead of the ordinary identification with one living organism, this experience represents identification with life as a cosmic phenomenon.

 

Individuals experiencing identification with Cosmic Consciousness have the feeling of encompassing the totality of existence and reaching the Reality underlying all realities. They sense beyond any doubt that they are in connection with the supreme and ultimate principle of Being. This principle is the only real mystery; once its existence is accepted, everything else can be explained from it and understood. The illusions of matter, space, and time, as well as an infinite number of other forms and levels of reality, have been completely transcended and reduced to this one mysterious principle as their common source and denominator.

 

This experience is boundless, unfathomable, and ineffable. Verbal communication and the very symbolic structure of our language appear to be a ridiculously inadequate means to capture it and convey its qualities. Our phenomenal world and everything that we experience in the ordinary states of consciousness fades away in the light of this supreme awareness as limited, illusory, and idiosyncratic aspects of this one Reality. This principle is clearly beyond any rational comprehension, and yet even a short experiential exposure to it satisfies all the subject's intellectual and philosophical craving. All the questions that have ever been asked seem to be answered, or there is no need to ask any questions at all.

 

This emphasises the position already expressed in the introduction that, while brain states may be a necessary condition for human conscious experiences, they are not a sufficient condition, and the primacy of the conscious condition introduces a cosmological paradox. All our experience of the world is exclusively through our conscious perception of it, and yet we are bound to accept the reality of the external universe for our own conscious survival as biological organisms and to negotiate the affairs of the world. But we simply don't know whether the universe is actually able to manifest in the absence of our conscious experience of it, and the fact that we don't is critical to good cosmological design. So experiential reality, including the source of our scientific description of it, starts with consciousness and ends up with the universe, while biogenesis starts with cosmic symmetry-breaking and ends up with conscious organisms. That is the existential paradox.

 

Fig 108: Wheeler’s universe as a self-excited circuit. The universe starts out right as the thin black line, then it grows and in time gives rise to ‘observer-participancy’, which in turn imparts ‘tangible reality’ to even the earliest days of the universe.

 

John Archibald Wheeler (1983) writes:

 

"An old legend describes a dialog between Abraham and Jehovah. Jehovah chides Abraham: 'You would not even exist if it were not for me!', to which Abraham replies: 'Yes, Lord, that I know, but also You would not be known if it were not for me.' In our time the participants in the dialog have changed: they are the universe and man." "How does quantum mechanics differ from what Bishop George Berkeley told us two centuries ago: 'Esse est percipi', to be is to be perceived?"

 

Insight, Agency, Uncertainty, Creativity, Karma, and Volition: Subjective Consciousness as a Quantum Climax

 

The following is an exploratory description, based only on circumstantial evidence, of how subjective consciousness, through the environmental boundary conditions imposed by the neurodynamically active brain, may bring about the climax conscious cosmology of the sentient universe as we know and experience it. It is purely speculative, but it is a test, if not a proof, of principle about what we, as subjectively conscious sentient beings with autonomy of volitional will over our decision-making behaviour in the physical universe, are actually here doing.

 

Notwithstanding the apparent contradiction between quantum reality, occurring within space-time, and gravity, which is not renormalisable like the other forces and in which, in general relativity, the curvature of space-time is functionally coupled to gravitation, quantum reality nevertheless appears to be a fundamental transition in cosmology that is not just confined to the vector-boson forces: colour, weak and electromagnetic. For example, superstring theory is couched in terms of supersymmetry, in which each integral-spin boson is paired with a half-integral-spin fermion and vice versa, balancing the negative vacuum contribution of the fermions against the positive contribution of the bosons, accompanied by string quantum excitations at the Planck scale. It thus becomes critical to understand the role of quantum reality in biological, and particularly neurodynamic, processes associated with subjective consciousness.

 

Fig 108a1: Cosmic symmetry-breaking and molecular fractality. Cosmic symmetry-breaking results in the four forces of nature, in which quarks bond in colour triplets to form protons and neutrons, forming atomic nuclei with orbital electrons to make atoms, which then bond in a succession of strong covalent and ionic bonds and then cooperative weaker H-bonds and polarity bifurcations to form fractal molecular structures extending from individual molecules to tissues and organs.

 

Biogenesis and the evolution of complex life are clearly a consequence of the symmetry-breaking of the forces of nature. Molecular matter, and particularly the organics generated in supernovae, arises through the tiered interaction of the strong and weak nuclear forces and electromagnetism, under the collective influence of gravity, to form some 100 atomic nuclei, atoms with orbital electrons, and molecules.

 

This interaction, supplied by the free energy of solar radiation at the negentropic planetary surface, results in far-from-equilibrium thermodynamics giving rise to a complex edge-of-chaos dynamic molecular landscape in a phase transition, where cycles of anabolic polymerisation and catabolic denaturing become highly sensitive to varying "initial" conditions determined by the particular concentrations of organic molecules forming prebiotic precursors of proteins and nucleic acids, as discussed in the fractal biocosmology section. The sheer diversity of organics in the Murchison chondrite, fig 53, illustrates this. This also requires polynucleotides to be marginally unstable thermodynamically, as RNA is, to avoid irreversible polymerisation, thus requiring a sustained phase-transition state to achieve replication.

 

These are not just structurally fractal, as shown in the biocosmology section, but have fully fractal dynamics, as manifested in biological organisms. The unlimited emergent potential of such regimes has been vastly underestimated by the exclusive focus on attempting to find the “magic formula” that will immediately generate working polynucleotide and polypeptide systems as a mechanically defined system, rather than understanding the ecologically emergent potential of such systems as a whole.

 

Molecular systems, as explored experimentally and detected in carbonaceous chondrites, together with our knowledge of molecular processes such as quantum tunnelling in enzymes and superposition in protein folding, indicate that such systems involve quantum processes and constitute vastly more complex forms of three-dimensional quantum spatio-temporal computing automata than the classical discrete examples of 1D and 2D cellular automata (CAs) familiar to mathematics. As I wrote this, q-bits for quantum computing had been generated on single molecules (Castelvecchi 2023).

 

Such CAs can be analysed in terms of chaos and order based on the proportion of live and dead cells, and some of those at the edge of chaos, such as the 1D rule 110 and John Horton Conway’s 2D Game of Life, take the form of universal computers, where any Turing-compatible process can be represented as a set of initial conditions which can then be computed by the CA's universal rules. In a similar but vastly more complex quantum computing phenomenon, free molecular interaction in molecular populations can give rise to living organisms, in which genetically coded proteins can in turn generate excitable cells and brains, which are not only the most complex computational objects in the known universe but are also subjectively conscious. CAs such as the Game of Life can generate both a computer CPU as a computational illustration and free-living digital ecologies, in which initial conditions give rise to systems of potentially unbounded digital complexity.
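The edge-of-chaos behaviour of rule 110 is easy to exhibit directly. The following is a minimal illustrative sketch (my own, not from the cited literature) of the 1D rule-110 automaton: its eight-neighbourhood update table is just the binary expansion of 110, and iterating it from a single live cell generates the characteristic irregular triangular structures that underlie its computational universality.

```python
# Illustrative sketch of the elementary 1D cellular automaton rule 110,
# cited above as a universal computer. The lookup table below is the
# binary expansion of 110 over the eight 3-cell neighbourhoods.
def rule110_step(cells):
    """Apply Wolfram rule 110 to a row of 0/1 cells (wrap-around edges)."""
    n = len(cells)
    rule = {(1,1,1): 0, (1,1,0): 1, (1,0,1): 1, (1,0,0): 0,
            (0,1,1): 1, (0,1,0): 1, (0,0,1): 1, (0,0,0): 0}
    return [rule[(cells[(i-1) % n], cells[i], cells[(i+1) % n])]
            for i in range(n)]

def run_rule110(width=31, steps=15):
    row = [0] * width
    row[width // 2] = 1          # a single live cell as initial condition
    history = [row]
    for _ in range(steps):
        row = rule110_step(row)
        history.append(row)
    return history

for row in run_rule110():
    print("".join("#" if c else "." for c in row))
```

The same skeleton runs any of the 256 elementary rules by changing the lookup table, which makes the contrast between ordered, chaotic and edge-of-chaos rules easy to explore.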

 

Molecular systems on the early Earth exceed the classical dynamics of the Game of Life in three ways: (1) they are three-dimensional, not two; (2) they are quantum systems exploiting both superposition and entanglement; (3) they continuously redefine the rules of engagement, because the molecules themselves define the "field" dynamics through their autocatalytic interactions. The early Earth, rich with organics in a variety of far-from-equilibrium contexts, from Lost City undersea vents to ponds and shorelines with heightened magnetic interactions prone to chiral selectivity, provides a wide variety of geological contexts forming an "open set" of initial conditions, generating a continuously evolving ecology of arbitrarily complex molecular progenotes as cooperatively interactive, far-from-equilibrium, edge-of-chaos systems under sustained phase transition.

 

Fig 108a2: The Game of Life consists of a 2-D rectangular field of cells updated by the rule that any live cell with 2 or 3 live neighbours remains alive, while any dead cell surrounded by exactly 3 live cells becomes alive. This results in a critical dynamic at the edge of discrete chaos, where any grid of cells has the capacity to form a universal computer and/or reach unlimited scales of complexity. These are illustrated (a) in a simulation of a computational CPU, (b) in terms of a series of self-propagating organismoids, (c) by a disordered partially symmetric assembly generating a series of gliders, with fractal details inset in (c2), and (d) by a large glider acting as a generator for an ensuing slip-stream of clouds of persistently active cells.

 

What is needed is a dedicated experimental program, like the LHC, in the form of an organic molecular reactor that can generate phase-transition systems with a variety of controllable molecular inputs, accompanied by free-energy sources, e.g. simulated stellar radiation, spark discharge, Lost City vent-type chemical interfaces and magneto-chiral evaporating shorelines, that can be sampled in real time by automated gas chromatography and mass spectrometry as a tuned evolving system.

 

The laser, superconductivity and superfluidity are examples of highly coherent quantum systems whose effects are evident at the macroscopic scale. The laser also demonstrates macroscopic multi-particle entanglement at ambient temperatures. The classical electromagnetic field also exhibits macroscopic quantum coherence: the most obvious example is the carrier signals for radio and TV, which satisfy Glauber's quantum description of coherence. Quantum coherence has been shown to be equivalent to quantum entanglement, in the sense that coherence can be faithfully described as entanglement and, conversely, that each entanglement measure corresponds to a coherence measure (Tan & Jeong 2018). A common misconception is that biological processes are fundamentally classical in nature and that the law of mass action will prohibit quantum phenomena, which will become thermodynamically overwhelmed by interactions that destroy any quantum coherence at warm, wet ambient biological temperatures of 25 °C, but this is an unjustified conclusion, as we shall see.

 

Biology is rich with phenomena at the quantum level which are essential to biological function. Protein folding is a manifestation of quantum computation: the protein folding problem is solved biologically through each molecule being effectively in a superposition of states, which can use wave tunnelling to reach an optimal active conformation. Enzymes likewise invoke quantum tunnelling to enable transitions through their substrate's activation energy barrier to form the products, accelerating the reaction rate by orders of magnitude. Research confirms that the conscious brain can detect single photons and is able to amplify sensitivity given previous threshold input (Tinsley et al. 2016, Takeshita et al. 2017).
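The strong mass dependence of tunnelling, which is why electron and hydrogen transfer dominate enzymatic tunnelling, can be estimated from the standard WKB transmission factor T ≈ exp(−2d√(2mV)/ħ) for a square barrier much higher than the particle energy. The barrier height (0.5 eV) and width (0.1 nm) below are hypothetical round numbers for illustration only, not figures from the cited studies:

```python
import math

HBAR = 1.054571817e-34            # reduced Planck constant, J*s
EV = 1.602176634e-19              # one electronvolt in joules

def tunnel_probability(mass_kg, barrier_eV, width_m):
    """WKB estimate T ~ exp(-2*d*sqrt(2*m*V)/hbar) for a square barrier,
    valid when the particle energy is well below the barrier height V."""
    kappa = math.sqrt(2 * mass_kg * barrier_eV * EV) / HBAR
    return math.exp(-2 * width_m * kappa)

m_e = 9.1093837015e-31            # electron mass, kg
m_p = 1.67262192369e-27           # proton mass, kg

# Hypothetical illustrative barrier: 0.5 eV high, 0.1 nm wide.
t_electron = tunnel_probability(m_e, 0.5, 1e-10)
t_proton   = tunnel_probability(m_p, 0.5, 1e-10)
print(t_electron, t_proton, t_electron / t_proton)
```

With these assumed parameters the electron tunnels with probability of order one half, while the proton, about 1836 times heavier, tunnels some thirteen orders of magnitude less readily, illustrating why tunnelling contributions fall off so steeply with particle mass.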

 

Quantum entanglement has been claimed to be behind the way migrating birds navigate in the magnetic field (Giachello et al. 2016, Paul et al. 2017, Hiscock et al. 2016, Gunther et al. 2018). When a spinach photosynthetic active centre absorbs a photon, it has also been suggested that the excitation's wave function performs a spatially parallel quantum computation, which enables the excitation to travel down the most efficient route to reach the chemical reaction site (Hildner et al. 2013, Romero et al. 2014). However, photon absorption in photosynthetic bacteria involves pairs of excited proteins which enter into a superimposed vibrational state, in which one or the other becomes activated (Maiuri et al. 2018, Thyrhaug et al. 2018, Duan et al. 2022), with excitons persisting only on ultra-short time scales.

 

Danah Zohar (1990) in “The Quantum Self” has also drawn attention to ways in which quantum phenomena could arise in the brain, including the coherence states proposed by Herbert Fröhlich (1968, 1977, 1983, 1988), which have received some experimental support (Katona et al. 2015), particularly in systems driven far from equilibrium (Zhang et al. 2019). Catecholaminergic Neuron Electron Transport (CNET) is a hypothesised neural signalling mechanism in catecholaminergic neurons that would use quantum mechanical electron transport. Electron tunnelling occurs in ferritin, an iron storage protein that is prevalent in those neurons. The hypothesised function of this mechanism is to assist in action selection, and it would be capable of integrating millions of cognitive and sensory neural signals using a physical mechanism associated with strong electron-electron interactions (Rourke 2019, 2020). Fisher (2015) has also speculated that phosphorus, and in particular Ca9(PO4)6 calcium phosphate Posner clusters, might play an integral role in neural processing using quantum coherence. Estimates of coherence time vary from 10⁵ seconds (ibid) down to 60 s (Player & Hore 2018), but may be inconsistent with evidence from Ca++ dynamics not correlating with consciousness (Chen et al. 2020). We have also seen specific quantum theories of consciousness, such as those of Hameroff & Penrose (2014) and Freeman and Vitiello's (2016) dissipative quantum model of brain dynamics, based on attractor dynamics and the interaction of quantum fields with water and the cytosol.

 

Symbiotic Existential Cosmology’s approach focuses on edge-of-chaos dynamics as a transition to quantum sensitivity, through “excitons” distinguished from one another by tuneable wave-phase coherence between continuous field voltages and neuronal action potentials, complemented by neuronal thresholds and ion-channel activations, as a process driven by self-organised criticality leading to quantum sensitivity. None of these need operate in complete isolation from quantum decoherence in the “warm wet brain”. They derive their basis from the corresponding excitations in single-celled eucaryotes, so the key step is the sensitivity of edge-of-chaos excitation in the single cell to quantum-level changes, e.g. in the ion channel, associated with sensing external quantum phenomena: light (photons), sound and vibration (phonons), and electromagnetic disturbances.

 

The emergence of organismic consciousness is an evolutionary process. The brain inherited the dynamics that make our form of subjective consciousness possible long before multi-celled organisms evolved. LECA, the founding single-celled eucaryote, already possessed the G-protein linked receptors found in the brain and, going even deeper, LUCA, the last common ancestor of all life, possessed a chemiosmotic excitable membrane, enabling chaotic sensitivity via the butterfly effect during bursting and beating. In animal brains these in turn lead to phase-front processing, which forms a representation of the same dynamics involved in quantum measurement, bringing quantum entanglement into the dynamics, driven by self-organised criticality at tipping points running from individual ion channels to whole-brain dynamics, in a coherent system reaching a complexity unique in the universe.
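A one-line chaotic map suffices to illustrate the butterfly-effect sensitivity invoked here. This sketch (an illustration of chaotic sensitivity in general, not the author's neurodynamic model) iterates the logistic map at full chaos: two orbits differing by one part in 10¹², a perturbation of roughly quantum scale relative to the dynamical range, diverge to order-one separation within a few dozen steps.

```python
# Illustrative sketch of the "butterfly effect": the logistic map
# x -> r*x*(1-x) at r = 4 is fully chaotic, so an initial perturbation
# of 1e-12 roughly doubles each step until it saturates at the size of
# the attractor, amplifying a microscopic difference to macroscopic scale.
def logistic_orbit(x0, r=4.0, steps=80):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_orbit(0.3)
b = logistic_orbit(0.3 + 1e-12)   # perturbed by one part in 10^12
separations = [abs(x - y) for x, y in zip(a, b)]
print(separations[0], separations[40], max(separations))
```

The average doubling per step corresponds to a Lyapunov exponent of ln 2, so after about 40 iterations the trajectories are effectively uncorrelated; this is the generic mechanism by which edge-of-chaos dynamics can amplify quantum-level fluctuations to the scale of whole-system behaviour.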

 

To gain a view of the potential for emergent quantum phenomena, even in apparently simple monoatomic environments, we now turn to liquid helium-3 as an oracle for the possibilities that arise in an ultra-cold environment where there are few interfering thermodynamic interactions.


Fig 108b: Diverse properties of superfluidity and quasi-particles in liquid helium. (a) Helium becomes liquid at 3-4 K, and at much lower temperatures Bose-Einstein condensation enables the coherent waves of superfluidity. Superfluid phases arise in 4He (2.17 K) and in 3He (1 mK), once Cooper pairs have formed. (b) The quasi-particle experiment showing both reflected “holes” and the shadow of transmitted particles on permitted orbits (Noble et al. 2022, Eltsov 2022). (c) Quantum turbulence in rotated superfluid He consists of a lattice of quantised vortices linked in vortical sheets (centre), turning with the cylinder walls (Lounasmaa & Thuneberg 1999). (d) X-rays reveal tornado-like vortices swirling in a droplet of 4He (Cho 2015). (e) Mini-bang experiment modelling the origin of the universe in superfluid 3He (Ruutu et al. 1996). (f) High-mass solitons generated in superfluid 3He (Yefsah et al. 2013).

 

3He has a fermionic nucleus of two protons and a neutron, plus two orbital electrons, and thus becomes a fermionic superfluid that displays a variety of unstable quantum phenomena illustrated in fig 108b, including quasi-particle states, quantum turbulence, high-mass solitons and cosmic-string-like defects. Superfluid 3He consists of Cooper pairs of atoms entangled in the same wave function, just as the electrons are in solid-state superconductivity, with residual unpaired complexes of atoms forming quasiparticles. Unlike 4He, which becomes a bosonic superfluid generating Bose-Einstein condensates, this results in a situation akin to spin glasses (Oppermann & Rosenow 1996) possessing devil's staircase fractality (Çaglar & Berker 2017), in which some quasi-particle states can be forbidden regions of the medium if they come too close to an obstructing superfluid tangle, resulting in reflection of a quasi-particle “hole”. This phenomenon was clearly demonstrated in an immersed micro-experiment in which quasi-particles generated by disturbing the medium were either reflected off a superfluid tangle caused by a wire loop, if they passed too close, or passed on to the photographic plate, showing the shadow of the particles that were not deflected (Noble et al. 2022, Eltsov 2022).

 

Fig 108c: Optimality of H2O demonstrated in its varieties of interaction: (a) Dissociation into acidic and basic ions. (b) Hydrogen bonding into the oxygen lone pair. (c) Geometry of the H-bond. (d) Water's high boiling point is a function of H-bonding, in which 85% of molecules at room temperature are caught in a bonding lattice. (e) The H-bonded crystal structure of ice, which is both less dense and more ordered than liquid water, as reflected in liquid water's extreme specific heat (right). (f) Polar bonding of water molecules with positive and negative ions results in larger hydrated ionic radii, with naked and hydrated ionic radii shown centre. (g) Water structures surrounding proteins, nucleic acids and phospholipids are pivotal in their biological structures, involving an interplay between the polar properties of water and the non-polar moieties of amino acids and nucleotide bases to form protein micelles, nucleic acid helices and the membrane structures of the living cell. (h) Many-body representation up to the four-body interaction, benchmarked for the Prism, Book and Cage isomers of the water hexamer (Yu et al. 2022). (i) Many anomalous properties of liquid water originate from its well-connected hydrogen bond network, including unusually efficient vibrational energy redistribution and relaxation; shown is a transient hydrogen bond contraction of roughly 0.04 Å, followed by thermalisation (Yang et al. 2020). (j) Water's unique bonding structure is also reflected in its high boiling point.

 

We now transfer our focus to the biological realm of ambient thermodynamic temperatures around 25 °C and the unique properties of liquid water, in relation to biological and neurodynamic processes. These temperatures clearly do not support the relative entropic isolation of ultra-cold-state physics, but they may nevertheless invoke unique quantum properties of their own. It should be noted that the most utilised high-temperature superconductor, Bi2Sr2Ca2Cu3O10 (BSCCO), becomes superconducting at 110 K, well above the 77 K of liquid nitrogen and far above the 4.15 K of mercury, and that significantly higher temperatures have been claimed. This demonstrates that entangled pairs can remain robust at temperatures far above absolute zero, showing that nascent possibilities may also exist for quantum processes in neurodynamics at biological temperatures.

 

Differential electronegativity in the first-row elements C, N and O induces a symmetry-breaking between the non-polar C-H bond and the increasingly polar N-H and O-H bonds (fig 108c(j)). This results in phase bifurcation, dividing the environmental medium into polar (aqueous) and non-polar (oily) phases, in association with low-entropy water bonding structures induced around non-polar molecules. This is directly responsible for the development of a variety of structures, from the membrane in the context of lipid molecules, to globular protein enzymes and nucleic acid base-stacking.

 

Water is one of the most important, yet least understood, liquids in nature (Yang et al. 2021). It has the highest heat capacity of all common liquids, with a specific heat capacity of 4.186 J/g·°C at 20 °C, compared with ice at −1 °C (2.093), kerosene (2.01), copper (0.385), mercury (0.140) and proteins (lysozyme, 1.483). This reflects the very large number of H2O quantum modes that can become thermally excited. Liquid water consists of a dynamic weak bonding structure, in which around 82% of the molecules at any time are bonded in locally solid configurations (Yu et al. 2022). It also contains holes and multiple liquid phases (Huang et al. 2009, Nilsson & Pettersson 2015).
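A quick arithmetic check on the quoted specific heats shows water's thermal buffering: equal energy delivered to equal masses produces very different temperature rises.

```python
# Specific heats quoted above, in J per gram per degree C.
specific_heat = {
    "water (20 C)": 4.186,
    "ice (-1 C)":   2.093,
    "kerosene":     2.01,
    "lysozyme":     1.483,
    "copper":       0.385,
    "mercury":      0.140,
}

energy_j, mass_g = 100.0, 10.0   # 100 J delivered to 10 g of each substance
for name, c in specific_heat.items():
    delta_t = energy_j / (mass_g * c)   # dT = Q / (m * c)
    print(f"{name:13s} rises {delta_t:5.2f} deg C")
```

The same 100 J warms 10 g of water by about 2.4 °C but the same mass of mercury by over 70 °C, which is the thermostatic property that lets water damp thermal fluctuations in living tissue.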

 

Critical in this process are the optimal properties of H2O among all molecules as a polar, H-bonding and ionising medium, making possible in turn polarity interactions, aqueous acid-base bifurcation, ionic solubility and hydrogen bonding. The optimal nature of water as a hydride is illustrated in the boiling points (fig 108c(j)). Water provides several other secondary bifurcations besides polarity. The dissociation 2H2O ↔ H3O+ + OH− lays the foundation for the acid-base bifurcation, while ionic solubility generates the anion-cation bifurcation. Many key properties of proteins and nucleic acids are derived from water bonding structures, in which a counterpoint of H-bonding and phase-bifurcation effects occurs, determining the form of the alpha helix and nucleotide base pairing. The hydrophilic/non-polar bifurcation is central to the tertiary structures of globular proteins as 'micelles' and to the helices and hairpins of RNAs and DNA. The solubility, or otherwise, of a variety of molecules and ions is derived from the energies and entropies of their induced water-bonding structures. Polymerisation of nucleotides, amino acids and sugars all involves dehydration elimination of H2O, giving water a central role in polymer formation. It has also been suggested that water contains quantum-coherent domains, in association with boundaries such as macromolecules and membranes.

 

Complementing this is the fact that membrane excitability and neurodynamics involve the movement of ions and electron and ion transport, both generating and responding to electromagnetic fields. H+, Na+, K+ and Cl⁻ are all fermionic ions, since they have even numbers of neutrons and orbital electrons (once one electron escapes or is caught to form the ion) and an odd number of protons, enabling both bosonic excitations and fermionic effects.
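The parity argument can be made explicit: protons, neutrons and electrons are each spin-½ fermions, so a composite with an odd total count of them is itself a fermion, and one with an even total a boson. A minimal sketch (the isotope choices ¹H, ²³Na, ³⁹K and ³⁵Cl are my assumption of the most abundant isotopes):

```python
# Spin statistics of a composite from its constituent fermions:
# protons, neutrons and electrons each carry spin 1/2, so an odd
# total count gives a composite fermion, an even total a boson.
ions = {
    "H+":  {"p": 1,  "n": 0,  "e": 0},   # bare proton
    "Na+": {"p": 11, "n": 12, "e": 10},  # 23Na minus one electron
    "K+":  {"p": 19, "n": 20, "e": 18},  # 39K minus one electron
    "Cl-": {"p": 17, "n": 18, "e": 18},  # 35Cl plus one electron
}

def statistics(p, n, e):
    """Return 'fermion' for odd total constituent count, else 'boson'."""
    return "fermion" if (p + n + e) % 2 else "boson"

for name, counts in ions.items():
    print(name, statistics(**counts))  # each of the four is a fermion
```

By contrast a neutral helium-4 atom (2 protons, 2 neutrons, 2 electrons) comes out a boson, which is the parity distinction the text is drawing on.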

 

If we now turn to tissues such as the cerebral cortex, fig 51 shows that biological tissues are the most complex form of matter known in the universe, with diverse electrochemical and biochemical properties and hand-shaking interaction on all fractal scales, from the molecular level, through molecular complexes such as proteins and nucleic acids, to cell organelles from the ribosome to the membrane, endoplasmic reticulum and nucleus to organs and organisms. These structures are complemented in nervous systems by patterns and pathways of electromagnetic excitation from individual action potential to tissue spanning waveforms in electrical potential in a state of mutual interaction. Pivotal to these are phase tuning between action potentials and tissue potential wave phase noted in fig 78(4), and transitions in and out of chaos in edge-of-chaos dynamics, noted in fig 78(1) and fig 35(right).

 

Fig 108d: (1) Quantum suppression of chaos in wave function scarring by a classical repelling orbit. (2) The quantum kicked top has a chaotic regime resulting in entanglement between orbital activation and nuclear spin. (3) Proteins have a funnel-shaped energy landscape with many high-energy, unfolded structures and only a few low-energy, folded structures. Folding occurs via parallel microscopic trajectories involving quantum tunnelling. (4) Bird magnetic navigation is mediated by quantum-mechanical, long-lived spin coherences, which in realistic models of cryptochrome can provide the necessary precision (Hiscock et al. 2016). (5) The close-to-founding eucaryote Naegleria gruberi has both amoeboid and flagellated modes, and contains actin and microtubule cytoskeletons, mitotic and meiotic machinery suggesting cryptic sex, several transcription factors and a rich repertoire of signalling molecules, including G-protein coupled receptors, histidine kinases and second messengers including cAMP. Mitotic microtubules in blue and flagellar in yellow (Velle et al. 2022). (6) A 3D volumetric view of a pancreatic beta cell obtained using soft x-ray tomography, with the distribution of insulin granules (yellow), mitochondria (pink) and the nucleus (blue) highlighted. The boxed regions point to structural details of these regions obtained using cryo-electron tomography, illustrating the fractal complexity of eucaryote cell organelles (White et al. 2020). (7) Transitions in and out of chaos in brain dynamics. (8, 9) Phase coupling between action potentials and continuous tissue potential waves, realised in the EEG, has homology to quantum measurement through wave beats.

 

Phase tuning represents a form of potential quantum measurement of the brain's own excitations, noted in fig 71(c), in which the uncertainty of time is related to energy through the frequency of wave beats against a reference wave. Cerebral neurodynamics has access to this, because action potential emission is locked to the continuous tissue potential wave phase, in a way that is not available to physicists, who lack a quantum reference wave. We have already seen that classical radio waves emitted by transmitter towers satisfy the conditions of quantum coherence, so this relationship is potentially not just an analogy. Pivotal here is the relationship between decoherent excitations, regarded as local processing, and the more coherent global excitations associated with conscious brain states. This is in turn coupled to edge-of-chaos brain dynamics, which in self-organised critical states leads to whole excitable neurosystems becoming potentially sensitive to quantum instabilities. Due to the fractal architecture and dynamics of neurosystems, these result in hand-shaking from the level of the ion channel, through individual neurons, to whole brain states during phase transitions from chaos to order, in decision-making, insight and perception. In this process the conscious state is linkable to the greatest complex of coherent oscillation in the brain dynamic, interacting through tuned degrees of entanglement with other partially decoherent dynamics, which may become part of the central conscious flow through phase modulation. This quantum dynamics at the edge of chaos may have anticipatory properties essential for survival, forming a wave function surrounding the present that incorporates the immediate past and future in the emergent flow of conscious experience, avoiding the snake's strike in the immediate moment.
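The time–energy relation invoked here can be written out explicitly (a sketch of the standard textbook relations, with the beat frequency of two waves supplying the timing reference):

```latex
\Delta E \,\Delta t \;\gtrsim\; \frac{\hbar}{2}, \qquad
E = h\nu, \qquad
\nu_{\mathrm{beat}} = \lvert \nu_1 - \nu_2 \rvert ,
```

so resolving a beat against a reference wave over a time $\Delta t \sim 1/\nu_{\mathrm{beat}}$ fixes the energy difference $\Delta E = h\,\nu_{\mathrm{beat}}$ between the two modes: the longer the phase comparison, the finer the energy discrimination.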

 

When we look back at quantum chaos in fig 57(1), we can see that the quantum suppression of chaos in closed systems is breached in the quantum kicked top, where two quantum systems (orbital energetics and nuclear spin) are mutually coupled. This leads to complex interactive processes in neural tissues, which have climax fractal complexity, in that the systems concerned consist of overlapping coherent waveforms which, like Cooper pairs, can display resistance to simple thermally based decoherence and may instead couple to specific interactive modes defined by the precise context of a given excitation, limiting the ways decoherence can intervene and resulting in more structured forms of quantum interaction, akin to the properties of helium-3 quasiparticles, solitons and cosmic string phenomena. Combined with the fact that brain processes, with their continually changing contexts, do not involve IID (independent, identically-distributed) measurements convergent to the Born probability interpretation, this means that, even at 25 °C, the conventional neurodynamics in the conscious brain may well be fully quantum in nature as it stands, and not require special quantum avenues, such as Hameroff and Penrose (2014).
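For contrast, what IID convergence to the Born rule looks like can be sketched numerically (a toy illustration of repeated identical measurements, not a model of brain dynamics; the state and the random seed are arbitrary choices):

```python
import random

# Toy sketch: a qubit a|0> + b|1> measured repeatedly under identical
# preparation (IID). Outcome frequencies converge to the Born
# probabilities |a|^2 and |b|^2 -- precisely the convergence that, as
# argued above, continually changing contexts would deny the brain.
a2 = 0.3  # |a|^2, the Born probability of outcome 0
random.seed(1)  # fixed seed for reproducibility

def measure():
    """One projective measurement: 0 with probability |a|^2, else 1."""
    return 0 if random.random() < a2 else 1

N = 100_000
freq0 = sum(1 for _ in range(N) if measure() == 0) / N
# By the law of large numbers, freq0 approaches 0.3 as N grows.
```

Each trial here is statistically identical; a measurement context that changes on every trial has no such stable limiting frequency, which is the point being made.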

  

The universe is thus a sentient cosmos, evolving in such a way as to make possible a fully intimate connection between subjective and objective aspects of reality, in which existential "isness" and objective behaviour reach a consummation in the sentient brain in conscious perception and volitional will. Subjective consciousness can then intentionally affect the physical universe, just as we experience it to do, by collapsing the wave function of the "multiverse" of quantum probabilities into the line, or course, of cosmic history we experience, through Schrödinger's cat paradox. This is what we experience as conscious decision-making, affecting the world around us, for better, or worse.  This in turn means that, as a consummating manifestation of cosmic sentience, we have a personal and collective responsibility to fulfil this emergent quest and to preserve and unfold the diversity of life in the universe and this is what the meaning and purpose of life is all about. It is we ourselves, the universe's sentient beings, who form the consummating manifestation of the interactive catastrophe set off by symmetry-breaking in the cosmic origin and it is we who must needs protect and unfold the diversity of life and consciousness, so that the process can reach ultimate fulfilment. The buck stops with us and the fate of life within the universe becomes our personal responsibility to protect.

 

This cosmology, by its structural design presents a unique form of anthropic cosmology. The anthropic cosmological principle states that our location in the universe is necessarily privileged to the extent of being compatible with our existence as observers, given that we could only exist in a type of universe capable of developing and sustaining sentient life. The strong form says the universe (and hence the fundamental parameters on which it depends) must be such as to admit the creation of observers within it at some stage. Proponents of the anthropic principle argue that it explains why this universe has the age and the fundamental physical constants necessary to accommodate conscious life, since if either had been different, we would not have been around to make observations. Anthropic reasoning is often used to deal with the notion that the universe seems to be fine tuned in terms of the relative strengths and interactions of the four forces, in such a way as to make conscious observers possible.

 

The phrase "anthropic principle" first appeared in Brandon Carter's (1973) contribution to a symposium honouring Copernicus's 500th birthday. He articulated the anthropic principle in reaction to the Copernican principle, which states that humans do not occupy a privileged position in the universe: "Although our situation is not necessarily central, it is inevitably privileged to some extent." He defined two forms of the anthropic principle, a "weak" one which referred only to anthropic selection of privileged spacetime locations in the universe, and a more controversial "strong" form that addressed the values of the fundamental constants of physics. In effect the symbiotic cosmology "inverts" the Copernican principle by stating that climax consciousness provides the ultimate privilege, by enabling the universe as a whole to become conscious.

 

Fig 109: Paradise on the cosmic equator. According to the anthropic principle, the existence of conscious observers is a boundary condition. Conscious life can exist only once cosmic evolution has generated the elements of life and evolution then has a chance to reach an organismic climax.

 

The weak anthropic principle (WAP) states that the universe's ostensible fine tuning is the result of selection bias (specifically survivorship bias). Most often such arguments draw upon some notion of the multiverse to provide a statistical population of universes to select from. However, a single vast universe is sufficient for most forms of the WAP that do not specifically deal with fine tuning. The strong anthropic principle (SAP), as proposed by Barrow & Tipler (1988), states that the universe is in some sense compelled eventually to have conscious and sapient life emerge within it. The Wheeler participatory anthropic principle (PAP) states that only universes with a capacity for consciousness can exist. Wheeler states: "We are participators in bringing into being not only the near and here but the far away and long ago."

 

On Qualia

 

Symbiotic Existential Cosmology avoids treating qualia as an analytical issue: although it proposes primal subjectivity, it rejects any kind of formal descriptive protopsychism – any kind of structured description – as a cosmological misconception.

 

The physicalist viewpoint says we have trichromatic colour vision because there are three photoreceptors, and the qualia form a three-dimensional colour space as part of the neurodynamic analysis of photons from a single spectrum of frequencies, which we don't see, but only – like a cat paradox experiment – sample as particles through rhodopsins. This means that the neurosystems have evolved to construct a 3D colour space to represent the incoming excitations. A few people have tetrachromatic colour vision – some, particularly women, who have two X-chromosomes and so can carry two alleles for, say, the green receptor, claim to have scintillating colour vision and become artists – but they have no means to give expression to a four-dimensional colour space. Like the whisker barrels of the mouse, and our dependence on orchestration of major neurotransmitter types like serotonin, GABA and glutamate to organise embryonic cortical connectivity, the three-colour vision space may be genetically based.
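The reduction of a full spectrum to a three-dimensional colour space can be sketched as a projection onto three receptor sensitivity curves (a toy model: the Gaussian curves, with peaks near the approximate human L/M/S values of 564, 534 and 420 nm, stand in for real cone pigment responses, and the common width is an illustrative choice):

```python
import math

# Toy model: three cone types project any incoming spectrum onto a
# 3D colour space. Peaks approximate human L/M/S cones; the Gaussian
# shape and width are illustrative, not measured rhodopsin data.
CONES = {"L": 564.0, "M": 534.0, "S": 420.0}
WIDTH = 40.0  # nm, illustrative

def sensitivity(peak, wavelength):
    """Gaussian stand-in for a cone's spectral sensitivity."""
    return math.exp(-((wavelength - peak) / WIDTH) ** 2)

def cone_response(spectrum):
    """Project a spectrum {wavelength_nm: intensity} to an (L, M, S)
    triple. Any spectrum, however rich, collapses to three numbers."""
    return tuple(
        sum(i * sensitivity(peak, w) for w, i in spectrum.items())
        for peak in CONES.values()
    )

# A monochromatic 534 nm light excites M most strongly:
L, M, S = cone_response({534.0: 1.0})
```

A fourth receptor type would simply add a fourth coordinate to the tuple, which is all a tetrachromat gains: a richer projection, not a new kind of quale.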

 

Symbiotic Existential Cosmology changes this picture, so that intentional subjectivity plays a central role, with subjective conscious volition changing the physical universe through our behaviour to support our survival. The brain then acts as an environmental boundary-condition filter on this interactive process, so that when we hear a hiss in the dappled forest canopy, our subjective consciousness integrates all this incoming sensory "information" into the flow of experience, partly from memory, but also from the "anticipatory quantum of the conscious present". We anticipate a snake strike and maybe look to see the source, but in any case jump out of the way and survive.

 

So the Symbiotic Existential Cosmology view is subtly different from physicalism: intentionality is subjective, but the palette of qualia is still driven by the filters of environmentally adapted neurodynamic processes, rather than by pan-protopsychic entities in their own right, which amount to a form of pseudo-physicalist mentalism.

 

When we take psychedelics, this all begins to make more sense, because consciousness becomes synaesthetic, so sounds and colours intermingle in a single consciously perceivable mode, in which the visual and auditory senses make an interplay of sonorous kaleidoscopic visions that can open out into whole dream-like visions accompanied by vast vistas and exotic scents. I remember tripping and smelling the overwhelming aroma of fried chicken, because the candle in the room was smoking – we turn all these into something that is recognisable.

 

Memory is not the defining determinant of conscious experience. The stream of consciousness just uses brain-derived memories to form a contextual backdrop for actively anticipating environmental uncertainties through subjective experience and applying volition over the physical universe to make hay while the sun shines.

 

Sound is a time-evolving one-dimensional frequency space, taking the form of a wavelet transform, that can become symphonic. Sight is a three-dimensional spatial plus three-dimensional colour space, with other attributes like 3D edge detection and object reification thrown into the mix. All of these look and feel like the adaptive components of an internal model of reality constructed by our neurosystem. But then enters olfaction, where we have around 150 receptors and dogs have hundreds to a thousand, forming the diffuse multidimensional experience we sense intuitively, from delicate scents to overpowering odours. Benedictine liqueur is a good example, with about 22 different intensely olfactory molecules, each with its own receptor binding, so the neurosystems that analyse this are impossible to classify in any structured way, as red or blue can be, yet we have a clear capacity to sense odours and scents.
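The claim that sound forms a time-evolving frequency space can be made concrete with a minimal Morlet-style wavelet sketch (pure Python; the sample rate, tone frequencies and window width are illustrative assumptions): convolving a signal with windowed complex exponentials yields power as a function of both time and frequency.

```python
import math, cmath

# Minimal Morlet-style time-frequency analysis: correlate a signal
# with Gaussian-windowed complex exponentials centred at a given time.
SR = 1000  # samples per second (illustrative)

def signal(t):
    # 50 Hz tone for the first half-second, 120 Hz for the second.
    return math.sin(2 * math.pi * (50 if t < 0.5 else 120) * t)

samples = [signal(n / SR) for n in range(SR)]

def wavelet_power(freq, center, width=0.05):
    """|<signal, windowed exp(-2*pi*i*f*t)>|^2 near time `center` (s)."""
    acc = 0j
    for n, x in enumerate(samples):
        t = n / SR
        window = math.exp(-((t - center) / width) ** 2)
        acc += x * window * cmath.exp(-2j * math.pi * freq * t)
    return abs(acc) ** 2

# Early in the signal 50 Hz dominates; late, 120 Hz dominates:
early_50, early_120 = wavelet_power(50, 0.25), wavelet_power(120, 0.25)
late_50, late_120 = wavelet_power(50, 0.75), wavelet_power(120, 0.75)
```

Unlike a plain Fourier transform, which would report both tones with no sense of when, the windowed analysis recovers the melody's temporal evolution, which is the "time-evolving frequency space" being described.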

 

Symbiotic Existential Cosmology says that subjectivity has to be optimally compliant to these physical boundary conditions imposed by the quantum physics of the environment in terms of orbital bonding, phononic resonances in the cochlea and photon sampling in the retina, otherwise it begins to impose its own "psychic" constraints, which will inhibit biospheric adaption and organismic survival and cause potential causal conflict with brain processing.

 

Symbiotic Existential Cosmology’s approach to this is that physical boundary filters induce a complex adaptive subjective response, which we envision as the flow of subjective experience of reality, as a flow of environmentally adapted “hallucinations” that we all depend on, but are part of a much larger conceivable universe of which William Blake said:

  

“If the doors of perception were cleansed, everything would appear to man as it is infinite”.

 

Again when Blake says:

 

"We are led to believe a lie when we see with and not through the eye that was born in a night to perish in a night while the soul slept in beams of light"

 

He is not saying "Reality is not what you see but that which sees", as Deepak Chopra put it – that consciousness IS reality, or makes reality on its own, although it IS primary – but that it's a lie when we restrict ourselves to perceiving only the physical world, ignoring the visionary state. Seeing with is perception; seeing through is vision. The "soul", i.e. consciousness, can also see the light from within. The key is to look within, through our visionary "vision", not just assume consciousness is reality. We need actually to make the visionary journey across the Styx, not just stare at the ferry boat.

 

So I see qualia as the residue of what we get when the infinity of the unconstrained “mind at large” is anchored down by the constraints of neurosystem adaption, not determined by the brain's neurosystems, but constrained by the boundary conditions of the reducing valve of physical perception. We have to deal with qualia as part of the vision quest and go right into it all, not try to analyse how it happens, but just go into the Nierika and see what the journey unfolds.

 

Multiple Minds and One World: Why SEC is ICAM

 

In the veridical way existential reality is generated, subjective experience is primary – 100% of our knowledge of the world around us, and of our visionary inner landscapes of awareness, is derived through our subjective conscious experiences of reality. The world around us is then realised as a sensory subcategory of these experiences, through the consensual overlap of our subjective perceptions of external reality, through which we gain a common experience of the physical world. We then interpret these as "physical existence" in a universe containing biological brains which, like ourselves, appear also to be able to have private inner subjective experiences, although we can't directly perceive them. We may also, in meditative and entheogenic states, converge to an inner reality which we experience in the stream of consciousness, unbound by sensory constraints, in what appears to be a unified 'cosmic consciousness'.

 

By contrast, the attempted construction of reality from the physical universe and its brains remains incomplete, because there is no explanation – nor even any idea of what such an explanation would be like – of how brains can also have private, integrated, wholistic subjective conscious experiences: the hard problem of consciousness research. Pure materialism thus completely fails to explain, let alone manifest, how or why we have an inner conscious life at all and, worse still, completely denies our volitional ability to mentally affect the physical world around us, when we are intentional conscious agents requiring intentionality to ensure our survival in real time.

 

Fig 109b: Multiple subjective holistic conscious volitional minds converge to a consensus mental view (centre) of the physical world around us, involving a single divisible engulfing universe (background). In the physical view there are multiple objective brains, but each invokes a unique private conscious experience, evident only to the subject.

Notwithstanding the notion of a probability multiverse in some quantum interpretations, we experience only one world at large, in which matter is subdividable and decomposable. Yet we are a multitude of integrated wholistic subjective conscious minds, which cannot see or communicate with one another's consciousness directly, but only through our vivacious volitional behaviour in the physical world, although telepathy may be an adventitious, if ephemeral, alternative. Our minds do not experience physical reality in and of itself, which consists of colourless, odourless quanta possessing no sensory features but only energy, wavelength, position and inertia. Rather, we perceive reality subjectively through an internally generated model of reality, providing the idealised 'hallucinatory' interior mental view we call "being conscious of our subjective experience", which extends beyond physicality, although filtered through neurobiological sensory processes such as three-colour retinal vision, Fourier frequency representations of sound, multidimensional receptor olfaction and internal somatosensory 'feelings' of love and lust and longing, in a sentient 'world' of colour, sound and other subjective qualia, which we cannot subdivide, but which flow through us continually in an unbroken whole. The key to all this is volition over the physical world around us – that we are subjective conscious beings possessing intentional will to act and survive as biological organisms in the physical universe.

 

The relationship is not, and cannot be, a bijective duality between a single world and multiple minds, unifying the 'inner' conscious and 'outer' physical views, because there is not a simple bijective duality as a whole between the subjective and objective aspects. Strict dual-aspect monism frameworks, such as Awret (2022) and Vimal (2022), cannot describe veridical reality as it is, even when they involve asymmetric dualities such as the AdS-CFT holographic principle, or a strict bijection between individual conscious experiences and their neural physical bases. They freeze the subject–world relationship in a non-interactive bijective correspondence that does not adequately explain why multiple minds exist in a single universe, or how subjective conscious volition occurs veridically over physical reality, because the subjective is confined to a ghost in the machine of the brain, forced into lock-step with physical processes. This makes them existentially equivalent to identity theory, little different from the unconscious zombie that manifests no subjectivity at all in a pure physicalist description, despite hypothetically replicating a "brain's physical processes".

 

Although Awret’s use of the holographic principle is a potent oracle for the complementary natures of subjective and objective, there is no evidence subjective experience is more like an abstract gravitational theory than the quantum universe we know. It is then ensnared in an eliminativist denial of volition, inconsistent with subjective empiricism and with the survival of the subjectively conscious organism, in a way which cannot be directly falsified as claimed:

 

I am an eliminativist on free will. This position is easy to defend. I know that I have the ability to change my mind and privately so but lets not exaggerate. You seem to suggest that the indeterminism associated by the collapse provides room for consciousness to 'bias' the collapse in certain instances. The good news is its falsifiable, the bad news is that it is ridiculous.

 

The claim to bias is untenable because there is no evidence subjective volition violates the Born probability interpretation yet theories, from transactional super-causality to super-determinism, could provide more subtle correlations or even causal interactions below the measurable level. Awret, in personal correspondence, contrasts his view with Sean Carroll (2020), as allowing for a more versatile relationship, enlarging isomorphism to an asymmetric bijective duality, but at the cost of free-will:

 

Consciousness exists. It's real. I talk about it all the time. I asked this question to people like Sam Harris, who don't believe in free will. Have you never tried to make a difficult decision? Do you really just say, well, let the laws of physics do what they're going to do? Of course. At the level where we talk about people, we attribute choices and volition to them like we attribute conscious experiences to them. And I'm all in favor of that. I think that they exist. They're real. They should be treated as such. That doesn't provide any evidence at all that they are not isomorphic to physical things going on in the universe.

 

However, the more versatile dual-aspect framework of the Pauli-Jung conjecture (Atmanspacher 2012) may do so, because it suggests induced, occasional, evasive mind–matter correlations, in which the physical is not completely independent of the mental, and both are not independent of the assumed reality underlying them, as exemplified by Jung's notion of synchronicity. This idea stems from the unus mundus – the "one world" of existential experience of the mediaeval philosophers, which goes back to the legend of Plato's Cave. In the Pauli-Jung conjecture, the physical state is holistic before measurement, and the subjective aspect involves the unconscious emergent into consciousness, together with archetypes being open to synchronicity, or "meaningful coincidence", made possible by the fact that both the observer and connected phenomena ultimately stem from the same source, the unus mundus. Pauli in particular talks about the "autonomous activity of the soul" as "something objectively psychical that cannot and should not be explained by material causes". The room left in the versatile account allows for correlations below the level of measurable events. A hidden variable theory which determines the outcome of collapse in terms of the complex interactions of the cosmic wave function could provide just such a correlation with subjectively experienced environmental uncertainties, themselves having a critical impact on organismic survival.

 

Symbiotic Existential Cosmology, which likewise accepts primal subjectivity as axiomatic, with subjective conscious volition having efficacy over the physical universe, is thus classified as ICAM – interactive complementary aspect monism. There is a single extant reality comprising the universe and all our conscious experiences, both of the physical world and of our own inner realities. These are complementary and, as Pauli commented, like quantum complementarity, subjectivity and objectivity are mutually exclusive, although both are essential complements to sentient existence. They are not bijectively dual, but are interactively coupled, either by hidden correlations or by interactive causes, in our both possessing subjective consciousness of the world around us and executing our conscious intentional agency, in subjective conscious volition over the physical universe.

 

Psychedelics and the Fermi Paradox

 

The relationship between psychedelics and the Fermi paradox, already discussed in detail earlier, is the most controversial part of Symbiotic Existential Cosmology, so an explanation is in order. The Fermi paradox is the question: why is there no evidence of life out there in the universe, given that, from what we already know, it is a very probable occurrence? Is it because it's hiding, or because dominant cultural civilisations become unstable on evolutionary time scales and self-destruct in a Fermi extinction? Even if life is extremely unlikely in the universe, as some measures suggest (Sandberg, Drexler & Ord 2018), the primary importance of protecting life on Earth is commensurately increased. Ahead of COP26, Brian Cox warned on the BBC that Earth's demise could rid the galaxy of meaning, noting that the unique events that led to civilisation mean its demise could 'eliminate meaning in the galaxy for ever'.

 

The word psychedelic means "psyche revealing", from the Greek psychē – the human mind, soul, or spirit – and dēloun, to make clear or manifest. The classic 5HT2a psychedelics are paradoxical serotonin receptor super-agonists that appear to induce a distinct second signalling pathway, setting off different processes from simply a surfeit of serotonin, putatively involving serotonin–glutamate receptor dimers that metabotropically modify glutamate excitation across the cortex. By contrast, both SSRI anti-depressants and entactogens affect the serotonin transporter to increase serotonin levels at the receptor, but the entactogen MDMA is not a psychedelic. It gives a strong serotonergic high amid touchy-feely 'love drug' sensations, possibly because of secondary oxytocin effects, but not a psychedelic 'trip'.

 

Psychedelic species and substances set off a flood of incoming sensory stimulation, combined with a tendency for the default mode network to go silent, resulting in the experience of ego loss, and, in combination with meditative repose, can become full-blown quantum change experiences, in which religious people may experience God or "divinity", alternatively described as "ultimate reality" by the non-religious. Hence the name entheogen is applied to their ritual use in spiritual and shamanic contexts. Of course, in negative contexts, this can become Heaven and Hell, as Aldous Huxley pointed out, so good conducive contexts are critical. But the fact that psychedelics are being used successfully to treat terminal illness and severe depression shows that, even in dire contexts, they can be very beneficial.

 

Psychedelics effectively induce a situation where subjective consciousness begins to derive an internal model of the reality of its own internal dynamics, invoking kaleidoscopic geometric figures and synaesthesia – resonant mixed sense-mode experiences – in which accentuated daydreaming evolves into veridical visions – actual perception of observational realities 'out there' – and finally the existential experience of union with the 'other' – the signature of Brahman-atman and peak accounts of near-death experiences.

So while meditation alone can result in states of peaceful mindfulness and compassion and in some people mystical experience, brain studies of meditation reveal a more controlled state of oscillatory synchrony amid a sparser spiritual or mystical experience. This explains why moksha is so rare and why the Eastern tradition resorts to reincarnation to seek enlightenment in a future life cycle, although some people with conducive genetic biochemistry enter these states more often naturally and can become mystic visionaries, leading both to new religions and cultic misadventures.

 

Fig 110: An artist's portrait of a psychedelic experience.

 

Psychedelics can reveal, in far greater first-person depth and illumination, the dimensions of the visionary experience shared by religious mysticism and meditative repose, and do it in a way which is characteristic of those aspects of near death experience that take us to the very edge of existence. The psychedelic experience gained from ingesting a psychoactive species is not a false vision, or inferior to a pure spiritual experience and is modulated as part of the attentive experience of the subjectively conscious observer, who remains able to distinguish reality from fiction.  Bringing all these together, we have a research recipe for the subjective equivalent of the LHC in cosmological physics. We can tie this phenomenon to edge-of-chaos modulations disrupting the perceptual filters in our neurodynamics that our evolution as a species has selected for, to confine the observed plethora of internal and external information to a vastly reduced core subset, tuned down for organismic survival.

 

The key markers of human selection have been Machiavellian social intelligence on the one hand and the "mating mind" (Miller 2000) on the other, in which astute female reproductive choice has led to an XY-chromosomal peacock's-tail runaway of super-intelligence (Fielder & King 2017), amid the softer, kinder aspects of perpetual human sexual receptivity and human sensual love and longing. Nevertheless, with the loss of the innocence of our original virtues – verifiable trust in good character over time during our gatherer-hunter phase in small bands of people – human evolution since has clearly been towards an increasingly urban patriarchal culture of human dominance over other humans, the female sex and the diversity of nature, enshrined in the growth of the egotistical mind, an overblown survival tool turned to the pursuit of dominance and power.

 

Human cultures have reacted to this dilemma with altruistic punishment, asserting prescriptive, highly incorrect moral cosmologies, while extolling the ideals of virtue and compassion. In recent form, this has led to utopian notions of humanity, combined with artificial intelligence and technology, becoming the dominant force in a mechanistic universe, violating our symbiotic relationship with nature and woman alike, leading to the mass extinction of biodiversity and our own attrition or extinction.

 

In the pursuit of longevity, amid human cloning and genetic modification, those seeking human immortality are trying to turn themselves back into parthenogenetic organisms whose life span is indeterminate, but this is ultimate selfishness. No parthenogenetic species can survive long term without sexual recombination and the biosphere to support it, so this is a cul-de-sac and evolutionary suicide, as is any form of species dominance that violates the fundamental principle of biospheric symbiosis of all species within it.

 

So the question remains: why are there psychedelics, and do they have a meaningful role in the evolutionary process? If we go back to biogenesis itself, we have a partially solved problem of far-from-equilibrium fractal molecular chemistry breaking through to the RNA era. Given the organic gas clouds of Orion, the Lost City vents littering the primordial olivine ocean floor and other catalytic processes, all except the hard end of intelligent designers accept that this is a natural process. Likewise for the eucaryote emergence, which after all is a symbiosis of the two complementary foundational life forms, archaea and bacteria. We also have the tendency of adventitious mutation within the biosphere’s diversity to produce a plethora of organic molecules, so it is hardly surprising if some of these are psychedelic – in our case on Earth most prominently a mere three: psilocin, DMT and mescaline.

 

But there is another side to this story. The human brain is actually an intimately coupled society of some 10¹⁰ neurons communicating and modulating their activity through the same core types of neurotransmitter that amoebo-flagellates evolved as single-celled organisms for social signalling. We think of the brain as predominantly electrodynamic, but it is a profusely sappy biochemical organ in which electrodynamic excitation is punctuated and mediated by biochemical synaptic transmission. The metabotropic receptors modulate key dynamical modes, from direct excitation and inhibition (glutamate, GABA and acetylcholine), through waking vigilance, mood and drive (norepinephrine, serotonin and dopamine), to the stages of light and dreaming sleep involving several of these.

 

An evolutionary key here is that these neurotransmitters are also formative developmental morphogens. Serotonin, for example, retains a role in human brain development similar to its role as the fruiting-tip organiser in Dictyostelium, extending from maternal serotonin inducing the neural crest all the way through to the ascending 5HT1a pathways fanning out across the cortex (later modulating wake and sleep modes), acting in development as primers for the cells which organise the layered structure of the cortex. This means that modulating neurotransmission can alter primal evolutionary survival modes evolved through the deep developmental roles conserved by these same neurotransmitters since our single-celled ancestors.

 

So the hypothesis connecting psychedelics and the Fermi paradox is that evolution climaxing in the emergence of a dominant cultural species like Homo involves a narrowing of the experiential filters to promote species dominance, which then becomes a critical flaw: species dominance and exploitation precipitate a mass extinction of the diversity of life through habitat destruction and the depletion of resources, in a non-renewable energy-consumption and population burst causing severe climatic change and disrupting planetary habitats on a global scale. This is the Fermi apocalypse catastrophe we are witnessing. 

 

The natural antidote to the destructive Fermi apocalypse is that the kind of climax driven by a dominant cultural species also occurs at peak biodiversity, after a long period of fecund prosperity – in our case since the Cretaceous–Tertiary extinction – and this results in a ‘salting’ of the biosphere with biomolecules which modulate the neuronal activity of a dominant species in such a way as to carry its conscious neurodynamics closer to the climax edge of experiential chaos than its own evolutionary species selection became adapted for. This is then a way the fullness of evolutionary climax comes to re-establish the symbiotic biosphere, because changing the consciousness of the dominant species liberates it from its evolutionary constraints. Because any dominant cultural species seeks to understand what the hell it is doing in the universe, this becomes a catalyst for its own self-discovery.

 

This is not a magic process of divine psychedelic intervention, but it might come to explain the other, good half of the Fermi paradox. A dominant species which discovers its psychedelic species and learns to use their paradoxically disturbing properties then begins to protect and even cherish its biosphere as sacred and integral to the cosmic “design”, for lack of a better word. Thus, instead of proudly announcing its dominant existence to all comers, or failing one of the many triple witching hours of its own cultural, political, economic and environmental instabilities, it instead settles into discovering the abyss of its own conscious experience as a convergent symbiosis with the conscious universe as a whole.

 

This is what I term cosmological symbiosis: settling into a much more cerebral, perennially immortal existence, complemented by the use of renewable technology and adroit strategies to protect the biosphere from astronomical crises such as massive impacts and nearby supernovae, by careful use of its solar-system habitats to avoid putting all its eggs in one basket, and by remaining concealed in the universe at large to avoid predatory exploitation from without. This is the Fermi paradise on the cosmic equator in space-time.

 

In this sense, the notion of union with the cosmic mind is also not a magical process, although it is magical to experience and spiritual to the core. It is simply allowing our own subjective consciousness to reach edge-of-chaos climax while running egoless in neutral, thus enabling a rapprochement between the organismic brain and the foundational dynamics that invoke the mind at large, as an archetypal dynamical system, expressing in the fullest and most resonant way the consciousness of the universe, expressed through its climax biota. The differences from traditional spiritual realisation can also be seen in its reverence for nature as sacred, consistent with the Weltanschauung of Immortality, rather than the mind-sky view of cosmic transcendence over nature, all too anthropomorphically seeking only human transfiguration in enlightenment in the Eastern traditions, where nature is just degenerate sentient beings and the biological and genetic basis, as nature’s embodied sacredness, is set aside.

 

So the healing is this. The evolution of the biosphere has a safety valve for the human egoistical destruction of Gaia. It is just a product of the sheer adventitious fecundity of evolution, not a magical phenomenon. But it has the capacity to bridge the gap between humanity as unnatural evil and humanity as nature itself. This is proven by the fact that the very substances suppressed by Western culture in the 1960s, in an echo of the Inquisition and witch hunts, have now become a cutting-edge treatment for severe depression and the existential nightmare of terminal illness.

 

Now here is the deep paradox. The Western religious tradition, sourcing from Yeshua, is a sacramental tradition. The synoptic gospels claim Jesus instituted the carnivorous sacrificial sacrament of his own flesh and blood at the last supper: “And he took the cup, and when he had given thanks, he gave it to them: and they all drank of it. And he said unto them, This is my blood of the new testament, which is shed for many.” But where did this historical disjunct, so un-Hebrew, arise? We have to look further, to his all too Dionysian miraculous behaviour, supported by the women of Galilee out of their very substance. The entire mission is an apocalyptic presentation of Dionysian tragic theatre, inherited from the sweeping victory of Alexander the Great, who left in his wake a Greek culture that extended across Syria and Jordan to the Nabatean cultural climax of Yeshua’s time, where Dhushara became his manifestation. Dionysus is the god of wine and altered states, supported by the maenads, whose pupils were dilated with belladonna.


Fig 110b: A picture of climax biospheric symbiosis. Cover of UNEP Global Biodiversity Assessment 1995 for which I was a reviewer, with mushroom and jaguar.

 

I didn't invent psilocybe mushrooms; the evolutionary biosphere did. I concealed my use of them for 50 years and went to Jerusalem at the millennium to pronounce the epoch of the Tree of Life because they had told me so a decade and a half before, but now they have been researched scientifically and found to have genuine transformative value. But we all have to accept the fact that the founding religion of Western culture is sacramental, and it is no coincidence that I now stand here as the spokesperson for the Tree of Life holding the sacraments of the biosphere. That is coming out of left field, but that is the healing, because it is the scientific way for egotistical human nature to become healed in time to save life. This doesn't mean everyone should be taking psychedelics. It just means that they are part of the solution, and when they are accepted as such, a new horizon beckons – enough of one, given their newly recognised status, to make a real transformation of human culture for the better.

 

My experience of Brahman in the moksha epiphany that triggered this cosmology was not a false vision. I stand by the Upanishads! I stand likewise by the Gospel of Thomas. I stand also by Maria Sabina and Gordon Wasson. My June experience was made all the more genuine because I am an extremely sane scientist, fully aware that I intentionally took my conscious neurodynamics to the edge of chaos to liberate it from the filters of my own evolutionary confinement. I can then return to the fold and set to work to discover what kind of universe can actually make all this possible in scientific terms. Instead of a raft of religious edicts and doctrine and some mystical poetry, humanity gets a viable cosmology of quantum reality and conscious existence that is verifiable and detailed across an increasingly vast spectrum of disciplines.

 

 



Footnotes

[0] Ephaptic coupling – a form of communication within the nervous system involving the coupling of adjacent nerve fibres caused by the exchange of ions between the cells, or as a result of local electrical fields, but distinct from direct communication systems such as synapses.

[1]   This document is an informational spore,  finer than a mustard seed,  for free circulation in intact form, to illuminate through a creative commons. 

    All quoted research is cited to the original authors, consistent with the principles of academic review in the widest pursuit of knowledge.

[2] perennial – lasting or existing for a long or apparently infinite time; enduring or continually recurring. From Latin perennis ‘lasting the year through’ + -ial (Oxford Languages). The term is used to indicate lasting throughout the lifetime of conscious existence in the universe.

[3] Author Email: dhushara@dhushara.com Web: dhushara.com  This article is open commons CC BY-NC-ND 4.0. Please distribute.

[4]  efficacy – the ability to produce a desired or intended result. Latin: efficere accomplish.

[5] The cosmic web has also been raised as a possible source of fractal complexity (Vazza & Feletti 2020).

[6] The approach of SED is guided by the hypothesis of the existence of the (random) zero-point radiation field, ZPF. This rather more elaborate approach goes through a statistical evolution equation in phase space, to arrive at a description in x-space, in which the dissipative and diffusive terms are seen to bring about a definitive departure from the classical Hamiltonian dynamics.

[7] I am not suggesting that everyone should take these agents to achieve such states, but just that they need to be respected as having these potentials for existential insight by society as a whole. Neither am I recommending that people take them without expert guidance, at least at the outset. Neither am I suggesting they be taken by minors, until the age of full adult legal consent.

[8] Veridical – truthful, coinciding with reality.   Etym. "speaking truth" Latin veridicus "truth-telling, truthful," from verum "truth,"

     neuter of verus "true" + dic-, stem of dicere "to speak".

[9] Natural selection has shaped our perceptions to be, in the typical case, accurate depictions of reality, especially of those aspects of reality that are critical for our survival. “People could not orient themselves to their environments unless the environmental information reaching them through the various sense organs offered a perception of space that corresponds to their physical ‘reality’. Such perception is called veridical perception – the direct perception of stimuli as they exist. Veridical perception also causes a person to experience changing stimuli as if they were stable: even though the sensory image of an approaching tiger grows larger, for example, one tends to perceive that the animal’s size remains unchanged. One perceives objects in the environment as having relatively constant characteristics despite considerable variations in stimulus conditions” (Britannica, APA).

[10] empirical – based on, concerned with, or verifiable by observation or experience rather than theory or pure logic. Etym. Greek empeirikos, from empeiria ‘experience’. Observational empiricism is appropriate for verifiable physical investigation and experiential empiricism for veridical conscious affirmation.

[11] efficacy – the ability to produce a desired or intended result.

[12] This use of Kali is as a demon distinct from the Goddess  Kālī, who like Brahman is conceived of as “ultimate reality”.

[13] animism – the belief that all things – animals, plants, rocks, rivers, weather systems etc. – possess a distinct spiritual essence, as animated and alive.

[14] “Humanity is doomed”: All countries 56%, Australia 50%, Brazil 67%, France 48%, Finland 43%, India 74%, Nigeria 42%, Philippines 73%, Portugal 62%, UK 51%, USA 46%

[15] dharma – the eternal and inherent nature of reality, regarded in Hinduism as a cosmic law and in Buddhism as the nature of reality regarded as a universal truth.

[16] Hieros gamos or Hierogamy (Greek ἱερὸς γάμος, ἱερογαμία "holy marriage") is a sacred marriage that plays out between a god and a goddess, especially when enacted in a symbolic ritual where human participants represent the deities

[17] Galileo, despite being condemned by the Inquisition for his science, was still a devoted Catholic and a traditional patriarch. He never married his children's mother and deemed his daughters unmarriageable, and soon after Virginia's thirteenth birthday he placed both girls at the Convent of San Matteo, where they lived the rest of their lives in poverty and seclusion. By contrast, Galileo's son Vincenzio, who was born in Padua like his two sisters Virginia and Livia, was named after his grandfather, and after his mother's death his birth was legitimised by the Grand Duke of Tuscany. Virginia was Galileo's first child, born in Padua, Italy, the same year that the Dominican friar Giordano Bruno was burned at the stake in Rome for insisting that the Earth travelled around the Sun instead of remaining motionless at the centre of the universe. Although none of Galileo's letters to her are known to have survived, 120 of Maria Celeste's exist. These letters, written from 1623 to 1634, depict a woman of incredible brilliance, industry, sensibility and a deep love for her father (Sobel 1999).

[18] veridical – coinciding with reality (Oxford Languages). Mid 17th century: from Latin veridicus (from verus ‘true’ + dicere ‘say’) + -al.

In psychology – of or relating to revelations in dreams, hallucinations, etc, that appear to be confirmed by subsequent events

[19] Vitalism is the belief that "living organisms are fundamentally different from non-living entities because they contain some non-physical element or are governed by different principles than are inanimate things”. Where vitalism explicitly invokes a vital principle, that element is often referred to as the "vital spark", "energy" or "élan vital", the vital force or impulse of life, a creative principle held by Bergson to be immanent in all organisms and responsible for evolution, which some equate with the soul.

[20] Arthur Schopenhauer concluded that the inner reality of all material appearances is Will. Where Immanuel Kant had concluded that ultimate reality - the "thing-in-itself" (Ding an sich) - lay beyond being experienced, Schopenhauer postulated that the ultimate reality is one universal will.

[21] intentionality – the fact of being deliberate or purposive; the property of mental states (e.g. thoughts, beliefs, desires, hopes) which consists in their being directed towards some object or state of affairs. Intentionality is chosen rather than causality to include the effect of will complementing physical causes.

[22] Do not take too much stock in Castaneda’s writings. His accounts of mushrooms and peyote are unreliable, and he gathered a cult following of a troupe of women writers, who appear to have died or committed suicide in Death Valley after he died.

[23] Weltanschauung  – a particular philosophy or view of life; the world view of an individual or group: welt "world" (see world) + anschauung "perception" (related to English show). William James (1868)

[24] This article and the complementary one “Natty Dread and Planetary Resplendence” were co-conceived out of a quantum change experience evoked by psilocybe mushrooms. Taken together they inform a sacramental paradigm shift towards planetary survival.

[25] psychedelic – “mind-manifesting”: psychē (ψυχή, “soul”) + dēloun (δηλοῦν, “to make visible, to reveal”), as opposed to hallucinogenic – inducing hallucinations – and psychotomimetic (psycho- mind + mīmētikós, imitative) – mimicking psychotic behaviour/personality.

[26] Feynman notes in his Nobel address: “I received a telephone call one day at the graduate college at Princeton from Professor Wheeler, in which he said, ‘Feynman, I know why all electrons have the same charge and the same mass.’ ‘Why?’ ‘Because, they are all the same electron!’ … Suppose that the world lines which we were ordinarily considering before in time and space, instead of only going up in time, were a tremendous knot, and then, when we cut through the knot by the plane corresponding to a fixed time, we would see many, many world lines and that would represent many electrons, except for one thing. If in one section this is an ordinary electron world line, in the section in which it reversed itself and is coming back from the future we have the wrong sign to the proper time – to the proper four velocities – and that’s equivalent to changing the sign of the charge, and, therefore, that part of a path would act like a positron. ‘But, Professor,’ I said, ‘there aren’t as many positrons as electrons.’” This became the basis of his representation of positrons as electron holes and for the entire Feynman diagram approach to quantum field theories.

[27] David Bohm’s (1952) pilot wave theory posits a real position and momentum for a particle such as a photon guided by a particular non-local form of pilot wave. It illustrates a form of hidden variable theory which does not require collapse of the wave function, but the predictions hold only for a situation where no new particles are created with new degrees of freedom during the trajectory. Its interpretation is thus inconsistent with the Feynman approach, where the transition probability includes all paths and all possible virtual particles created and annihilated during the transition. To the extent that its predictions coincide with those of quantum mechanics, phenomena from weak quantum measurement (Kocsis et al. 2011) to surreal Bohmian trajectories (Mahler et al. 2016) can also be interpreted correctly by entanglement using standard quantum mechanics.

[28] The approach of SED is guided by the hypothesis of the existence of the (random) zero-point radiation field, ZPF. This rather more elaborate approach goes through a statistical evolution equation in phase space, to arrive at a description in x-space, in which the dissipative and diffusive terms are seen to bring about a definitive departure from the classical Hamiltonian dynamics.

[29] moksha – derived from the Sanskrit word muc (to free), the term moksha literally means freedom from samsara (existential illusion). This concept of liberation or release is shared by a wide spectrum of religious traditions, including Hinduism, Buddhism, and Jainism.

[30] Synesthesia: a perceptual phenomenon in which stimulation in one sensory or cognitive mode leads to experiences in a second mode.

[31] This follows in line with the filter theories of Henri Bergson Matière et Mémoire (1896) and William James  Human Immortality. (1898) 

[32] Bob Jesse and Bill Richards are co-authors of Roland Griffith’s 2006, 2008 mystical experiences studies.

[33] entheogen – “god (theos) within”, a psychoactive substance that induces alterations in perception, mood, consciousness, cognition, or behaviour, for the purposes of engendering spiritual development or otherwise in sacred contexts. (Wikipedia)

[34] holotropic “wholeness seeking” – states which aim towards wholeness and the totality of existence – e.g. Brahman–atman.

[35] epistemic humility – a posture of scientific observation rooted in the recognition that (a) knowledge of the world is always interpreted, structured, and filtered by the observer, and (b) scientific pronouncements must be built on the recognition of observation's inability to grasp the world in itself.

[36] science – “what is known, knowledge (of something) acquired by study; information;” also “assurance of knowledge, certitude, certainty,” from Old French science “knowledge, learning, application; corpus of human knowledge” (12c.), from Latin scientia “knowledge, a knowing; expertness,” from sciens (genitive scientis) “intelligent, skilled,” present participle of scire “to know,” probably originally “to separate one thing from another, to distinguish,” related to scindere “to cut, divide” (from PIE root *skei- “to cut, split;” source also of Greek skhizein “to split, rend, cleave,” Gothic skaidan, Old English sceadan “to divide, separate”). Etymonline

[37] empirical – based on, concerned with, or verifiable by observation or experience rather than theory or pure logic. Etym. empiric via Latin from Greek empeirikos, from empeiria ‘experience’, from empeiros ‘skilled’ (based on peira ‘trial, experiment’). experimental – late 15th century ‘having personal experience’, also ‘experienced, observed’: from Latin experimentum ‘practical experience’.

[38] Organisms are commonly regarded as: (1) phenotypes that interact with their environments, that survive, reproduce, and pass on genes, and (2) the entities that are produced by genes. Hull (personal communication) argues that his notion of ‘interactor’ is significantly different from Dawkins’ ‘vehicle’. As in (1), Hull regards his concept as a populational notion (the population of entities that directly interact with the external environment), while Dawkins’ ‘vehicle’, as in (2), is regarded as more embryological (vehicles are the entities that replicators produce).

[39] heredity (n.) "the passing on of physical or mental characteristics genetically from one generation to another" Etym. 1530s, "inheritance, succession," from French hérédité, from Old French eredite "inheritance, legacy" (12c.), from Latin hereditatem (nominative hereditas) "heirship, inheritance, an inheritance, condition of being an heir." Legal sense of "inheritable quality or character" first recorded 1784; the modern biological sense "transmission of qualities from parents to offspring" seems to be found first in 1863, introduced by Herbert Spencer.

[40] It has been claimed that concealed estrus is not an evolved trait but an underlying primate condition  and that overt estrus is an evolved trait due to sexual and social selection e.g. in chimps and bonobos (Laland & Brown 2002 13), as the majority of primate species, including most apes, do not reveal their time of ovulation.

[41] Anneken Hendriks was an Anabaptist of Friesland, living in Amsterdam. Through treachery she was taken prisoner by the city officials. Because she held fast to her faith, she was severely tortured on 27 October 1571, with the intent of learning from her the names of other Mennonites. But even this ill treatment could not make her recant, and consequently she was put to death on the town square (Dam) on 10 November 1571. The execution took place in an unusually cruel manner. Anneken was tied to a ladder; her mouth was filled with gunpowder, and in this condition she was carried from the city hall to the ignited stake and thrown into the  flames. She was fifty-three years old and an ordinary woman who could neither read nor write. In her sentence she was condemned because "she was married according to Mennonite custom, and at night in a country house." The Martyrs Mirror records that there is a song concerning her, but gives no further information, apparently meaning the song found in the Dutch hymn book Veelder-hande Liedekens (1569), which begins "Ick moetu nu gaen verclaren, Watter t'Amsterdam is geschiet" (I must now declare to you, What took place at Amsterdam).

[42] The approach of SED is guided by the hypothesis of the existence of the (random) zero-point radiation field, ZPF. This rather more elaborate approach goes through a statistical evolution equation in phase space, to arrive at a description in x-space, in which the dissipative and diffusive terms are seen to bring about a definitive departure from the classical Hamiltonian dynamics.

[43] “It is interesting to note that the surrealist movement in art claimed that there was more to reality than mere outward manifestations. There was a deeper reality (literally surreal means super-reality) that lay behind outward appearances. When the word surreal is used with its intended meaning, then surreal trajectories is the correct term to describe them! Unfortunately [Englert et al.] use the term in a pejorative sense” (Hiley et al.)

[44] idiosyncrasy – a mode of behaviour or way of thought peculiar to an individual. Greek idiosunkrasia, from idios ‘own, private’ + sun ‘with’ + krasis ‘mixture’.

[45] veridical – coinciding with reality (Oxford Languages). Mid 17th century: from Latin veridicus (from verus ‘true’ + dicere ‘say’) + -al.

[46] ergodic – relating to, or denoting, (e.g. chaotic) systems or processes with the property that, given sufficient time, they include or impinge on all points in a given space and can be represented statistically by a reasonably large selection of points.

[47] satori – sudden enlightenment (Oxford Languages). See also subitism, derived from the French illumination subite (sudden awakening).

[48] singulare tantum – a noun which appears only in the singular form; objects which may in principle be counted but are referred to as one.

[49] In physical cosmology, the Copernican principle states that humans, on the Earth or in the Solar System, are not privileged observers of the universe. (Wikipedia)

[50] Large Hadron Collider responsible for discovering the Higgs boson completing the standard model of physics, fig 29.

[51] animism (Latin: anima, 'breath, spirit, life') 

[52] It is empirically true that global heating “punishes” humanity in clear functional terms, but not as a moral punishment in the religious sense. Gaia may even have full agency in a sense we don’t yet appreciate. “Why not?”, as physicist Brian Josephson commented to me, citing James Lovelock. The question we have to ask is this: is a tornado less alive than a prokaryote? A prokaryote is tightly controlled as a genetic process and likely not conscious, but a thunderstorm is in a sense more alive, in the way our brains are dynamically. If alive means primitive subjectivity, then a thunderstorm should be accepted as alive in that sense. Any physical system capable of unstable autonomous dynamics is a candidate. Attributing agency in this way might have a deeper basis in consciousness understanding quantum reality from personal experience. What kind of form a tornado’s or Gaia’s secret life might take is no easier to estimate than the putative “free will” of a quantum. But it IS an empirical question!

[53] perdido – lost, done for, ruined, defeated or about to be killed etc. (Cambridge Dictionary)

[54] Advaita Vedanta (Sanskrit: अद्वैत वेदान्त –"non-duality"), propounded by Gaudapada (7th century) and Adi Shankara (8th century), espouses non-dualism and monism. Brahman is held to be the sole unchanging metaphysical reality and identical to the individual Atman. The physical world, on the other hand, is always-changing empirical Maya. The absolute and infinite Atman-Brahman is realized by a process of negating everything relative, finite, empirical and changing. All souls and their existence across space and time are considered to be the same oneness. Spiritual liberation in Advaita is the full comprehension and realization of oneness, that one's unchanging Atman (soul) is the same as the Atman in everyone else, as well as being identical to Brahman.

[55] Brihat – Bṛhat (बृहत्) a. (-tī f.) [bṛh-ati] (1) Large, great, big, bulky; (2) Wide, broad, extensive, far-extended; (3) Vast, ample, abundant; (4) Strong, powerful; (5) Long, tall; (6) Full-grown; (7) Compact, dense; (8) Eldest or oldest; (9) Bright. Aranyaka (āraṇyaka) – “produced, born, relating to a forest” or rather “belonging to the wilderness”; derived from Araṇya (अरण्य), which means “wilderness”.

[56] velada – soirée, nighttime meeting; literally a shamanic mushroom vigil. Spanish velar, from Latin vigilō (“to watch, guard”), as in vigilant.

[57] This article and the complementary one “Natural Entheogens and Cosmological Symbiosis: Solving the Central Enigma of Existential Cosmology” were co-conceived out of a quantum change experience. Taken together they inform a sacramental paradigm shift towards planetary survival.

[58] eschatology – the part of theology concerned with death, judgement, and the final destiny of the soul and of humankind. Greek eskhatos ‘last’ + -logy.

[59] Inanna, the Queen of Heaven, in her descent into Hell was stripped one by one of her seven veils by the Galla of her sister Ereshkigal’s domain, before returning to let them sacrifice her beloved husband and partner Dumuzi for usurping the sovereign’s powers in her absence, only to have him resurrected and sacrificed seasonally as a god of fertility.

[60]  “Matthew collected the logia in the Hebrew dialect and each one interpreted them as best he could."

[61] Matthew’s Greek uses ἐπιθυμέω (epithūméō) – to set one’s heart on a thing, desire, covet. Passion has a more benevolent sexual meaning involving infatuation and love, but the Greek πάσχω (páskhō) is “to suffer, to be acted on, to undergo, experience”, rather than an act or desire.

[62]  Titled in recognition of The Dialogue of the Saviour (Robinson 1990).

[63] Koine Greek – ‘Common Greek’, also known as Alexandrian dialect, common Attic, Hellenistic or Biblical Greek – was the common supra-regional form of Greek spoken and written during the Hellenistic period, the Roman Empire and the early Byzantine Empire.

[64] Akiva (50-135 CE) was instrumental in drawing up the canon of the Tanakh. He stoutly defended the canonicity of the Song of Songs, (as the Holy of Holies) and of Esther, despite it being an allegory of Ishtar. He was executed by the Romans after the Bar Kokhba revolt. Akiva said of Bar Kochbah 'This is the King Messiah'.  Johanan ben Torta retorted: "Akiva, Grass shall grow from your cheeks and yet the son of David shall not appear”.

[65] I noticed while writing this that the expression comes from Matt 13:35, claiming Yeshua is revealing his deepest insights, right after he pronounces the parable of the mustard seed, which is also quoted in Thomas 20. The parable says the Kingdom of Heaven (not the Christian religion) is effectively the seed of the greatest of herbs, the Tree of Life, in whose branches the birds lodge.

[66] Hieros gamos or Hierogamy (Greek ἱερὸς γάμος, ἱερογαμία "holy marriage") is a sacred marriage that plays out between a god and a goddess, especially when enacted in a symbolic ritual where human participants represent the deities.

[67] The grandson of one of the founders of Tel Aviv.

[68] shekhinah שכינה means "dwelling" or "settling" and denotes the indwelling divine presence of God, manifest in the tent of Sarah; also linked to ruach ha-kodesh רוח הקודש, the divine influence of God over the universe or living creatures, i.e. the Holy Spirit.

[69] Weltanschauung – a particular philosophy or view of life; the world view of an individual or group: Welt "world" + Anschauung "perception" (related to English show). First used in English by William James (1868).

[70] I have always defended the Song of Songs as the most fertile expression of the Sacred Reunion but for this, a Zionist woman from Tsvat threatened to report me to Mossad as a Gentile “thief in the night”. Our dialogue is recounted in my song Black Rose video.

[71] In Christianity, the Holy Ghost, or Spirit, is the ultimate reality not to blaspheme against: “Jesus said: He who blasphemes against the Father will be forgiven, and he who blasphemes against the Son will be forgiven; but he who blasphemes against the Holy Spirit will not be forgiven, either on earth or in heaven” (Thom 44, as in Mark 3:28-29, Luke 12:10, Matt 12:31-32, with the exception of the Father). The grammatical gender of the word for "spirit" is feminine in Hebrew (רוּחַ, rūaḥ), neuter in Greek (πνεῦμα, pneûma) and masculine in Latin (spiritus). The neuter Greek πνεῦμα is used in the Septuagint to translate the Hebrew רוּחַ. The Holy Spirit was equated with the feminine Wisdom of God by two early Church fathers.

[72] efficacy – the ability to produce a desired or intended result.

[73] Albedo is a quantity that indicates how well a surface reflects solar energy. ... the "whiteness" of a surface, with 0 meaning black and 1 meaning white. A value of 0 means the surface is a "perfect absorber" that absorbs all incoming energy.

[75] Weltanschauung is used as an English word, from the German, because the English worldview is too vague and not comprehensive enough. (From anschauen = to look at, rather with the meaning "to take a good look at"; Schau = show, display, as opposed to blicken = to glance, or aussehen = to appear, from sehen = to see.) Primarily it means the way a person looks at the phenomenon of life as a whole. Some people (particularly those who have not lived very long) have not formed any broad (inclusive, even "sophisticated") view of life. Others consider a large number of factors before forming their overall view — maybe in their seventies — of the phenomenon of human existence. Typically a person's Weltanschauung would include their philosophic, moral, and religious conclusions — including e.g. the duality of spirit and matter — and perhaps their conclusions about the origins of the universe and the development of life.

[76] According to Hesiod, when Prometheus (fore-sight) stole fire from heaven, Zeus, the king of the gods, took vengeance by presenting Pandora to Prometheus' brother Epimetheus (hind-sight). Either Pandora or Epimetheus opened the jar left in her care, releasing into the world sickness, death and many other unspecified evils — or, as the other story goes, the lost blessings of the gods. Though she hastened to close the container, only one thing was left behind, usually translated as Hope.

[77] According to Arthur Schopenhauer, it is the laws of nature that arise from a transpersonal will, not the will from the laws of nature. Felt volitional states are the irreducible foundation of both mind and world. For Schopenhauer the inner essence of everything is conscious volition, that is, will. Nature is dynamic because its underlying volitional states provide the impetus required for events to unfold. Even in the absence of all self-perception mediated by the sense organs, we would still experience our own endogenous, felt volition. Will is indeed free because it is all there ultimately is.

[78] Spiritualitas (Latin), from the noun spiritus, 'the breath of life', also psyche, or soul. Traditionally, spirituality referred to a religious process of re-formation which "aims to recover the original shape of man", oriented at "the image of God" as exemplified by the founders and sacred texts of the religions of the world; within early Christianity it referred to a life oriented toward the Holy Spirit, and it broadened during the Late Middle Ages to include mental aspects of life. In modern times, the term broadened to refer to a wider range of experience, including esoteric and religious traditions. Modern usages refer to a subjective experience of a sacred dimension and the "deepest values and meanings by which people live", often in a context separate from organised religious institutions. It may involve belief in a supernatural realm beyond the ordinarily observable world, personal growth, a quest for ultimate or sacred meaning, religious experience, or an encounter with one's "inner dimension".

[80] Esther (Ishtar), the beautiful Jewish wife of the Persian king Ahasuerus (Xerxes I), and her cousin Mordecai (Marduk) persuade the king to retract an order for the general annihilation of Jews throughout the empire. The massacre had been plotted by the king's chief minister, Haman, and the date decided by casting lots (purim). Instead, Haman was hanged on the gallows he built for Mordecai. When word of the planned massacre reached her, Esther, beloved Jewish queen of Ahasuerus and adopted daughter of Mordecai, risked her life by going uninvited to the king to suggest a banquet that Haman would attend: Now it came to pass on the third day, that Esther put on her royal apparel, and stood in the inner court of the king's house, over against the king's house: and the king sat upon his royal throne in the royal house, over against the gate of the house. And it was so, when the king saw Esther the queen standing in the court, that she obtained favour in his sight: and the king held out to Esther the golden sceptre that was in his hand. So Esther drew near, and touched the top of the sceptre. Then said the king unto her, What wilt thou, queen Esther? and what is thy request? it shall be even given thee to the half of the kingdom. And the king said unto Esther at the banquet of wine, What is thy petition? and it shall be granted thee: and what is thy request? even to the half of the kingdom it shall be performed. Then answered Esther, and said, My petition and my request is; If I have found favour in the sight of the king, and if it please the king to grant my petition, and to perform my request, let the king and Haman come to the banquet that I shall prepare for them, and I will do to morrow as the king hath said.
Then Esther the queen answered and said, If I have found favour in thy sight, O king, and if it please the king, let my life be given me at my petition, and my people at my request: For we are sold, I and my people, to be destroyed, to be slain, and to perish. Then the king Ahasuerus answered and said unto Esther the queen, Who is he, and where is he, that durst presume in his heart to do so? And Esther said, The adversary and enemy is this wicked Haman. Then Haman was afraid before the king and the queen. And the king arising from the banquet of wine in his wrath went into the palace garden: and Haman stood up to make request for his life to Esther the queen; for he saw that there was evil determined against him by the king. Then the king returned out of the palace garden into the place of the banquet of wine; and Haman was fallen upon the bed whereon Esther was. Then said the king, Will he force the queen also before me in the house? As the word went out of the king's mouth, they covered Haman's face. And Harbonah, one of the chamberlains, said before the king, Behold also, the gallows fifty cubits high, which Haman had made for Mordecai, who had spoken good for the king, standeth in the house of Haman. Then the king said, Hang him thereon. So they hanged Haman on the gallows that he had prepared for Mordecai.