At Sina Habibian’s recommendation, I recently watched Max Hodak’s talk “Towards Consciousness Engineering.” It is thought-provoking throughout. But starting around 46:16, he dives into a self-admittedly-speculative “final leap” that I’d like to first do my best to explain, and then ask some questions about.

I’m not certain I have totally wrapped my head around all this, so I’m publishing with a sort of Cunningham’s Law attitude, in hopes that errors will be pointed out and helpful resources sent my way.

Where we’re going to go

First we’re going to break down the core pieces of Hodak’s proposed field theory of consciousness, digging into the concepts of dynamical systems, free-energy principles, and fiber bundles. Then we’ll explore the most profound output of this idea: a physical mechanism for how the mind could directly influence the brain.

But then, I’ve got my own speculative leap to build on his: what if the brain isn’t a generator of consciousness, but a receiver for a universal field? We’ll look at how this extension might fit into the physical theory, and how to account for both typical feelings of individual separateness and moments of shared experience.

The three components of the field

Hodak suggests that we can pull together three interesting components related to consciousness, which he defines as “the thing that is modulated by anesthetics and psychedelics.” (I may have a semantic disagreement with this operational definition, which we’ll address later, but let’s run with it for now for consistency as we discuss what he is proposing.) These three components are:

First: a dynamical system model of neural activity (including a spike-based definition of neuron state, a definition of neural trajectory which follows, a neural flow field which follows from that, and evidence that initial conditions influence neural trajectory; see Vyas et al., Computation Through Neural Population Dynamics (2020)).
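As a toy illustration of these dynamical-systems ideas (emphatically not the spiking models Vyas et al. analyze; the two-neuron rate network, its weights, and the step sizes below are invented for this sketch), here is what a neural flow field and initial-condition-dependent trajectories look like in code:

```python
import math

# Toy 2-neuron rate model: dr/dt = -r + tanh(W @ r).
# (Illustrative only -- not the spiking models from the literature.)
W = [[0.0, 1.2], [-1.2, 0.0]]  # rotational coupling, giving a curved flow field

def flow(r):
    """The neural flow field: the velocity of the state at point r."""
    return [-r[i] + math.tanh(sum(W[i][j] * r[j] for j in range(2)))
            for i in range(2)]

def trajectory(r0, dt=0.01, steps=1000):
    """Integrate the flow from initial condition r0 with simple Euler steps."""
    r = list(r0)
    path = [tuple(r)]
    for _ in range(steps):
        v = flow(r)
        r = [r[i] + dt * v[i] for i in range(2)]
        path.append(tuple(r))
    return path

# Two nearby initial conditions trace out distinct trajectories:
a = trajectory([0.5, 0.0])
b = trajectory([0.6, 0.0])
```

Both trajectories follow the same flow field, but because they start at different points they trace different paths through state space, which is the sense in which initial conditions shape neural trajectories.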

Second: we can apply the free-energy principle to that dynamical system to get a measure of variational free energy in that system, with which we can get a Lagrangian.

This is — as I understand it — making some inferential leaps, but I’m generally comfortable running with it. To draw this out a step further: Karl Friston created the idea of the free-energy principle. Here’s the original 2010 paper, and a 2023 followup on “The free energy principle made simpler but not too simple” (emphasis on the “not too simple”).

In some loose sense, you can think of this as the principle of least action as applied to sentient behavior (in practice they are creating different bounds, but the analogy is helpful). The principle of least action (see: excellent Feynman lecture on the matter) suggests — in layman terms — that a physical system’s path between two points is the one that minimizes the “cost” of the path in terms of energy and time. That is, when you roll a ball down a hill, it doesn’t go down a bit and then back up and then down again. It just goes straight down, “the easiest way.”
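To make the ball-down-the-hill intuition concrete, here is a minimal numerical sketch (a free particle on discretized paths; all values are made up for illustration): among paths with the same endpoints, the straight one has the smallest action.

```python
import math

def action(path, dt=0.01, m=1.0):
    """Discretized action for a free particle: sum of (1/2) m v^2 * dt."""
    return sum(0.5 * m * ((path[i + 1] - path[i]) / dt) ** 2 * dt
               for i in range(len(path) - 1))

n = 100
# The "easy" path: x goes from 0 to 1 at constant speed.
straight = [i / n for i in range(n + 1)]
# A detour with the same endpoints: down a bit, back up, down again.
wiggly = [i / n + 0.1 * math.sin(4 * math.pi * i / n) for i in range(n + 1)]

# Same start, same end...
assert straight[0] == wiggly[0] and abs(straight[-1] - wiggly[-1]) < 1e-9
# ...but the straight path costs less action (checked in the test below).
```

The wiggly path visits the same endpoints but spends extra kinetic energy getting there, so its accumulated action is strictly larger.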

Friston’s free-energy principle looks at sentient systems (also known as “adaptive agents”), and we can analogize the “cost” or “action” of the principle of least action (applied to physical systems) to “surprise” in the free-energy principle (applied to sentient systems like our brains). Surprise effectively means a deviation from the “natural” or easiest path our brains could take. Doing something hard or perceiving something that is not in line with the easiest thing to perceive is the sort of thing that qualifies as surprise. Our brains are effectively prediction machines, and they’re doing their best to minimize prediction error.

That is — again, in layman’s terms — the free-energy principle says that our brains “do the least work possible” (where work can come from either taking action, or perceiving inputs) to get from point A to point B, based on whatever the current state is of the system they are in.
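A drastically simplified caricature of this idea in code (a single Gaussian belief and a fixed observation, both invented here): the agent’s “surprise” is the negative log-probability of what it observes under its current belief, and nudging the belief toward the observation is exactly the “least work” that drives surprise down.

```python
import math

def surprise(obs, mu, sigma=1.0):
    """Negative log-probability of an observation under a Gaussian belief."""
    return (0.5 * math.log(2 * math.pi * sigma ** 2)
            + (obs - mu) ** 2 / (2 * sigma ** 2))

# An agent holding belief mu updates it to reduce surprise -- a gradient
# descent on prediction error, and a drastic caricature of active inference.
mu = 0.0            # the brain's current prediction
observation = 3.0   # what the world actually delivers
history = [surprise(observation, mu)]
for _ in range(50):
    mu += 0.1 * (observation - mu)  # step along -d(surprise)/d(mu)
    history.append(surprise(observation, mu))
```

Each update shrinks the prediction error, so the recorded surprise falls monotonically; in the full free-energy story the agent can also act on the world to make observations match predictions, which this sketch omits.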

Now, the inferential leaps as I understand them are that the free-energy principle does make certain assumptions in order to accomplish the necessary math. Those assumptions are probably fine to make for our purposes and are perhaps beyond the scope of this post, but it’s worth noting that they exist (namely: that the dynamics of the system even have a non-equilibrium steady state density (NESS, in FEP parlance), that the state partition admits a Markov blanket, and that the system has white, Gaussian, zero mean noise).

But, overall, let’s run with it. We’ll say the neural dynamical system meets the necessary criteria, and so we can take that description of consciousness and apply Friston’s free-energy principle to get a Bayesian mechanics model including a measure of variational free-energy.

Once you have equations that represent the free energy in the system in various states, you can then do some math to find the minimization of that free energy, which gives you a Lagrangian. The Lagrangian effectively defines the “equations of motion” for the system, or a definition of the physics that applies to that system.
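For reference, this is the standard relationship between a Lagrangian and equations of motion: given \(L(q, \dot q, t)\), the paths that make the action stationary satisfy the Euler-Lagrange equations. (Whether the free-energy Lagrangian takes exactly this classical form is part of the speculation.)

```latex
% Action over a path q(t):
S[q] = \int L(q, \dot{q}, t)\, dt
% Stationary paths satisfy the Euler-Lagrange equations:
\frac{d}{dt}\!\left(\frac{\partial L}{\partial \dot{q}_i}\right)
  - \frac{\partial L}{\partial q_i} = 0
```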

This matters because we’ve now moved from being able to describe the system’s current state to having a tool to predict its ongoing evolution, just like we do with planets or particles.

So we end up with a Lagrangian for the system of the brain — a description of its physics. Nice.

Third: per earlier discussion in his talk, Hodak says we have a fiber bundle which defines a set of symmetries enforced by this system. This is the part that sounds most like abstract math, but the core idea is surprisingly intuitive.

A “fiber bundle” is a way of describing a space that has extra properties attached at every point. Think of the Earth’s surface, flattened on a table. At every single point, you could attach the current temperature in Fahrenheit at that point as a vertical line whose length encodes the value. The collection of all those temperature lines, one for every point on the globe, forms a fiber bundle.
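Here is the temperature analogy as a tiny code sketch (the grid and the temperature rule are made up): the base space is a set of map points, the fiber over each point is the space of possible temperatures, and choosing one value per point (a “section” of the bundle) gives you the temperature field itself.

```python
# Toy "fiber bundle": the base space is a coarse grid of (lat, lon) points;
# the fiber over each point is the space of possible temperatures.
base_space = [(lat, lon) for lat in range(-90, 91, 45)
                         for lon in range(-180, 181, 90)]

def section(point):
    """Pick one fiber value (a made-up temperature) for each base point."""
    lat, lon = point
    return 90 - abs(lat)  # warmer at the equator, in this toy model

# A section of the bundle: exactly one attached value per base point.
bundle_section = {p: section(p) for p in base_space}
```

The bundle is the whole collection of (point, possible-values) pairs; any particular assignment of one value per point, like `bundle_section`, is one section of it.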

In Hodak’s theory, the “base space” is the landscape of possible brain states. The “fibers” are the different sensory modalities — vision, hearing, touch, etc.

The “symmetries” are the rules that allow you to transform the content within a fiber without breaking the overall structure. For example, Hodak suggests that we have the “freedom to relabel qualia.” You could, in theory, swap every experience of “red” with “blue” consistently across your entire experience. As long as the relationships between colors remain the same (the newly-blue stop sign is still a different color from the newly-red sky), the underlying physical system wouldn’t know the difference.

This freedom to relabel is a type of symmetry known as a “gauge symmetry” (incidentally discussed further in prior post Building intuition for electromagnetism’s symmetric origins). Basically, we are given freedom to describe the field at different points without changing the underlying physics. Moving around doesn’t make the blue and red dichotomy any different.
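The “relabel qualia” symmetry can be made concrete in a few lines (the scene and its labels are invented here): apply one consistent swap of color labels everywhere, and every same-vs-different relation between objects survives untouched.

```python
# A toy "experience": objects and the color quale attached to each.
scene = {"stop_sign": "red", "sky": "blue", "grass": "green"}

def relations(s):
    """The observable structure: which pairs of objects share a quale."""
    return {(a, b): s[a] == s[b] for a in s for b in s}

# A global, consistent relabeling of qualia (the gauge transformation):
swap = {"red": "blue", "blue": "red", "green": "green"}
relabeled = {obj: swap[quale] for obj, quale in scene.items()}

# The labels changed, but every same/different relation is preserved:
assert relabeled != scene
assert relations(relabeled) == relations(scene)
```

An inconsistent relabeling (swapping red for blue on the stop sign but not in the sky) would break `relations`, which is why the freedom is only up to a globally consistent transformation.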

But in general, Hodak suggests that the constraining symmetries defined by the consciousness fiber bundle govern how different sensory experiences (the fibers) can be combined or “bound” together into a unified conscious moment. The fiber bundle formalism provides the mathematical language to describe this binding process.

Can the mind affect the brain?

So, net of all that, we have three things:

  1. A definition of a dynamical system of neurons, each with states, and when combined, an ability to model how they all flow together. This “stuff” all combines to form an overall state of the brain that results in a subjective experience.
  2. A definition of the “physics” or “laws” of that system, as discovered by minimizing its free energy (or “surprise”), corralling this subjective experience into the state that best matches reality.
  3. A set of symmetries enforced by the system that govern how the resulting sensory experiences can be combined (or not). These symmetries allow the color, shape, and sound of a bouncing ball to merge together into one “experience,” addressing what is canonically called the “binding problem.”

Once you have these three things — a dynamical field, a Lagrangian that gives you its equations of motion, and a set of gauge symmetries that constrain its structure — you have, as Hodak puts it, “all of the machinery required for a classical field theory.”

Now, as he says, just because these components exist is not evidence that such a field theory is real and will accurately describe any particular behavior in the world. I could come up with an arbitrary combination of a made-up dynamical system, its Lagrangian, and a set of symmetries and create “a classical field theory” that has no bearing on reality. But he is supposing that it could be the case, and moreover, he comes with some interesting ideas pointing in the same direction (particularly around the convergence of vector embedding representations to possibly-shared platonic forms; see Huh et al., The Platonic Representation Hypothesis (2024) and Jha et al., Harnessing the Universal Geometry of Embeddings (2025)).

A classical field theory is something like electromagnetism or gravity — a prediction of how a “field” (again: think the electromagnetic or gravitational fields) interacts with matter.

So what Hodak is saying here is: perhaps this could all point at a physical theory of consciousness. Going back to his original definition of the term: perhaps there are ways that the interaction of certain “matter” such as anesthetics or psychedelics with “consciousness” could be formally modeled using this approach. Perhaps the laws governing your subjective experience might be just as fundamental and model-able as the laws governing electromagnetism!

In many theories, consciousness is an epiphenomenon — that is, a secondary effect with no bearing on its source (think: heat coming off a computer — it’s real, but it doesn’t do anything to the computation causing it). But if consciousness is a field, that means it is a fundamental physical entity, not just a byproduct — and can have a causally potent effect on the universe.

We usually intuit that brain activity causes conscious thoughts. But if consciousness is itself a field, that field can exert influence back on the matter that creates it. It would provide a physical basis for the idea that the state of your conscious field could influence the firing of your own neurons, and thus a physical mechanism for how your subjective mind could genuinely cause changes in your physical brain.

The perhaps-scientifically-radical implication here is a physical mechanism for that very “downward causation.” We’ve long wondered how the “non-physical ‘mind’” could possibly influence the “physical brain.” This model sidesteps the question by rejecting the idea that consciousness is non-physical in the first place; it’s a field as potent as any other.

In this view, your subjective experience of a decision to raise your arm isn’t just an output of neural activity that was already happening; the conscious state itself, as a field, could be what causes the corresponding neural cascade. The conscious “mind” could literally move the “brain.”

So really, Hodak’s “final leap” here is to take the mathematical machinery that describes the brain’s activity and ask what it would mean for that to be more than a useful description. What if it’s a classical field, with effects on matter? Aside from being just straight-up interesting — it would also open up a host of new, falsifiable scientific and philosophical questions about consciousness.

(Maybe this physically explains why the whole Useful Ways of Looking concept is so useful?)


He then goes into some further very interesting questions I can’t help but briefly mention, wandering into what would happen if you tried to turn this classical field theory into a quantum field theory, and then asking whether perhaps the field could be a non-PoincarĂ©-invariant gauge field whose excitation states we could describe as “qualia” — specific subjectively-experienced qualities or properties.

That is, roughly, might it be that this “consciousness field” doesn’t yield the same results for all observers — and that is why subjective experiences may vary?

Is consciousness universal?

So, jeez — there’s a lot there already. But in watching and considering this, I couldn’t help but consider a possible extension even further.

Hodak’s view on the segment of the talk discussed above is that it is speculative and relatively empirically unsubstantiated (although quite logical!), and I would emphasize that my further thoughts below are even less substantiated (and perhaps also less logical).

His framework is compelling in grounding the subjective in the physical. But its focus remains on the individual brain as the complete system generating consciousness. What if, instead, the brain were not the generator but the receiver? This is where I’d like to explore an extension: that the same approach can be reinterpreted not as creating a brain-specific field, but as mediating a universal one.

I mentioned up top that I have a slight semantic disagreement with Hodak’s definition of “consciousness.” I think of what he is describing throughout as “subjective experiences by a brain.” Or perhaps — Sina’s suggestion to me — “qualia.”

It’s a sort of materialist vs. panpsychist (really cosmopsychist, I think — but panpsychism is more common to discuss) distinction. My feeling — and again, it’s nothing more than a feeling — is that consciousness is something closer to a universal field. Whether it is always mediated neuronally but beyond the context of a single brain, or mediated otherwise, with neurons as merely the first medium in which we have observed it, I feel it’s unlikely to be restricted to a single brain as its generating system.

Now, if this were true, we’d have to untangle some conflict with Hodak’s theory. One of his three facets feeding into his proposed field theory of consciousness is the fiber bundle defining the system’s symmetries, and that fiber bundle is specific to the brain we are examining and its particular sensory inputs and capabilities.

In a universal field model, we’d need a new set of symmetries that constrains the behavior of the overall field. This strikes me as quite a bit harder to formalize, but I also don’t see anything that makes it impossible. As mentioned above, Hodak already proposes the possibility that the gauge field could be non-PoincarĂ©-invariant. I believe he is talking about the “single brain system” view of the field, but the same point could help to address the “universal system” view.

Maybe the universal symmetries are not defined by the brain’s architecture, but are a fundamental component of the universal consciousness field itself, just as I understand Lorentz invariance is a fundamental symmetry of spacetime. Our brains, then, might have evolved to process information in a manner compatible with those underlying rules.

But it’s not like Hodak’s brain-specific fiber bundle would disappear. Its role would just shift. Instead of generating consciousness, it would serve to collapse the universal field into a specific, limited, and stable set of subjective experiences. A bat’s brain, via its own fiber bundle, is in some sense “tuned” to different “frequencies” than ours, resulting in a different subjective reality, even though that fiber bundle is mediating the same underlying field.

In some sense, Hodak’s proposal is a bit like asking “what if each of our brains had their own electromagnetic field, governed by the same overall formalism but different characteristics?” But mine is “what if all our brains (and bodies, and things that aren’t even a part of ‘us’) are all in the same electromagnetic field, and the characteristics of each of our brains (including their own fiber bundles — their specific sensory inputs and corresponding symmetries) cause different behavior within the region of spacetime our brain occupies?”

The latter is the reality of electromagnetism — that simple analogy certainly doesn’t actually do anything to prove it also applies to consciousness, but it gets at the root of the extension.

To me, consciousness is the universal quality of awareness. It transcends the individual. Subjectivity, on the other hand, is the experience of consciousness as filtered through the lens of your own mind. (Through Ways of Looking, perhaps?)

Why do we feel separate?

If consciousness is indeed a universal field, it raises an obvious question: why do we — by default — feel so alone in our heads? Why are our subjective experiences so seemingly-private? This is sometimes referred to as the “combination problem” of panpsychism.

(For simplicity’s sake, I’ve been discussing the interplay of such a “universal field” with “each of us,” implying a focus on human brains and their subjective experiences. But if you take all this at face value and chase it down to its natural conclusions, you’ll find that it suggests everything — or at least anything that expends energy to impose some sort of feedback control on the universe, whether a thermostat regulating a room’s temperature or a star balancing the outward pressure of nuclear fusion against the inward pull of gravity — is conscious at some level. Not in the same way, but in some way. So then you wonder how those “little bits of consciousness” add up to, say, “my (Andy’s) experience of consciousness.”)

Maybe it goes back to the free-energy principle. The brain is an active, predictive agent constantly working to minimize its “surprise” (a.k.a. variational free energy). It builds a sort of internal model of the world and tries to make its sensory inputs accurately fit that model with as little work as possible.

From that perspective, our default state of perceived-separation is a matter of thermodynamic efficiency. Attempting to also model the sensory inputs and internal subjective states of other brains — especially without direct access to their “input devices” — would be extremely energy-intensive and, in practice, not particularly useful for purposes of minimizing surprise.

So it could be that our individual-seeming subjectivity is not a fundamental feature of the universe, but an emergent property of thermodynamic efficiency.

I might also take a slightly-“woo” step forward and suggest that this framework allows for an interesting self-reinforcing loop:

To the extent we are educated and believe that such cross-brain consciousness is impossible, it becomes an even less “useful” type of work for our brains to do. If a brain’s predictive model is built upon the belief that consciousness is fundamentally separate, then any data suggesting otherwise would generate immense “surprise.”

The free-energy principle dictates that the brain should actively explain away or ignore such data to maintain its model. This offers a potential mechanism for how a cultural or personal belief in separation could, through downward causation, physically reinforce the very neural dynamics that produce the experience of separation.


…and how might we not be separate?

Maybe there are ways to “lower the energy barrier” towards even-slightly-more shared consciousness experiences.

A tempting mechanism to bridge the gap is some sort of quantum entanglement. While the common caricature of using quantum mechanics to explain consciousness often faces justified skepticism (our brains are warm, wet, and noisy — not a good environment for coherent quantum states to stick around), certain interpretations of quantum mechanics could offer a sketch of a path forward. (For some hopefully-helpful intuition-building on quantum mechanics, see this post.)

Earlier in his talk, Hodak cites physicist Carlo Rovelli for his work on Memory and entropy. But another of Rovelli’s ideas — relational quantum mechanics (RQM) — may well help here.

RQM suggests that there is no singular, true account of the state of reality. Rather, different observers (not just humans or measurement devices, but any object — which, under our theory above, probably also happens to be at least a little bit conscious) can have differing, accurate accounts of the same system. One might see a system in a decohered, collapsed state. Another might see the same system in a superposition of multiple states. The very idea of “state” must be relative to some observer in RQM.

RQM effectively reframes reality not as a collection of objects with fixed properties, but as a network of interactions. A “fact” about a system only materializes when it interacts with another system — an observer.

Crucially, an “interaction” in RQM could be characterized as the kind of “sensory observation” that a free-energy principle agent must work to predict. When two FEP agents (like human brains) interact, they are not just passively exchanging data; they are actively and mutually actualizing relational properties between them.

This — with our ideas above — suggests that phenomenal coherence could be more than a metaphor.

Concerts, group meditations, even just “the energy in the room shifting” — these all feel like times when subjective experience (which, again, we’re talking about as deriving from the universal consciousness field being mediated through our own brains and their fiber bundle symmetries) begins to cohere a bit. Many people report that simply being open to perceiving others’ experiences can help to make it so.

Maybe it’s as simple as shared attention to a strong, common external signal (the music at a concert) being fed into hundreds of vaguely-similar brains simultaneously. Maybe it’s individual attention to vaguely-similar internal states (your breath in a meditation). In either case, the feedback loops between the brains become correlated, and the perceived “vibe” could emerge as a real, relational property of the entire interacting system. The brains have, temporarily, lowered the energetic cost of modeling each other.

While the “binding signal” between brains is by default low-bandwidth and lossy as compared to the “binding signal” within a single brain, it feels that we can sometimes, in certain circumstances, get at least a few bits (or qualia) of shared experience.

So — and again, we’re far off the reservation at this point, many leaps away from empirical evidence for what we’re exploring — it could be that varied observers can partially-synchronize their subjective experiential states by entangling in some interactive way, knowingly or unknowingly, and then perceiving the same universal consciousness field, as mediated by their individual-but-entangled brains’ fiber bundles.

And so maybe, with the right tools or techniques, the seeming walls that create our individual subjective moments could be not quite as solid as they appear. And maybe those very subjective experiences could have their own downward causative effects on different brains.

It’d really be cool to have a theory for all that, wouldn’t it?



And I'd love to hear from you directly: andy@andybromberg.com