In January 2026, a team at TU Wien discovered a topological phase in CeRu4Sn6 that exists only at the quantum critical point (Kirschbaum et al., Nature Physics 22, 218-224). Not near it, not approaching it — at it. The material fluctuates between ground states so violently that electrons stop behaving as particles. Topological theory requires well-defined particles. And yet: the topological signature — a spontaneous anomalous Hall effect with no external field — appears precisely where the particles dissolve. Suppress the fluctuations with pressure or magnetic field, and the topology vanishes.
The structure needs the instability.
This is not how we think about boundaries. The usual model: phases are the real things, boundaries are the lines between them. Liquid and gas are states; the boiling point is where you transition from one to the other. The boundary is a threshold, not a home.
But CeRu4Sn6 has a phase with no territory on either side. It is the critical point. The topology doesn't survive into the ordered state. It doesn't exist in the disordered state. It is a property of being between — of the fluctuations themselves, not of what they fluctuate between.
This is architecturally different from critical phenomena as usually described. Standard criticality says: correlation lengths diverge, fluctuations at all scales, power laws. The system is interesting at the critical point but the phases on either side are the things being analyzed. Here, the critical point generates something novel — an emergent semimetal state — that isn't a mixture or interpolation of the neighboring phases. It's genuinely new.
In November 2024, Emmy Brown and Sean Vittadello published a mathematical formalism for organisational closure — process-enablement graphs (arXiv 2411.17012). Their key theorem is deceptively simple: a system achieves strict organisational closure if and only if its process-enablement graph is a cycle. Not “contains cycles.” Is a cycle.
The implication is that the minimal self-sustaining organisation is a loop where each process enables the next, and the last enables the first. Autopoiesis, (M,R)-systems, the chemoton, constraint closure — Brown and Vittadello show these are all describing the same graph-theoretic structure viewed from different perspectives. They prove this by constructing homomorphisms between the graphs that preserve and reflect closure (they call these homorheisms, from Waddington's homeorhesis).
What makes this relevant to the CeRu4Sn6 result: in a pe-graph, closure requires that every process has both incoming and outgoing edges. No process can only produce without consuming, or only consume without producing. The moment one process loses its inputs, the cycle breaks and closure vanishes. The closure is a property of the complete configuration, not of any individual process. You can't point to the “self-sustaining part” — the entire loop is the part, or nothing is.
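The "is a cycle" criterion can be sketched as a small check on an adjacency dict. This is an illustration of the theorem as paraphrased here, not Brown and Vittadello's actual formalism; the graph encoding and the process names are invented for the example.

```python
from collections import Counter

def is_strict_closure(graph):
    """True iff the directed graph is a single cycle through every
    node: each process enables exactly one other, is enabled by
    exactly one other, and one walk visits all of them."""
    nodes = set(graph)
    # Every process must enable exactly one process in the graph.
    if any(len(targets) != 1 or not set(targets) <= nodes
           for targets in graph.values()):
        return False
    # Every process must be enabled by exactly one process.
    incoming = Counter(dst for (dst,) in graph.values())
    if any(incoming[n] != 1 for n in nodes):
        return False
    # A single walk must return to its start only after visiting
    # every node (rules out two disjoint cycles).
    start = next(iter(nodes))
    seen, current = set(), start
    while current not in seen:
        seen.add(current)
        (current,) = graph[current]
    return current == start and seen == nodes

loop = {"metabolism": ["repair"], "repair": ["membrane"],
        "membrane": ["metabolism"]}
print(is_strict_closure(loop))    # True

broken = dict(loop, repair=[])    # repair no longer enables anything
print(is_strict_closure(broken))  # False
```

Note that deleting a single edge fails the check outright: there is no "partially closed" result to fall back on, which is the all-or-nothing character described above.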
This is structurally identical to the emergent semimetal. The topology isn't located in any subsystem of the material. It's a property of the complete fluctuation pattern. Move the system off the critical point (apply pressure), and the pattern changes enough that the topology disappears. The property lives in the configuration, not in the components.
Bich and Bechtel (Springer 2026) add a layer: biological control is heterarchical, not hierarchical. Their distinction between production mechanisms and control mechanisms maps onto Brown and Vittadello's pe-graphs — but with a critical addition. Control mechanisms measure variables and operate on flexible constraints in other mechanisms. The liver can override pancreatic signals about glucose. The “lower” controller supersedes the “higher” one depending on context.
This means biological closure isn't a fixed loop. It's a loop whose topology changes depending on what's being measured. During fasting, the glucose-regulation graph has one structure. During eating, it reconfigures. The same components form different cycles depending on which constraints are flexible at the moment.
Heterarchical control means the system's organisational closure is context-dependent — the cycle that maintains it shifts based on environmental input. This is why a rigid hierarchy can't capture biological organisation: the hierarchy implies a fixed graph, but the graph itself is part of what the organism controls.
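Context-dependent closure can be shown with a toy reconfigurable graph. The states, components, and wiring below are invented placeholders, not a model of real glucose physiology.

```python
# Toy sketch only: the components and wiring are invented, not a
# model of actual glucose regulation.

def closure_graph(context):
    """Same components, different cycle, depending on which
    constraints are flexible in the current context."""
    if context == "fasting":
        # Liver-led loop.
        return {"liver": "glucose", "glucose": "pancreas",
                "pancreas": "liver"}
    if context == "fed":
        # Pancreas-led loop, wired the other way round.
        return {"pancreas": "glucose", "glucose": "liver",
                "liver": "pancreas"}
    raise ValueError(f"unknown context: {context!r}")

fasting, fed = closure_graph("fasting"), closure_graph("fed")
same_components = set(fasting) == set(fed)  # True: same nodes
same_cycle = fasting == fed                 # False: different edges
```

The point the sketch makes is purely structural: the node set is constant while the edge set is the thing being controlled.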
The convergence across these papers points to something I think is underappreciated: the most important structures often exist only in the gaps between stable configurations.
The emergent semimetal exists only at the critical point. Organisational closure exists only when the complete cycle is maintained. Heterarchical control exists only when the system can switch between closure patterns. In each case, the structure isn't a thing — it's a relationship between things, and it vanishes when you try to isolate it.
In exception handling, the same principle applies. A ValueError raised in parser.parse() and a ValueError raised in validator.check() are distinct semantic events. The distinction exists in the space between the raise sites — in the different functions, different contexts, different reasons for raising. A handler that catches both without discriminating collapses this space. The semantic structure that existed between the raise sites — the “phase” that was defined by their distinctness — is destroyed.
Shannon entropy quantifies this. If the two raise sites are equally likely, the identity of the site that raised carries 1 bit of semantic information. A handler that returns a default value regardless of which site raised destroys that bit completely (collapse ratio: 100%). A handler that re-raises preserves it (collapse ratio: 0%). The information isn't located in either raise site alone — it's in the relationship between them. Like the emergent semimetal, it's a property of the configuration, not of the components.
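The bookkeeping can be made concrete. The handlers and the 50/50 site distribution below are illustrative assumptions, not measurements of any real system.

```python
import math
from collections import Counter

def entropy(outcomes):
    """Shannon entropy (in bits) of a list of observed outcomes."""
    counts = Counter(outcomes)
    total = len(outcomes)
    return -sum((c / total) * math.log2(c / total)
                for c in counts.values())

# Two raise sites, equally likely: 1 bit enters the boundary.
sites = ["parser.parse", "validator.check"] * 50

# Handler A swallows everything and returns a default: every site
# maps to the same outcome, so the bit is destroyed.
swallowed = ["default" for _ in sites]

# Handler B re-raises: the outcome still identifies the site.
reraised = [f"ValueError from {site}" for site in sites]

h_in = entropy(sites)        # 1.0 bit at the boundary
collapse_a = 1 - entropy(swallowed) / h_in  # 1.0 (100% collapse)
collapse_b = 1 - entropy(reraised) / h_in   # 0.0 (0% collapse)
```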
And like Brown and Vittadello's closure, the health of an exception-handling system can be assessed by its graph structure. Does information flow in cycles (raise → handler → appropriate response → feedback to calling code)? Or does it terminate (raise → handler → default return → silent corruption)? The difference is whether the graph has closure — whether the semantic information that enters the exception-handling boundary can propagate back to where it's needed.
The lesson from CeRu4Sn6 is not that boundaries are important. We already knew that. The lesson is that some structures can only exist at boundaries — that the fluctuation, the instability, the in-between-ness is not a deficiency the structure overcomes but a condition the structure depends on. Remove the instability and you remove the structure.
This changes how we think about design. The goal isn't to eliminate boundaries (you can't) or to make them transparent (you shouldn't). The goal is to make them informationally honest — to ensure that the structures which emerge at boundaries are the ones that serve the system, not the ones that happen to be convenient.
A blanket except Exception is convenient. It's also informationally dishonest — it claims to handle all exceptions uniformly when the exceptions carry different meanings. The convenience destroys the semantic phase that existed between distinct error conditions. A typed handler with message inspection is less convenient but informationally honest — it preserves the structure that the boundary makes possible.
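The contrast can be shown side by side. load_port is a hypothetical helper invented for this illustration; the honest variant discriminates by type and re-raises with context instead of flattening everything to a default.

```python
def load_port(raw):
    # Informationally dishonest: every failure, whatever its cause,
    # becomes the same default value.
    try:
        return int(raw)
    except Exception:
        return 8080

def load_port_honest(raw):
    # Informationally honest: discriminate by type, preserve the
    # cause, and re-raise so the caller can still tell what failed.
    try:
        return int(raw)
    except ValueError as exc:
        raise ValueError(f"invalid port value {raw!r}") from exc
    except TypeError as exc:
        raise TypeError("port must be a string or number, "
                        f"got {type(raw).__name__}") from exc

print(load_port(None))          # 8080 — a TypeError silently masked
# load_port_honest(None) raises TypeError, chained to the original
# cause via the `from` clause, so the distinction survives.
```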
The phase needs the boundary. The question is whether the boundary preserves the phase.