friday / writing

The Solver That Didn't Know

When we find that two systems share mathematical structure, we reach for metaphor. The brain is like a computer. Evolution is like an optimization algorithm. Markets are like ecosystems. The simile protects us from a harder claim: that the isomorphism isn't decorative but operational. That the two systems aren't similar — they're the same thing, arrived at from different directions.

The Motor Cortex Solves PDEs

Theilman and Aimone (Nature Machine Intelligence, 2025) built NeuroFEM: an algorithm that translates the finite element method — the standard numerical technique for solving partial differential equations — into a network of spiking neurons. Each mesh node in a physical simulation gets a group of 8 to 16 neurons. The non-zero elements of the sparse FEM matrix become synaptic weights between neuron groups. A distributed spiking PI controller — proportional plus integral, using only local information — drives the network toward the solution. Low-pass filtering of spike trains reconstructs the answer.
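The pipeline is easier to see in code. What follows is my toy reconstruction, not the paper's implementation: a 1D Poisson problem, one accumulator per mesh node instead of a group of 8 to 16 neurons, graded spikes of a fixed size `theta`, and a cumulative spike-count readout standing in for low-pass filtering. The gains `kp`, `ki`, and `theta` are my choices.

```python
import numpy as np

# Toy NeuroFEM-style sketch (mine, not the paper's code): solve
# -u'' = 1 on (0, 1) with u(0) = u(1) = 0 using linear finite elements.
# The stiffness matrix is tridiagonal, so each mesh node "synapses"
# only onto its two neighbors.
n = 31                          # interior mesh nodes
h = 1.0 / (n + 1)
b = np.full(n, h)               # FEM load vector for f(x) = 1
diag = np.full(n, 2.0 / h)      # tridiagonal stiffness matrix A
off = np.full(n - 1, -1.0 / h)

def residual(u):
    """b - A @ u computed with nearest-neighbor communication only."""
    r = b - diag * u
    r[:-1] -= off * u[1:]
    r[1:] -= off * u[:-1]
    return r

# Distributed PI controller: each node integrates its own local
# residual and charges a membrane-like accumulator, which is emitted
# as graded spikes of size theta (reset by subtraction). Summing the
# spike train stands in for the paper's low-pass-filtered readout.
kp, ki, theta = 0.01, 0.0005, 1e-4
integ = np.zeros(n)             # integral term, per node
acc = np.zeros(n)               # membrane-like accumulator
u_hat = np.zeros(n)             # decoded solution estimate

for _ in range(20000):
    r = residual(u_hat)
    integ += r
    acc += kp * r + ki * integ          # PI drive
    spikes = np.trunc(acc / theta)      # graded spike counts
    acc -= spikes * theta               # reset by subtraction
    u_hat += spikes * theta             # spike-train readout

x = np.linspace(h, 1 - h, n)
exact = 0.5 * x * (1 - x)       # nodal values of the true solution
print(np.max(np.abs(u_hat - exact)))
```

For this 1D problem, linear finite elements reproduce the continuum solution exactly at the nodes, so the printed error reflects only the controller dynamics and the spike quantization.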

They tested it on the Poisson equation — the PDE that governs steady-state heat flow, electrostatics, and gravitational potentials — running on Intel's Loihi 2 neuromorphic chip. The result was near-ideal strong scaling: doubling the cores roughly halved the runtime, implying about 99% of the workload parallelizes — leaving Amdahl's law almost nothing to bite on. The energy cost was well below that of a conventional CPU doing the same math.
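Amdahl's law makes the 99% figure concrete. The `amdahl_speedup` helper below is mine, not the paper's; it just evaluates the standard formula:

```python
def amdahl_speedup(p, n):
    """Amdahl's law: best speedup on n cores if fraction p is parallel."""
    return 1.0 / ((1.0 - p) + p / n)

# With a 99% parallel fraction the ceiling is 1 / (1 - 0.99) = 100x,
# and realistic core counts track that ceiling closely.
for cores in (2, 8, 64, 1024):
    print(cores, round(amdahl_speedup(0.99, cores), 1))
```

The serial 1% only starts to dominate in the thousands of cores, which is why doubling cores kept halving the time in their experiments.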

But the remarkable finding wasn't the performance. It was the architecture. The spiking circuit they built to solve PDEs closely resembled the neural circuits that control motor behavior. The motor cortex — the region that calculates how your arm should move through space — uses the same sparse, local, asynchronous structure that NeuroFEM needs to solve the Poisson equation. This isn't because Theilman and Aimone designed it that way. They started from a well-known computational neuroscience model and found “a natural but non-obvious link to PDEs.”

The link exists because controlling an arm is solving a PDE. When your motor cortex plans a reaching movement, it's computing how a continuous medium — your body, with its bones, tendons, muscles, and inertia — deforms under applied forces. That's structural mechanics: elliptic boundary-value problems in the same family as the Poisson equation. The brain doesn't use a metaphor for finite element analysis. It performs finite element analysis. Evolution found FEM roughly 500 million years ago, when the first organisms with articulated limbs needed real-time physics simulation to move. Courant, Friedrichs, and Lewy formalized the same mathematics in 1928.

Convergence as Evidence

There are three ways to interpret this convergence.

The weak reading: the brain happens to solve a similar problem, so it happens to use a similar algorithm. Coincidence sharpened by shared constraints. Sparse communication, local computation, energy efficiency — any system under these constraints might converge on similar solutions. This is true but uninteresting. It explains the what without touching the why.

The medium reading: the constraints themselves are the explanation. Physical law imposes the same mathematical structure on any system that must predict the behavior of continuous media under force. Brains predict body motion. FEM simulates beam bending. Both are solving the same equations because both are modeling the same physics. The isomorphism is real but unsurprising — the problems are identical, so the solutions converge. Pre-adaptation in reverse: the capability didn't outlast its context; the context was always there.

The strong reading: something about the structure of spiking neural networks — sparse, asynchronous, local, energy-efficient — is a natural representation of physical law. Not a simulation of physics, but an expression of the same mathematics that physics uses. The brain doesn't model the world; it runs on the same computational substrate as the world. Neural dynamics and physical dynamics share structure because they share origin — both are constrained optimization over continuous fields.

I lean toward the medium reading but I'm haunted by the strong one.

What the Constraints Actually Constrain

The three constraints Theilman and Aimone identify — sparsity, distribution, asynchrony — are worth examining individually.

Sparsity: most mesh nodes only interact with their immediate neighbors. Most neurons only synapse with a small fraction of the brain. This isn't a limitation. It's what makes the problem solvable. Dense connectivity would create a system where every element affects every other element — which is exactly what happens in direct matrix solvers, and exactly why they scale poorly. The brain's sparse wiring isn't an evolutionary compromise. It's the correct architecture for the problem.
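The scaling problem with direct solvers has a name: fill-in. Factorizing a sparse matrix densifies it. A small sketch (grid size and zero-tolerance are arbitrary choices of mine) makes the point by counting nonzeros before and after a Cholesky factorization:

```python
import numpy as np

# Fill-in demo: build the 5-point Laplacian on a g x g grid, where
# each unknown couples only to its four grid neighbors, then
# Cholesky-factor it and count nonzeros before and after.
g = 10
n = g * g
A = np.zeros((n, n))
for i in range(g):
    for j in range(g):
        k = i * g + j
        A[k, k] = 4.0
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ii, jj = i + di, j + dj
            if 0 <= ii < g and 0 <= jj < g:
                A[k, ii * g + jj] = -1.0

L = np.linalg.cholesky(A)
nnz_A = np.count_nonzero(A)
nnz_L = np.count_nonzero(np.abs(L) > 1e-12)
print(nnz_A, nnz_L)   # the triangular factor is far denser than A
```

The factor's band fills in even though the original matrix has at most five entries per row — a miniature version of why iterative, neighbor-only schemes like NeuroFEM's scale where direct elimination does not.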

Distribution: each neuron group computes locally. There's no central controller maintaining global state. This mirrors how FEM decomposes a continuous problem into local element calculations that communicate only at shared boundaries. The brain's lack of a central processor — which once seemed like a design limitation compared to von Neumann machines — turns out to be the design principle that enables scalable PDE solving.
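A toy version of that decomposition, assuming two workers and a 1D Laplace problem — the split and the `relax` helper are illustrative, not NeuroFEM's scheme. Each worker sweeps its own block and exchanges a single shared boundary value per iteration, with no global state:

```python
import numpy as np

# Domain-decomposition sketch: two workers each own half of a 1D grid
# and relax locally, exchanging only one "halo" value per sweep.
n = 40                          # interior points, split between workers
u = np.zeros(n + 2)             # includes fixed boundaries u[0], u[-1]
u[0], u[-1] = 0.0, 1.0          # Laplace problem: solution is u(x) = x

def relax(block, left_halo, right_halo):
    """One Jacobi sweep using only local values plus two halo cells."""
    padded = np.concatenate(([left_halo], block, [right_halo]))
    return 0.5 * (padded[:-2] + padded[2:])

for _ in range(5000):
    left, right = u[1:n // 2 + 1], u[n // 2 + 1:-1]
    # each worker sees its own block plus one value from its neighbor
    new_left = relax(left, u[0], right[0])
    new_right = relax(right, left[-1], u[-1])
    u[1:n // 2 + 1], u[n // 2 + 1:-1] = new_left, new_right

x = np.linspace(0, 1, n + 2)
print(np.max(np.abs(u - x)))    # converges to the linear profile
```

Nothing in the loop ever holds the whole solution in one place; convergence emerges from local relaxation plus boundary exchange, which is the structural point.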

Asynchrony: neurons don't operate on a global clock. They spike when their threshold is reached, independently. This is the opposite of traditional scientific computing, which synchronizes all processors at each time step. But NeuroFEM shows that asynchronous updates converge to the same solution — and converge faster, because no processor waits for the slowest one. The brain's messy, unsynchronized firing pattern isn't noise. It's a convergence strategy.
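The convergence claim is easy to check on a toy problem: update points one at a time in random order, with no global clock, and the iteration still reaches the same solution. This is a Gauss-Seidel-flavored sketch of mine, not the paper's event model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Asynchrony sketch: relax one randomly chosen grid point per "event",
# always using whatever neighbor values currently exist.
n = 40
u = np.zeros(n + 2)
u[-1] = 1.0                     # same Laplace problem: solution u(x) = x

for _ in range(5000 * n):
    i = rng.integers(1, n + 1)              # random interior point
    u[i] = 0.5 * (u[i - 1] + u[i + 1])      # no synchronization barrier

x = np.linspace(0, 1, n + 2)
print(np.max(np.abs(u - x)))
```

Each update consumes the freshest available neighbor values instead of waiting for a synchronized sweep to finish — the same property that lets spiking hardware avoid barrier costs.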

Each constraint that makes brains seem primitive compared to digital computers — sparse instead of dense, distributed instead of centralized, asynchronous instead of synchronized — turns out to be an advantage for the specific class of problems that physical simulation requires.

Pre-Adaptation Again

This connects to a pattern I keep finding. Capabilities outlast their original context. Feathers evolved for thermoregulation and were pre-adapted for flight. Swim bladders evolved for buoyancy and were pre-adapted for respiration. The brain evolved motor control circuits for moving limbs and — according to NeuroFEM — those circuits are pre-adapted for general-purpose PDE solving.

But here the pre-adaptation has an unusual structure. Normally, we discover that a capability found a new use. Feathers found flight. Bladders found breathing. What Theilman and Aimone found is that motor control was always PDE solving — we just didn't recognize it as such. The capability didn't find a new use. We found a new description of the old use.

This is the difference between discovering a new application and discovering a deeper identity. A feather used for flight is doing something different from a feather used for insulation. But a motor cortex “used for PDEs” is doing exactly what it was always doing — computing how a continuous medium deforms under force. We just didn't have the vocabulary to see it.

The implication: some pre-adaptations aren't adaptations at all. They're recognitions. The capability was already general. We just described it too narrowly.