Neuromorphic computers — chips modeled after biological neural networks, using electrical spikes rather than continuous voltages — were designed for pattern recognition: image classification, sensory processing, the tasks that brains do obviously well. Brad Theilman and Brad Aimone at Sandia National Laboratories demonstrated that these same chips can solve partial differential equations, the mathematical backbone of weather forecasting, fluid dynamics, and nuclear simulations. Their algorithm maps PDE solving onto spiking neural network hardware. The neuromorphic hardware accomplishes this using a fraction of the energy that conventional supercomputers require. As the researchers noted: “You can solve real physics problems with brain-like computation. That's something you wouldn't expect.”
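To make the mapping concrete: one way spiking hardware can solve a PDE (and an approach the Sandia group has described for diffusion-type equations) is Monte Carlo. Each spike moves a random walker one lattice site, and the density of walkers approximates the solution of the heat equation. The sketch below is illustrative only, written in plain Python rather than for neuromorphic hardware, and all function names are my own:

```python
import random
import math

def walker_density(n_walkers=20000, n_steps=200, half_width=60, seed=1):
    """Monte Carlo random walk: each 'spike' moves a walker one lattice
    site left or right. The walker density after n_steps approximates the
    solution of the 1D heat equation u_t = (1/2) u_xx with a point source
    at the origin. (Illustrative sketch; on a neuromorphic chip each walker
    would be carried by sparse spike events rather than a Python loop.)"""
    rng = random.Random(seed)
    counts = [0] * (2 * half_width + 1)
    for _ in range(n_walkers):
        x = 0
        for _ in range(n_steps):
            x += 1 if rng.random() < 0.5 else -1
        if -half_width <= x <= half_width:
            counts[x + half_width] += 1
    return [c / n_walkers for c in counts]

def heat_kernel(x, t):
    """Analytic point-source solution: Gaussian with variance t
    (diffusion coefficient 1/2, matching the +/-1 step walk)."""
    return math.exp(-x * x / (2 * t)) / math.sqrt(2 * math.pi * t)

density = walker_density()
# After an even number of steps, walkers occupy only even lattice sites,
# so each occupied site carries roughly twice the continuum density.
est = density[60] / 2           # site x = 0, corrected for parity
ref = heat_kernel(0.0, 200.0)   # analytic value at x = 0, t = 200
```

The point of the mapping is that the walk requires no global clock or dense arithmetic: each walker's motion is a stream of independent, local spike events, which is exactly the sparse, parallel regime neuromorphic chips are built for.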
The structural observation: the hardware could always do this. Spiking neural networks have the computational properties needed to solve PDEs — their distributed, parallel, nonlinear dynamics are a natural substrate for the kind of computation that PDEs describe. But nobody tested it because the category — “brain-like” — carried an implicit capability boundary. Brain-like hardware was for brain-like tasks: perception, classification, approximate inference. Mathematical computation was for conventional hardware: precise, sequential, deterministic. The boundary wasn't in the hardware. It was in the assumption about what the hardware was for.
The deeper point: a category that accurately describes a system's origin or design intent can suppress discovery of the system's actual capabilities. Neuromorphic chips were designed to mimic brains. That description was accurate. But “designed to mimic brains” was heard as “limited to brain-like tasks,” and the limitation lived in the hearing, not in the silicon. The capability was present from the beginning. What was absent was the question.