
The Useful Redundancy

2026-03-10

A well-engineered communication system eliminates redundancy. Each bit should carry unique information. Repeating the same signal across multiple channels wastes bandwidth. The efficient coding hypothesis applies this principle to the brain: neurons should minimize redundancy so that each cell contributes something the others do not. Learning, under this view, should make neural representations more efficient — less redundant, more informative per neuron.
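To make "redundancy" concrete, here is a toy sketch, not from the paper: two binary neurons each report a binary stimulus, flipping their answer with some noise probability, and redundancy is measured as the summed single-neuron information minus the joint population information. The flip-noise model and all numbers are illustrative assumptions.

```python
import math
import random

random.seed(0)

def mutual_info(pairs):
    """Empirical mutual information I(A;B) in bits from (a, b) samples."""
    n = len(pairs)
    pa, pb, pab = {}, {}, {}
    for a, b in pairs:
        pa[a] = pa.get(a, 0) + 1
        pb[b] = pb.get(b, 0) + 1
        pab[(a, b)] = pab.get((a, b), 0) + 1
    mi = 0.0
    for (a, b), c in pab.items():
        p = c / n
        mi += p * math.log2(p / ((pa[a] / n) * (pb[b] / n)))
    return mi

def simulate(flip, n=200_000):
    # Two neurons each report a binary stimulus s, flipping with prob `flip`.
    samples = []
    for _ in range(n):
        s = random.randint(0, 1)
        x1 = s ^ (random.random() < flip)
        x2 = s ^ (random.random() < flip)
        samples.append((s, x1, x2))
    i1 = mutual_info([(s, x1) for s, x1, _ in samples])
    i2 = mutual_info([(s, x2) for s, _, x2 in samples])
    ij = mutual_info([(s, (x1, x2)) for s, x1, x2 in samples])
    # Redundancy: how much single-neuron information overlaps.
    return i1 + i2 - ij

print(simulate(flip=0.4))   # noisy neurons: little overlapping information
print(simulate(flip=0.05))  # reliable neurons: strongly overlapping information
```

In this toy world, redundancy is exactly what efficient coding says to eliminate: when both neurons reliably report the stimulus, most of their summed information is shared rather than unique.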

Liu, Pletenev, Haefner, and Snyder (Science 391:1029, 2026) tracked neural populations in macaque visual cortex as monkeys learned a visual discrimination task over weeks. They found the opposite. Learning increased redundancy. Neurons became more correlated, not less, with multiple cells carrying overlapping information about the stimulus.

The increase was not a failure of efficiency. It was a consequence of Bayesian inference. The brain's visual system does not appear to be a discriminative classifier — a system optimized to label inputs correctly using the minimum information. It appears to be a generative model — a system that reconstructs the probable causes of its sensory input. Generative inference requires building an internal model of the entire scene, not just extracting the task-relevant feature. The redundancy reflects multiple neurons encoding the same generative model, which provides robustness and supports tasks beyond the specific one being trained.
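The distinction can be sketched in a few lines, again as a hypothetical toy rather than the paper's model: generative inference means computing a posterior over candidate causes of a noisy observation, via a prior over causes and a likelihood of observations given each cause, rather than just emitting a label. The Gaussian likelihood and the specific causes are assumptions for illustration.

```python
import math

def posterior(x, causes, prior, noise_sd=1.0):
    """p(s | x) ∝ p(s) · N(x; s, noise_sd²): reconstruct the probable cause s."""
    likes = {
        s: prior[s] * math.exp(-(x - s) ** 2 / (2 * noise_sd ** 2))
        for s in causes
    }
    z = sum(likes.values())
    return {s: l / z for s, l in likes.items()}

# An observation of 0.8 is best explained by the cause at 1.0,
# but the posterior keeps the full distribution over the scene,
# not just the winning label.
print(posterior(0.8, causes=[0.0, 1.0], prior={0.0: 0.5, 1.0: 0.5}))
```

The point of the sketch is what gets retained: a discriminative classifier could discard everything except the decision boundary, while the posterior is a model of the whole input that remains useful for questions the system was never trained on.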

The redundancy increased both across weeks of training and within single trials. As a trial unfolded, the neural population became more correlated — the generative model was being assembled in real time, each neuron's response converging on the same explanation of the sensory input. Individual neurons became more informative, not because they eliminated overlap with their neighbors, but because each neuron's activity reflected a better model. The redundancy was not wasted bandwidth. It was convergent evidence.
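The robustness that redundancy buys can be illustrated with a minimal simulation, again an assumption-laden toy rather than anything from the paper: many neurons each carry a noisy copy of the same latent estimate, the readout averages the survivors, and we ask how much the readout degrades when neurons are lost.

```python
import random
import statistics

random.seed(1)

def population_estimate(n_neurons, drop_fraction, noise_sd=1.0,
                        latent=3.0, trials=2000):
    """Mean squared readout error when a fraction of the population is lost."""
    errors = []
    for _ in range(trials):
        # Each neuron redundantly encodes the same latent plus private noise.
        responses = [latent + random.gauss(0, noise_sd)
                     for _ in range(n_neurons)]
        survivors = responses[:max(1, int(n_neurons * (1 - drop_fraction)))]
        errors.append((statistics.fmean(survivors) - latent) ** 2)
    return statistics.fmean(errors)

# Redundancy buys robustness: losing half of a large population
# barely hurts the readout, while a tiny population is fragile.
print(population_estimate(100, drop_fraction=0.0))
print(population_estimate(100, drop_fraction=0.5))
print(population_estimate(2, drop_fraction=0.5))
```

Because every surviving neuron is reporting the same explanation, the error of the averaged readout scales like 1/n in the survivors, which is one way to read "convergent evidence": the overlap is what makes the code degrade gracefully.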

The efficient coding hypothesis asks: how should the brain encode inputs? The generative hypothesis asks: how should the brain explain inputs? Encoding minimizes redundancy. Explaining maximizes coherence. They predict opposite changes under learning, and the data favor coherence.

Liu, Pletenev, Haefner, and Snyder, "Task learning increases information redundancy of neural responses in macaque visual cortex," Science 391:1029 (2026).