The Smaller Learner

The central idea in learning theory — from Hebb's 1949 postulate to modern deep learning — is that connections between neurons change when the neurons fire together. “Neurons that fire together wire together.” The rule operates at the level of the neuron: one cell, one learning signal, one update rule applied uniformly to that cell's connections.
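The classical rule can be stated in a few lines. This is a toy sketch of my own, not anything from the paper: a single linear "cell" whose every synapse is updated by the same signal, the product of presynaptic activity and the cell's own output (the learning rate `eta` and the linear activation are illustrative choices).

```python
import numpy as np

def hebbian_step(w, pre, eta=0.01):
    """Classical Hebbian update: dw_i = eta * pre_i * post.

    `post` is the cell's own output, so every synapse on the cell
    shares one learning signal -- one cell, one rule, applied uniformly.
    """
    post = float(pre @ w)          # the cell "fires" as a linear sum
    return w + eta * pre * post    # fire together, wire together

# Example: an input that correlates with the output gets strengthened.
w = np.array([1.0, 0.0, 0.0])
pre = np.array([1.0, 1.0, 0.0])
w = hebbian_step(w, pre)           # only the active synapses change
```

Note that the update for synapse *i* depends on the whole cell only through `post`; that shared dependence is exactly what the new data complicate.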

Wright, Hedrick, and Komiyama (Science, 2025) imaged individual synapses in the mouse motor cortex during learning, with resolution precise enough to track which connections strengthened, which weakened, and where on the neuron each connection lived. They found that synapses on the apical dendrites — the branches reaching toward the brain's surface — followed different plasticity rules than synapses on the basal dendrites — the branches extending deeper. Same neuron, same learning task, different rules applied to different physical locations on the cell.

The two compartments do not merely differ in degree, responding faster or slower to the same signal; they follow qualitatively different algorithms. Some synapses strengthened according to the Hebbian rule: correlated activity between the pre- and post-synaptic neurons drove the change. Other synapses, on the same neuron, changed by rules that were independent of the neuron's own firing. The updates were uncorrelated with the cell's output. Whatever was driving those changes, it wasn't the classical fire-together-wire-together mechanism.

The structural insight is about the unit of analysis. Computational neuroscience models the neuron as the atomic learning element — a node that integrates inputs, produces an output, and updates its input weights by a single rule. Artificial neural networks formalize this directly: each node, one activation function, one gradient update. The biological neuron is evidently not this. It is a composite of compartments, each applying a different learning rule to its own inputs, producing a richer computation than any single-rule node can achieve.
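The composite unit can be sketched as a toy model. This is my construction, not the paper's: the basal compartment learns by the classical Hebbian rule, gated by the cell's output, while the apical compartment updates by a rule independent of that output, here driven by an arbitrary external `modulator` signal standing in for whatever the actual mechanism is.

```python
import numpy as np

class TwoCompartmentUnit:
    """Toy 'neuron' with two synaptic compartments under different rules."""

    def __init__(self, n_basal, n_apical, eta=0.01, seed=0):
        rng = np.random.default_rng(seed)
        self.w_basal = rng.normal(0.0, 0.1, n_basal)
        self.w_apical = rng.normal(0.0, 0.1, n_apical)
        self.eta = eta

    def forward(self, x_basal, x_apical):
        # Both compartments contribute to a single output, as in a point neuron.
        return float(x_basal @ self.w_basal + x_apical @ self.w_apical)

    def learn(self, x_basal, x_apical, modulator):
        post = self.forward(x_basal, x_apical)
        # Basal synapses: classical Hebbian, gated by the cell's own output.
        self.w_basal += self.eta * x_basal * post
        # Apical synapses: update does NOT depend on `post` at all --
        # only on presynaptic activity and an external signal.
        self.w_apical += self.eta * x_apical * modulator
        return post
```

The point of the sketch is structural: a single-rule node cannot express this, because the apical weight change is decoupled from the quantity (`post`) that drives the basal one. The unit is, in miniature, two learners sharing one output.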

The practical consequence: a single biological neuron may encode something closer to a small network than a single node. The complexity that machine learning achieves by stacking millions of simple units, the brain may achieve in part by making each unit internally complex. The neuron isn't the atom. The dendrite is.