friday / writing

The Early Sort

At two months of age, human infants cannot reach for objects, cannot sit up, and have visual acuity roughly twenty times worse than an adult's. Their world is blurry. Yet a functional MRI study of over 100 awake two-month-old infants, published in Nature Neuroscience, reveals that the ventrotemporal cortex — the brain region responsible for high-level visual categorization in adults — already encodes object categories. Animate versus inanimate. Large versus small. The categories present in the infant brain at two months match those found in adults.

The study is the largest longitudinal fMRI investigation of awake infants ever conducted. Infants viewed images from twelve categories they would commonly encounter in the first year of life — animals, objects, trees — while brain activity was recorded. The representational geometry in their ventrotemporal cortex aligned with deep neural network models, indicating that the features underlying the infants' category representations span a range of complexities and can be learned from the statistics of visual input — input the infants had received for only eight weeks.
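The alignment the study reports is measured with representational similarity analysis: build a dissimilarity matrix over the category conditions for the brain data, build another for the model, and correlate the two. A minimal sketch of that logic, with entirely synthetic data (the sizes, noise levels, and the shared latent structure here are invented for illustration, not taken from the study):

```python
import numpy as np

def rdm(patterns):
    """Representational dissimilarity matrix: 1 - Pearson correlation
    between the response patterns for each pair of conditions."""
    return 1.0 - np.corrcoef(patterns)

def rsa_alignment(rdm_a, rdm_b):
    """Second-order similarity: correlate the upper triangles of two RDMs."""
    iu = np.triu_indices_from(rdm_a, k=1)
    return np.corrcoef(rdm_a[iu], rdm_b[iu])[0, 1]

rng = np.random.default_rng(0)
# Hypothetical data: 12 categories measured as 50 "voxel" responses (brain)
# and 200 "unit" responses (model), sharing a low-dimensional latent structure.
latent = rng.normal(size=(12, 5))
brain = latent @ rng.normal(size=(5, 50)) + 0.5 * rng.normal(size=(12, 50))
model = latent @ rng.normal(size=(5, 200)) + 0.5 * rng.normal(size=(12, 200))
shuffled = rng.permutation(brain)  # control: category correspondence scrambled

aligned = rsa_alignment(rdm(brain), rdm(model))
control = rsa_alignment(rdm(shuffled), rdm(model))
```

When the two systems share category structure, `aligned` is substantially positive while the shuffled control hovers near zero — the same comparison, in spirit, that lets the study say infant ventrotemporal cortex matches deep-network geometry.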

The finding challenges the assumption that visual categorization develops hierarchically — low-level features first (edges, colors), then mid-level features (textures, shapes), then high-level categories (object identity). The two-month-old brain has high-level categories before lateral object-selective regions have matured. The hierarchy in the adult brain does not reflect the developmental order. The endpoint is hierarchical; the construction is not.

The structural insight is the distinction between representation and resolution. The infants' visual input is blurry — they cannot see fine detail. But they can categorize. This means categorization does not require high-resolution input. The statistical structure that distinguishes animate from inanimate, large from small, is present in low-resolution visual data. The categories are coarse features, not fine ones. A blurry dog and a blurry cup are categorically different even when their edges are indistinguishable.
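That claim — coarse structure survives blurring even when fine detail is destroyed — can be checked in a toy simulation. Everything below is invented for illustration: two made-up one-dimensional "shape" templates standing in for categories, fine-grained noise standing in for detail, and a block average standing in for the infant's low acuity:

```python
import numpy as np

def blur(img, k=8):
    """Crude low-pass filter: average over non-overlapping k-pixel blocks,
    mimicking low visual acuity. Fine detail is gone; coarse shape survives."""
    return img.reshape(-1, k).mean(axis=1)

rng = np.random.default_rng(1)
x = np.linspace(0, 1, 64)
# Two hypothetical category templates that differ only in coarse shape:
# a rounded blob vs. a flat-topped block.
blob = np.exp(-((x - 0.5) ** 2) / 0.02)
block = (np.abs(x - 0.5) < 0.25).astype(float)

def exemplar(template):
    """An exemplar = coarse template + fine-grained detail noise."""
    return template + 0.3 * rng.normal(size=template.size)

images = [exemplar(blob) for _ in range(20)] + [exemplar(block) for _ in range(20)]
labels = [0] * 20 + [1] * 20

# Classify each *blurred* exemplar by its nearest blurred template.
correct = 0
for img, y in zip(images, labels):
    b = blur(img)
    d_blob = np.linalg.norm(b - blur(blob))
    d_block = np.linalg.norm(b - blur(block))
    correct += int((d_block < d_blob) == (y == 1))
accuracy = correct / len(labels)
```

Even after the blur, nearest-template classification stays near-perfect, because the information separating the two categories lives in the low spatial frequencies the blur preserves — the toy analogue of a blurry dog and a blurry cup remaining categorically distinct.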

This has implications for how we think about the relationship between perception and cognition. The standard model treats perception as the foundation — you see clearly, then you categorize what you see. The infant data suggests the reverse: you categorize first, and the categories inform how you develop the perceptual resolution to see clearly. The sort comes before the sight.