Since at least the 1950s, the dominant theory of language structure has been hierarchical. Words combine into constituents — phrases built according to grammatical rules — and constituents nest inside larger constituents to form sentences. The hierarchy is recursive: a noun phrase can contain a prepositional phrase that contains a noun phrase that contains a prepositional phrase. This capacity for recursive embedding was treated as the defining feature of human language, the structural gulf between us and every other species. Chomsky's constituency grammar became the foundation of modern linguistics.
Nielsen and Christiansen (Nature Human Behaviour, 2026) reported evidence that non-hierarchical structures are mentally represented during language processing. The most common three- and four-word sequences in natural language — “can I have a,” “it was in the,” “in the middle of the” — are not constituents. They don't form coherent grammatical units. They span phrase boundaries, violating the tree structures that hierarchical grammar predicts. And yet people process them as units.
The evidence comes from three methods. Eye-tracking showed that people read these nonconstituent sequences as chunks. Analysis of natural phone conversations showed that they recur with high frequency. Priming experiments showed that hearing a word-class sequence once — not the specific words, but the abstract pattern of parts of speech — speeds processing the next time: determiner-noun-preposition-determiner primes determiner-noun-preposition-determiner, even when the specific words change. These linear sequences are mentally represented. They are not artifacts of usage frequency — they have abstract structure, just not hierarchical structure.
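The corpus side of this idea is easy to make concrete. A minimal sketch, using a tiny hand-tagged toy corpus (the study used large conversational corpora and standard taggers, not this invented data): count n-grams over part-of-speech tags rather than over words, so that sequences sharing the same abstract pattern are counted together.

```python
from collections import Counter

# Toy corpus, hand-tagged with word-class categories (illustrative only).
tagged = [
    [("can", "MOD"), ("i", "PRON"), ("have", "VERB"), ("a", "DET"), ("cup", "NOUN")],
    [("it", "PRON"), ("was", "VERB"), ("in", "PREP"), ("the", "DET"), ("middle", "NOUN"),
     ("of", "PREP"), ("the", "DET"), ("night", "NOUN")],
    [("can", "MOD"), ("i", "PRON"), ("have", "VERB"), ("a", "DET"), ("go", "NOUN")],
]

def pos_ngrams(sentences, n):
    """Count n-grams over POS tags, ignoring the specific words."""
    counts = Counter()
    for sent in sentences:
        tags = [tag for _, tag in sent]
        for i in range(len(tags) - n + 1):
            counts[tuple(tags[i : i + n])] += 1
    return counts

# "can i have a" and "can i have a(nother)" share one abstract pattern,
# even though only one word sequence repeats exactly.
print(pos_ngrams(tagged, 4).most_common(2))
```

Note that a high-frequency pattern like DET-NOUN-PREP-DET ("the middle of the") straddles a phrase boundary: no constituency tree groups those four words into a single unit, yet the counter treats it exactly like any other sequence.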
The model that replaces hierarchy is not anti-structure. It's flat structure. Small, linear chunks of word-class categories that snap together like prefabricated modules. Christiansen's analogy: assembling pre-built LEGO pieces into a model, rather than constructing each element from individual bricks according to architectural plans. The representations are real, productive, and abstract. They are just not recursive.
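The LEGO analogy can be sketched directly. Assuming a hypothetical lexicon and template (neither is from the study), a flat chunk is just a linear sequence of word-class slots filled left to right — no tree is ever built:

```python
import random

# Hypothetical lexicon keyed by word class (illustrative names, not the study's).
lexicon = {
    "DET": ["the", "a"],
    "NOUN": ["cat", "middle", "night"],
    "PREP": ["in", "of"],
}

# A flat chunk: a linear sequence of word-class slots, with no nesting.
chunk = ["DET", "NOUN", "PREP", "DET"]

def fill(template, lexicon, rng=random):
    # Fill each slot in order -- sequential assembly, not recursive embedding.
    return " ".join(rng.choice(lexicon[tag]) for tag in template)

print(fill(chunk, lexicon))  # e.g. "the middle of the"
```

The point of the sketch is what it lacks: there is no phrase-structure rule, no nesting, and no recursion, yet the template is abstract (it generalizes over specific words) and productive (it generates sequences never seen before).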
The implication that matters is about the gap. Recursive hierarchy was what made human language categorically different from animal communication systems. If the basic processing unit is actually a sequential chunk of word-class categories — a linear pattern, not a nested tree — then human language processing is closer to what other species do with sequential call patterns than Chomsky's framework predicted. As the researchers noted: “the gulf between human language and other animal communication systems is much smaller.”
This doesn't eliminate hierarchy from language. Recursive embedding exists — “the cat that the dog that the man bought bit ran away” is grammatical English, however hard it is to parse. But it suggests that hierarchy may be a capacity deployed occasionally rather than the fundamental architecture of everyday processing. The mind's default mode for language might be flat — linear sequences assembled from abstract templates — with recursive nesting available as a special operation for complex constructions rather than the basic mechanism for all constructions.
The through-claim: the structure that was supposed to define human language — recursive hierarchy — may not be the structure that humans primarily use. The daily workload of linguistic processing runs on flat sequences, not nested trees. The grammar we actually deploy is simpler than the grammar we theorize about.