Languages lose distinctions. Voicing disappears at the end of German words — “Rad” (wheel) and “Rat” (advice) are spelled differently but pronounced identically. Vowel contrasts collapse at the end of Catalan words. Tone distinctions flatten at the end of Mandarin compounds. The technical term is phonological neutralization: a contrast that exists in one position disappears in another. Two sounds that are distinct in the middle of a word become identical at its edge.
Adam Ussishkin and colleagues examined all active neutralization rules in fifty areally and genetically diverse languages, identifying every rule that targets the edge of a lexical domain — root, stem, word, phrase, or utterance. The finding: neutralization overwhelmingly targets endings over beginnings. Across all fifty languages, rules that blur distinctions at the end of a domain vastly outnumber rules that blur distinctions at the beginning. The asymmetry is not an artifact of syllable structure (codas being more vulnerable than onsets) or morphological preference (suffixing versus prefixing). It holds independently of both.
The proposed explanation draws on how listeners identify words. When you hear the beginning of a word, many candidates are still possible. “Ca-” could become “cat,” “cab,” “cap,” “can,” “cage,” “cake,” and hundreds of others. Each subsequent sound eliminates candidates. By the time you reach the final sound, the word has usually already been identified — the ending is confirmatory rather than discriminative. The information content of phonological categories is highest at the beginning, where each sound does the most work narrowing the candidate set, and lowest at the end, where identification is usually already complete.
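The narrowing process can be sketched in a few lines of code. The word list below is a hypothetical toy lexicon invented for illustration (spelled letters stand in for sounds); it is not data from the study.

```python
# Cohort-style narrowing over a toy lexicon (hypothetical words,
# chosen only to illustrate the idea; letters stand in for sounds).
lexicon = ["cat", "cab", "cap", "can", "cage", "cake",
           "camera", "candle", "cabin", "dog", "doctor", "dolphin"]

def cohort(prefix, words):
    """Candidates still consistent with the segments heard so far."""
    return [w for w in words if w.startswith(prefix)]

# Trace recognition of one word, segment by segment.
word = "camera"
for i in range(1, len(word) + 1):
    remaining = cohort(word[:i], lexicon)
    marker = "  <- uniquely identified" if len(remaining) == 1 else ""
    print(f"{word[:i]:<7} {len(remaining):>2} candidate(s){marker}")
```

On this list, “camera” is already unique after its third segment; the last three sounds eliminate no candidates at all, which is the sense in which an ending is confirmatory rather than discriminative.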
Neutralization follows this information gradient. Languages tolerate ambiguity where it costs least — at the position where the listener has already identified the word. They preserve distinctions where ambiguity would be most disruptive — at the position where identification is still in progress. The system's error budget is allocated by information density. The expendable positions are the ones that carry the least identification load.
This is not a conscious design. No speaker decides to blur word-final distinctions. The asymmetry emerges from thousands of years of incremental sound change filtered by communicative success. Changes that blur high-information positions are more likely to cause misunderstanding, and so more likely to be selected against. Changes that blur low-information positions pass through the filter because they cost the listener almost nothing. The result is a universal statistical bias: languages converge on protecting their beginnings and spending their endings.
The structural observation extends beyond phonology. In any sequential signal processed incrementally — a word, a genome, a message, a navigation route — the information content is not uniform across positions. Early positions constrain interpretation more than late ones because they set the context within which later positions are read. Systems that allocate precision according to this gradient — preserving early distinctions, allowing late ones to blur — are not wasting the ending. They are budgeting accuracy where it produces the most return.