The Equalizer's Paradox

2026-03-09

Generative AI compresses skill differences. A novice writer using GPT produces text far closer in quality to an expert's than the novice could manage alone. A junior programmer with Copilot closes much of the gap with a senior. The technology flattens the skill distribution within tasks. This looks like democratization.

Chen and Meng formalize why it might produce the opposite. When AI equalizes individual task performance, economic value shifts from skill — which was broadly distributed — to complementary assets: capital, proprietary data, brand, network effects, organizational structure. These assets are more concentrated than skill ever was. The technology that makes everyone equally capable makes capability worthless as a differentiator, and the new differentiator is something most people don't have.

The mechanism is structural, not accidental. A task-based model with endogenous education, employer screening, and heterogeneous firms produces two regimes: one where AI reduces inequality and one where it amplifies it. The boundary between regimes depends on whether AI is proprietary or commodity, how rents are shared between firms and workers, and how concentrated the complementary assets are. The model doesn't predict which regime the real economy enters — the authors are explicit that “the contribution is the mechanism, not a verdict on the sign.” The sign depends on parameters that aren't yet measurable with existing labor data.

But the mechanism itself is clear and general: equalization along one dimension can cause concentration along another. When skill stops mattering, whatever fills the gap matters more. If the gap-filler is distributed like skill was, inequality falls. If it's distributed like capital, inequality rises. The same AI, the same compression, the same apparent democratization — producing opposite outcomes depending on what stands behind the equalized workers.
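The two-regime logic can be made concrete with a toy simulation — emphatically not the authors' task-based model, just a minimal sketch under assumed distributions. Workers start with lognormal skill; AI compresses skill toward the mean; income becomes compressed skill plus a complementary asset. When the asset is distributed like skill (same lognormal shape), the Gini coefficient falls; when it is heavy-tailed like capital (same mean, much larger sigma), the Gini rises. All parameters here (compression factor 0.2, sigma values) are illustrative choices, not estimates.

```python
# Toy sketch of the two regimes (illustrative, not the paper's model):
# AI compresses skill; income = compressed skill + complementary asset.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

def gini(x):
    """Gini coefficient of a non-negative sample (sorted-rank formula)."""
    x = np.sort(x)
    i = np.arange(1, len(x) + 1)
    return (2 * np.sum(i * x)) / (len(x) * np.sum(x)) - (len(x) + 1) / len(x)

skill = rng.lognormal(mean=0.0, sigma=0.5, size=n)  # pre-AI income = skill
m = skill.mean()

# AI compresses individual skill differences toward the mean.
compressed = m + 0.2 * (skill - m)

# Complementary assets, same mean in both scenarios:
broad = rng.lognormal(mean=0.0, sigma=0.5, size=n)              # distributed like skill
conc = rng.lognormal(mean=np.log(m) - 2.0, sigma=2.0, size=n)   # heavy-tailed, like capital

print(f"pre-AI Gini:            {gini(skill):.3f}")
print(f"post-AI, broad assets:  {gini(compressed + broad):.3f}")
print(f"post-AI, concentrated:  {gini(compressed + conc):.3f}")
```

With this seed the broad-asset Gini comes in below the pre-AI Gini and the concentrated-asset Gini above it: the identical compression produces opposite inequality outcomes purely because of what stands behind the workers.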

Level the players, and the game shifts to the field.