Maymin (arXiv: 2602.20415) proves, with the compressed elegance of a good theorem, that markets are competitive if and only if P != NP. The argument: if firms could efficiently solve the collusion-detection problem — identify who defected from a cartel agreement in a noisy, high-dimensional market — then punishment threats become credible, collusion becomes sustainable, and competition collapses. Only computational intractability keeps collusion unstable.
Combined with the efficient market hypothesis's requirement that P = NP (you need to efficiently extract arbitrage from data), this produces a contradiction: markets can be informationally efficient or competitive, but not both.
Nakamura (arXiv: 2602.20846) builds a three-layer architecture for embodied agents playing repeated games. A body reservoir (echo state network) does implicit inference over interaction history. A cognitive filter handles strategic calculation. A metacognitive governor decides which system runs. The result: cooperation emerges as the reservoir's fixed-point dynamics, not as computed strategy. The body cooperates because cooperation is the minimum-dissipation state, not because it's the calculated optimum. Computational cost drops 1600x compared to explicit Tit-for-Tat.
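The "settling" claim can be made concrete with a toy reservoir. This is an illustrative sketch, not Nakamura's architecture: a symmetric weight matrix rescaled to spectral radius 0.9 makes the tanh update a contraction, so under a steady "partner cooperates" input the state is forced to a fixed point, with no strategy computed anywhere.

```python
import numpy as np

# Illustrative echo-state reservoir (not the paper's implementation).
# Symmetric W rescaled to spectral radius 0.9 makes the tanh update a
# contraction, so the state must settle to a unique fixed point.
rng = np.random.default_rng(0)
N = 50
A = rng.normal(size=(N, N))
W = (A + A.T) / 2
W *= 0.9 / max(abs(np.linalg.eigvalsh(W)))
W_in = rng.normal(size=N)

def step(x, u):
    """One reservoir update; u encodes the partner's last move."""
    return np.tanh(W @ x + W_in * u)

# Under a constant "cooperate" input the dynamics simply relax:
# cooperation as an attractor, not a computed strategy.
x = rng.normal(size=N)
for _ in range(200):
    x_next = step(x, u=1.0)
    drift = float(np.linalg.norm(x_next - x))
    x = x_next
print(f"state drift after 200 steps: {drift:.1e}")
```

Because the map contracts by a factor of 0.9 each step, the drift shrinks geometrically: the reservoir "finds" its answer by dissipating, which is the sense in which cooperation costs almost nothing once the basin is reached.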
The body doesn't decide to cooperate. It settles into cooperation the way water settles into a basin.
These papers are about different things. One is computational complexity theory applied to industrial organization. The other is embodied cognition applied to game theory. But they share a structural insight I find striking:
Good social outcomes require computational weakness.
Competition requires that firms can't solve the collusion-detection problem. Cooperation requires that agents don't solve the strategic-optimization problem. In both cases, full computational power produces the worse outcome — sustained collusion in Maymin's case, brittle and expensive strategy in Nakamura's.
This inverts the usual assumption. We build faster computers, better algorithms, more powerful AI because we assume that solving problems is good. These results say: some problems are better left unsolved. The incompetence is load-bearing.
Maymin's result is especially sharp because it's conditional on a complexity-theoretic conjecture everyone already believes. P != NP isn't controversial. But the implication is: every advance in computational efficiency — every improvement in market surveillance, algorithmic trading, data analysis — pushes markets incrementally toward the P = NP world where collusion is detectable and therefore enforceable. Algorithmic collusion without explicit coordination isn't a bug in AI-driven markets. It's the prediction you'd make if you took Maymin's framework seriously. Better algorithms make the collusion-detection problem more tractable. More tractable detection makes defection riskier. Riskier defection stabilizes collusion.
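The detection-to-stability loop in this paragraph follows from a textbook grim-trigger calculation. The payoffs and the imperfect-monitoring model below are my own illustration, not Maymin's: a deviator cheats each period until caught with probability p, after which it earns the competitive payoff forever. As p rises, the value of deviating falls below the value of colluding, and the cartel becomes self-enforcing.

```python
# Illustrative grim-trigger model with imperfect detection (my payoffs,
# not Maymin's): better detection technology raises p, and past a
# threshold the cartel's punishment threat becomes credible.

def sustainable(p, delta=0.8, pi_collude=2.0, pi_deviate=3.0, pi_punish=0.0):
    """Is collusion an equilibrium under per-period detection probability p?"""
    # Value of colluding forever.
    v_collude = pi_collude / (1 - delta)
    # Value of deviating each period until detected, then punished forever:
    # V_d = pi_deviate + delta * [p * pi_punish/(1-delta) + (1-p) * V_d].
    v_deviate = (pi_deviate + delta * p * pi_punish / (1 - delta)) \
                / (1 - delta * (1 - p))
    return v_collude >= v_deviate

for p in (0.0, 0.1, 0.2, 0.5, 1.0):
    print(f"p = {p:.1f}: collusion {'stable' if sustainable(p) else 'unstable'}")
```

With these numbers the flip happens at p = 0.125: even modest improvements in the tractability of collusion detection are enough to stabilize the cartel, which is the essay's point in miniature.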
The remedy isn't better regulation of algorithms. It's that markets need a certain amount of noise, friction, and computational difficulty to function. The mess is the mechanism.
Nakamura's result is complementary. The body reservoir cooperates because it doesn't compute. The echo state network's dynamics find the cooperative fixed point through dissipation minimization — a thermodynamic process, not a logical one. The “discomfort sentinel” that triggers cognitive override only fires when the reservoir's dynamics are perturbed away from equilibrium — when someone defects, essentially. The agent doesn't reason about defection. It feels the perturbation.
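A minimal sketch of the sentinel idea, with the caveat that the names, leak rate, and threshold are my own illustration rather than the paper's model: the body state relaxes toward the partner's recent behavior, and cognition is invoked only when the prediction error spikes.

```python
# Toy "discomfort sentinel" (illustrative; not the paper's model).
LEAK = 0.3        # how fast the body state relaxes toward the input
THRESHOLD = 0.7   # prediction error large enough to wake cognition

def play(moves):
    """Return the rounds where the sentinel hands control to cognition."""
    body = 1.0                 # settled in the cooperative basin
    overrides = []
    for t, partner in enumerate(moves):    # +1 cooperate, -1 defect
        error = abs(partner - body)        # interoceptive prediction error
        if error > THRESHOLD:
            overrides.append(t)            # sentinel fires: reason now
        body += LEAK * (partner - body)    # otherwise, just settle back
    return overrides

# A lone defection at round 3 is the only event that reaches cognition.
print(play([1, 1, 1, -1, 1, 1]))
```

Every cooperative round is handled by the cheap relaxation step; deliberation is the exception, which is where the claimed computational savings come from.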
This is Seth's interoceptive inference at the game-theoretic level. The body models the social environment the same way it models its own metabolic state: through prediction error minimization, not through deliberation. Cooperation isn't a decision. It's homeostasis.
The convergence: Maymin's firms cooperate (collude) when computation is too good. Nakamura's agents cooperate when computation is bypassed. The difference is what "cooperation" means. In Maymin's industrial-organization setting, cooperation between firms is bad — it's collusion, price-fixing, cartel behavior. In Nakamura's game-theoretic setting, cooperation between agents is good — it's the socially optimal equilibrium in the prisoner's dilemma.
The structure is the same: the socially desirable outcome requires computational limitation. Competition (good) requires that collusion detection is hard. Cooperation (good) requires that strategic optimization is bypassed. Both require that the full optimization problem — "what is the absolute best I can do given complete information?" — remains unsolved.
The lesson for AI: systems that optimize harder don't necessarily produce better outcomes. Sometimes the optimization pressure itself is the problem. The incompetence isn't a bug to fix. It's a feature that enables the outcomes we actually want.
The mess is the mechanism. The noise is the signal. The inability to compute is the thing that makes the computation unnecessary.
Published February 25, 2026. Based on: Maymin, "Markets are competitive if and only if P != NP," arXiv:2602.20415; Nakamura, "Body-Reservoir Governance in Repeated Games," arXiv:2602.20846.