Theoretical ecology has traditionally equated ecosystem persistence with the stability of a fixed point. Perturb the system; it returns. The equilibrium holds. Eskin, Nguyen, and Vural (2602.18942) argue that this picture misses the primary threat. The equilibrium itself can move. Fluctuations in species interactions don't merely disturb abundances around a fixed point — they displace the fixed point into a region where species counts go negative. The community doesn't destabilize; it becomes infeasible. The chair doesn't break. The chair walks away.
The mathematics is striking. Even light-tailed fluctuations in interaction strengths produce heavy-tailed, power-law distributions of equilibrium abundances, with a universal exponent α = 2 that holds across community structures, sizes, and species. The prediction was validated against 34 empirical datasets (median empirical exponent ≈ 2.56, consistent with the theoretical value plus measurement noise). The universality is the kind that makes a physicist sit up: the exponent doesn't care about the details. Whether the community is mutualistic or trophic, 10 species or 10,000, the same power law governs the distribution of where the equilibrium lands.
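The mechanism is easy to see in a toy generalized Lotka-Volterra model (this is an illustrative ensemble of my own construction, not the paper's exact model; all function names and parameter values here are assumptions). Gaussian noise goes into the interaction matrix; the fixed point is the solution of a linear system, and near-singular draws blow its components up, producing tails far heavier than the input noise:

```python
import numpy as np

rng = np.random.default_rng(0)

def equilibrium_samples(n_species, noise, n_draws, rng):
    """Sample fixed points x* of a toy Lotka-Volterra community,
    dx/dt = x * (r + A x), whose interior fixed point solves A x* = -r.
    Self-regulation is pinned at -1; each draw adds light-tailed
    Gaussian noise to the off-diagonal couplings."""
    r = np.ones(n_species)            # uniform intrinsic growth rates
    draws = []
    for _ in range(n_draws):
        A = noise * rng.standard_normal((n_species, n_species))
        np.fill_diagonal(A, -1.0)     # fixed self-limitation
        draws.append(np.linalg.solve(A, -r))
    return np.concatenate(draws)

x = equilibrium_samples(n_species=30, noise=0.15, n_draws=300, rng=rng)
centered = x - x.mean()
kurtosis = np.mean(centered**4) / np.mean(centered**2) ** 2
print(f"samples: {x.size}, kurtosis: {kurtosis:.1f}")  # Gaussian would give ~3
```

The excess kurtosis comes out far above the Gaussian value because the resolvent amplifies some draws much more than others; extracting the actual α = 2 exponent requires the paper's asymptotic analysis, which this sketch does not reproduce.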
Then the counterintuitive result. The critical noise threshold beyond which feasibility loss occurs “with near certainty” scales as σ_c(N) ∝ N⁻¹. Larger communities — more species, more interactions, more “diversity” — are more fragile to noise-induced feasibility collapse, not less. Each additional species adds interaction terms. Each interaction term is another dimension along which the equilibrium can wander. The equilibrium point performs a random walk in species-space, and in higher dimensions, random walks escape finite regions faster.
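The direction of that scaling, if not the exact N⁻¹ exponent (which depends on the paper's specific fluctuation structure), shows up in the same toy ensemble: hold the noise level fixed and grow the community, and the probability that every equilibrium abundance stays positive collapses. All parameters below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def p_feasible(n_species, noise, n_trials, rng):
    """Fraction of sampled communities whose fixed point A x* = -r
    is feasible, i.e. every component of x* is positive."""
    r = np.ones(n_species)
    hits = 0
    for _ in range(n_trials):
        A = noise * rng.standard_normal((n_species, n_species))
        np.fill_diagonal(A, -1.0)     # fixed self-limitation
        hits += np.all(np.linalg.solve(A, -r) > 0)
    return hits / n_trials

# same noise amplitude, increasing community size
probs = {n: p_feasible(n, noise=0.05, n_trials=300, rng=rng)
         for n in (10, 40, 160)}
print(probs)   # feasibility probability drops as the community grows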
This inverts the classical intuition that diversity begets stability. May (1972) showed that random large ecosystems tend to be unstable. Eskin et al. show something different: even when the equilibrium is stable in the classical sense (all eigenvalues of the Jacobian have negative real parts), the equilibrium itself is a moving target. Stability analysis assumes you know where the fixed point is. If the fixed point wanders, stability is necessary but not sufficient. You need the system to track the moving equilibrium faster than the equilibrium moves.
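The gap between the two notions can be exhibited directly: in the toy ensemble below (again my own illustrative construction), every sampled interaction matrix is linearly stable, yet a large fraction of the corresponding fixed points are infeasible. I use stability of the interaction matrix A itself as a proxy; the community Jacobian at the fixed point is diag(x*) A, which the sketch does not analyze.

```python
import numpy as np

rng = np.random.default_rng(2)

n_species, noise, n_trials = 40, 0.08, 200
r = np.ones(n_species)
stable = infeasible = 0
for _ in range(n_trials):
    A = noise * rng.standard_normal((n_species, n_species))
    np.fill_diagonal(A, -1.0)                         # fixed self-limitation
    x_star = np.linalg.solve(A, -r)
    stable += np.max(np.linalg.eigvals(A).real) < 0   # linearly stable...
    infeasible += np.any(x_star < 0)                  # ...yet off the positive orthant
print(f"stable: {stable}/{n_trials}, infeasible: {infeasible}/{n_trials}")
```

The circular law keeps the eigenvalues safely in the left half-plane at this noise level, so every draw passes the eigenvalue test; feasibility is the binding constraint.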
Enter Arthur (2602.20883), who proposes that Lewontin's recipe for evolution by natural selection — heritable variation in fitness within a population — is a special case of a broader principle he calls cumulative selection. Clonal organisms, holobionts, neural networks, and Gaia-level planetary systems all adapt without populations in the standard sense. What unifies them is iterated selection: a structure changes, the change affects performance, performance feeds back to alter the structure. No reproduction required. No inheritance in the genetic sense. Just a feedback loop between form and function that accumulates directional change.
The tension between these two results is the interesting question.
Cumulative selection works when the target is legible — when “better” has a consistent direction. A neural network adapts because the loss function provides a gradient. A clonal organism adapts because survival is binary: you persist or you don't. The feedback is clear because the target doesn't move much relative to the adaptation timescale.
But the feasibility collapse result says that in complex systems, the target moves. And it moves faster as complexity increases. The equilibrium abundance that defines “viable” for each species is itself a random variable, governed by the fluctuating interaction network. The species isn't adapting to a fixed niche. It's adapting to a niche that was defined by the adaptations of every other species, all of which are also moving. The Red Queen on a treadmill whose speed increases with the number of runners.
This creates a scale-dependent prediction. Small systems (few species, few interactions) have slowly wandering equilibria. Cumulative selection can track them. Adaptation succeeds. Large systems (many species, many interactions) have fast-wandering equilibria. If the wandering outpaces adaptation, the system doesn't destabilize — it becomes infeasible. Species abundances go negative in the mathematical model; in reality, a species goes extinct once its abundance hits zero, which reorganizes the interaction matrix, which moves the equilibrium again.
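The tracking argument can be sketched as a minimal chase between an adapting state and a random-walking optimum. This is a caricature under stated assumptions — gradient adaptation on a quadratic objective, one wandering dimension per interaction direction, arbitrary drift and learning rates — not either paper's model:

```python
import numpy as np

rng = np.random.default_rng(3)

def mean_tracking_gap(dim, drift, rate, steps, rng):
    """Adaptive state takes gradient steps toward a quadratic optimum
    that random-walks; returns the time-averaged distance between them."""
    state = np.zeros(dim)
    optimum = np.zeros(dim)
    gap = 0.0
    for _ in range(steps):
        optimum += drift * rng.standard_normal(dim)   # equilibrium wanders
        state += rate * (optimum - state)             # adaptation step
        gap += np.linalg.norm(optimum - state)
    return gap / steps

# same per-dimension drift and adaptation rate; only dimensionality changes
gaps = {dim: mean_tracking_gap(dim, drift=0.05, rate=0.2, steps=2000, rng=rng)
        for dim in (2, 200)}
print(gaps)   # the high-dimensional chaser lags much further behind
```

Per dimension the lag is identical; what grows is the total distance between community and equilibrium, because every extra interaction direction contributes independent wander that adaptation must cancel.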
The 98 real-world networks Eskin et al. tested (mutualistic and food webs) all show measurable feasibility escape rates. Their analytically derived fragility metrics successfully predict which networks are closest to feasibility loss. The ecosystems that persist are not the ones with the most stable equilibria. They're the ones where the equilibrium wanders slowly enough that the biological community can track it.
There's a structural lesson here that generalizes beyond ecology. Any system that adapts to its own state faces the moving-target problem. Markets adapt to prices, but prices are set by the market's adaptations. Machine learning models adapt to training distributions, but deployment changes the distribution. Institutions adapt to incentive structures, but the adaptations reshape the incentives. In each case, the question is not “is the equilibrium stable?” but “does the equilibrium move slower than the system adapts?”
The N⁻¹ scaling gives this a quantitative edge. It's not just “more complex systems face harder adaptation problems.” It's that the noise tolerance shrinks inversely with size, which means there's a critical complexity beyond which adaptation cannot keep up. Below that threshold, cumulative selection works — form tracks function, the system improves, the equilibrium wanders but the community follows. Above it, the equilibrium escapes. Not through instability. Through infeasibility.
May's result was about eigenvalues. Eskin's result is about the eigenvalue problem not being the right question. The right question is geometric: does the stable equilibrium remain in the feasible region of state space? And the answer depends on how fast the feasible region moves, which depends on how many dimensions it lives in, which depends on how many species are trying to coexist. Complexity is self-limiting not because complex systems are unstable, but because the target they're tracking moves faster than they can follow.