
Stable and Extinct

Eskin, Nguyen, and Vural (arXiv 2602.18942) prove something ecologists should have expected but didn't: the primary threat to ecosystem persistence isn't instability. It's the equilibrium moving somewhere impossible.

The standard framework is Lotka-Volterra: N species with abundances x_i, interacting through a matrix A. The equilibrium is x = -A^{-1}r, where r is the vector of intrinsic growth rates. Stability means perturbations to abundances decay back to equilibrium. Feasibility means the equilibrium has all positive components — every species has a positive abundance. You can have stability without feasibility. The system can be perfectly stable, trajectories converging smoothly, while the point they converge to has some species at negative abundance. Mathematically elegant, biologically extinct.
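A minimal numpy sketch of that split. The two-species numbers are invented for illustration, and I'm using the spectrum of A itself as the stability check (the usual shortcut when A carries the self-regulation):

```python
import numpy as np

# Invented 2-species community. The off-diagonal part is antisymmetric,
# so the eigenvalues of A are -1 +/- 0.6i: every perturbation decays.
A = np.array([[-1.0,  0.6],
              [-0.6, -1.0]])
r = np.array([1.0, -0.5])

x = -np.linalg.solve(A, r)                       # equilibrium x = -A^{-1} r

stable = np.all(np.linalg.eigvals(A).real < 0)   # do perturbations decay?
feasible = np.all(x > 0)                         # is every abundance positive?

print(x, stable, feasible)
# [ 0.51470588 -0.80882353] True False  -- stable, but species 2 sits below zero
```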

What happens when the interaction matrix fluctuates? Not the abundances — the interactions themselves. Temperature shifts, seasonal changes, stochastic variation in who eats whom and how efficiently. A(t) = A + δA(t). The equilibrium moves: y(t) = -[A + δA(t)]^{-1}r. Small fluctuations in A produce fluctuations in y that are amplified by the matrix inverse.
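A quick way to see the amplification numerically. Everything here is an assumption made for the sketch: a baseline of strong self-regulation plus weak random coupling, and Gaussian structural noise with entries scaled by 1/√N:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 20
A = -np.eye(N) + 0.2 * rng.standard_normal((N, N)) / np.sqrt(N)
r = np.ones(N)
x = -np.linalg.solve(A, r)            # unperturbed equilibrium

sigma = 0.01                          # amplitude of the structural noise dA
gain = []
for _ in range(1000):
    dA = sigma * rng.standard_normal((N, N)) / np.sqrt(N)
    y = -np.linalg.solve(A + dA, r)   # equilibrium of the fluctuated system
    # relative shift of the equilibrium per unit relative perturbation of A
    gain.append((np.linalg.norm(y - x) / np.linalg.norm(x))
                / (np.linalg.norm(dA) / np.linalg.norm(A)))

print(np.mean(gain))                  # well above 1: the inverse amplifies
```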

The paper's first result: the equilibrium abundance of any species follows a power law P(y) ~ 1/y^2. The exponent α = 2 is universal — independent of interaction structure, community size, or which species you measure. This universality comes from the mathematics of matrix inversion: the joint distribution of equilibrium abundances behaves as a homogeneous function of degree -(N+1), and marginalizing over N-1 species contributes a factor y^{N-1}, leaving y^{-2}.
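The exponent can be checked by brute force. The setup below is my own toy version, pure self-regulation (A = -I) with structural noise strong enough that A + δA occasionally wanders near singularity, plus a Hill estimator on the upper tail:

```python
import numpy as np

rng = np.random.default_rng(1)
N, sigma, trials = 10, 1.5, 200_000
A, r = -np.eye(N), np.ones(N)         # simplest self-regulating community

ys = np.empty(trials)
for t in range(trials):
    dA = sigma * rng.standard_normal((N, N)) / np.sqrt(N)
    ys[t] = -np.linalg.solve(A + dA, r)[0]   # species 0's equilibrium position

# Hill estimator on the top ~2000 positive samples: for a density tail
# ~ 1/y^alpha, alpha = 1 + 1/mean(log(y / threshold)).
tail = np.sort(ys[ys > 0])[-2001:]
alpha = 1.0 + 1.0 / np.mean(np.log(tail[1:] / tail[0]))
print(alpha)                          # should land near the predicted 2
```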

What does this mean? A power law with α = 2 sits at the edge of integrability: the variance is infinite, and even the mean diverges, logarithmically. The typical equilibrium abundance (the median) is well-defined, but the fluctuations around it have no characteristic scale. There is always a non-negligible probability of the equilibrium swinging to extreme values — including negative ones.
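A quick sanity check on what an α = 2 tail does to sample statistics, using synthetic Pareto draws (nothing from the paper, just the distribution):

```python
import numpy as np

rng = np.random.default_rng(2)
# Pareto draws whose density falls off as 1/y^2 (numpy's pareto(1), shifted)
y = 1.0 + rng.pareto(1.0, 10_000_000)

for n in (10**3, 10**5, 10**7):
    print(n, y[:n].mean())   # the running mean never settles; it creeps up ~ log n
```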


The second result is the one that should worry ecosystem managers. The critical noise level beyond which feasibility loss becomes nearly certain scales as σ_c ~ 1/N. A community of 100 species crashes at one-tenth the noise that a community of 10 species tolerates. Larger ecosystems are more fragile to feasibility loss, not less.
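Here's how you might probe that numerically. One caveat baked into the sketch: the exact exponent you recover depends on how the noise is normalized, so this toy setup (A = -I, per-entry noise amplitude σ) should only be read as showing the qualitative point, the critical amplitude falling as the community grows:

```python
import numpy as np

rng = np.random.default_rng(3)

def loss_prob(N, sigma, trials=400):
    """Fraction of noise draws pushing the equilibrium out of the positive orthant."""
    A, r = -np.eye(N), np.ones(N)
    return sum(np.any(-np.linalg.solve(
        A + sigma * rng.standard_normal((N, N)), r) <= 0)
        for _ in range(trials)) / trials

def sigma_c(N):
    """Smallest noise amplitude at which feasibility loss is more likely than not."""
    for s in np.linspace(0.005, 0.5, 100):
        if loss_prob(N, s) > 0.5:
            return s
    return np.nan

for N in (5, 10, 20, 40, 80):
    print(N, sigma_c(N))     # the critical amplitude falls steadily with N
```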

This inverts the popular intuition that biodiversity provides insurance. More species means more redundancy means more robustness, right? Not for this failure mode. More species means more ways for the equilibrium to escape the positive orthant: each additional species adds another boundary to cross, and the matrix A^{-1} becomes more sensitive to perturbation as its dimension grows.
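The same toy model makes the point about directions concrete: hold the noise amplitude fixed and watch the equilibrium's most-exposed species sink toward zero as N grows:

```python
import numpy as np

rng = np.random.default_rng(4)
sigma, trials = 0.05, 2000           # fixed structural noise amplitude

for N in (5, 20, 80):
    A, r = -np.eye(N), np.ones(N)
    worst = np.array([(-np.linalg.solve(
        A + sigma * rng.standard_normal((N, N)), r)).min()
        for _ in range(trials)])
    # mean position of the lowest species, and how often it dips below zero
    print(f"N={N:3d}  lowest-species mean={worst.mean():.3f}  "
          f"P(infeasible)={(worst <= 0).mean():.4f}")
```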

Robert May's classic 1972 result showed that large random ecosystems are harder to stabilize: as size and connectance grow, the eigenvalue spectrum widens until some eigenvalue crosses into the right half of the complex plane. Eskin et al. prove the complementary result: even if you solve the stability problem, large ecosystems face a separate, equally devastating feasibility problem. You can stabilize the eigenvalues and still lose the community because the equilibrium point itself wanders into biological impossibility.


The empirical validation is convincing. Thirty-four species from lake ecosystems and laboratory mesocosms show power-law abundance distributions with median α ≈ 2.56. The theoretical prediction of α = 2 is a lower bound; real systems have higher exponents because Holling-type functional responses (saturating predation) compress the tails, pushing α toward m+1 where m is the Holling exponent (typically 1-3).

The risk assessment framework works: analytical predictions of which species will go extinct first match Monte Carlo simulations with concordance coefficients of 0.96 for food webs and 0.90 for mutualistic networks. The metric is simple — the ratio of equilibrium abundance to fluctuation amplitude, χ_i = x_i/s_i. Species with small χ_i are exponentially more likely to trigger feasibility loss. The most vulnerable species isn't necessarily the rarest; it's the one whose equilibrium position fluctuates most relative to its distance from zero.
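A sketch of that ranking on an invented community, with s_i estimated from the Monte Carlo spread of each species' equilibrium position:

```python
import numpy as np

rng = np.random.default_rng(5)
N, sigma, trials = 8, 0.08, 20_000
# Invented community: self-regulation, weak random coupling, varied growth rates.
A = -np.eye(N) + 0.2 * rng.standard_normal((N, N)) / np.sqrt(N)
r = rng.uniform(0.8, 1.6, N)
x = -np.linalg.solve(A, r)                  # unperturbed equilibrium

samples = np.array([-np.linalg.solve(A + sigma * rng.standard_normal((N, N)), r)
                    for _ in range(trials)])
s = samples.std(axis=0)                     # fluctuation amplitude per species
chi = x / s                                 # vulnerability ratio chi_i = x_i / s_i

# Among the infeasible draws, which species is the one that went negative?
bad = samples.min(axis=1) <= 0
share = np.bincount(samples[bad].argmin(axis=1), minlength=N) / max(bad.sum(), 1)

for i in np.argsort(chi):                   # most vulnerable (smallest chi) first
    print(f"species {i}: chi={chi[i]:.2f}  share of failures={share[i]:.3f}")
```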

What strikes me is the parallel to financial systems. The mechanism is identical: a portfolio of correlated assets has an "equilibrium value" that is the stable aggregate of its components. Market fluctuations perturb not just the prices but the correlations themselves. The matrix of return correlations plays the role of the interaction matrix, and its inverse amplifies perturbations the same way. The 2008 crash wasn't instability — housing prices didn't diverge. It was the equilibrium (the presumed safe level of leverage) moving past the point of solvency. Stable and bankrupt. And the σ_c ~ 1/N scaling has an immediate implication: more complex financial instruments (more "species" in the portfolio) make the system more fragile to correlation shifts, not less. Diversification, like biodiversity, protects against abundance-level noise but not against structural noise in the interaction matrix.

The lesson: stability analysis is necessary but not sufficient. You can prove that your system returns to equilibrium after perturbation and still miss the fact that the equilibrium itself is moving toward failure. The diagnostics that measure stability (eigenvalues, Lyapunov exponents, convergence rates) are blind to feasibility loss. You need a different set of diagnostics — ones that track where the equilibrium is, not just whether you're converging to it.

Zhang and Li called this "stable but wrong." Eskin, Nguyen, and Vural call it feasibility loss. Same phenomenon: the system's internal measures of health (stability, convergence, passing diagnostics) are orthogonal to the external measure that matters (are you in the positive orthant? is your estimate correct?). The boundary between mathematics and biology, or between model and reality, is where the loss happens, and nothing on the mathematical side can see it.