When a measurement is close to its extremal bound, the thing being measured can't be arbitrary. Near-optimality constrains it. And the constraint, analyzed carefully, reveals structure the measurement itself doesn't describe. The deficit, the distance from optimal, becomes a fingerprint of the underlying geometry.
Bloom and Green (2026, arXiv:2602.16482) prove an inverse theorem for the Littlewood problem: if a set of integers has small Fourier mass, it must contain long arithmetic progressions. The Fourier mass is a global statistic — it measures how uniformly the set is distributed modulo every period. A small mass means near-uniform distribution. But near-uniformity isn't structurelessness. It's a constraint strong enough to guarantee local regularity — the arithmetic progressions are forced into existence by the global measurement being close to its minimum.
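To make the measurement concrete: in the classical Littlewood problem, the "Fourier mass" of a finite set A is the L¹ norm of its exponential sum, and the McGehee–Pigno–Smith lower bound is of order log |A|. A minimal numerical sketch (the setup and the name `fourier_mass` are ours, not the paper's) shows an arithmetic progression sitting near that floor while a generic set of the same size has mass on the order of √|A|:

```python
# Numerical sketch, not from Bloom-Green: the "Fourier mass" of a finite set A
# is taken here as the L1 norm of f_A(t) = sum_{a in A} exp(2*pi*i*a*t) over
# t in [0,1).  McGehee-Pigno-Smith: mass >= c*log|A|; APs nearly attain it.
import numpy as np

rng = np.random.default_rng(0)

def fourier_mass(A, grid=1 << 14):
    """Approximate the integral of |sum_{a in A} e^{2 pi i a t}| over [0,1)."""
    t = np.arange(grid) / grid
    f = np.exp(2j * np.pi * np.outer(np.asarray(list(A), dtype=float), t)).sum(axis=0)
    return np.abs(f).mean()

N = 256
ap = range(N)                                   # an arithmetic progression
rnd = rng.choice(10**4, size=N, replace=False)  # a generic set of the same size

print(f"AP, length {N}:   mass ~ {fourier_mass(ap):5.1f}  (order log N; log N = {np.log(N):.1f})")
print(f"random, size {N}: mass ~ {fourier_mass(rnd):5.1f}  (order sqrt N; sqrt N = {np.sqrt(N):.1f})")
```

The gap between the log N floor and the generic √N level is the room the inverse theorem works in: only heavily structured sets can live near the bottom.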
Griesmer (2026, arXiv:2602.19014) proves inverse Kneser-Jin theorems for discrete abelian groups: if the sumset A + B is close to minimal, the sets A and B must have approximate periodic structure. The sumset size is a measure of how “spread out” the addition is. When it's almost as small as it can be, the components can't be positioned arbitrarily — they must align with cosets of a subgroup. The deficit in sumset size fingerprints the periodicity of the underlying sets.
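The dichotomy is visible in a toy computation (our construction in Z/60Z, not Griesmer's; `sumset` and the specific sets are illustrative). Sets built from cosets of a subgroup H meet the Kneser-type benchmark |A| + |B| − |H| exactly, while generic sets of the same sizes spread their sums over most of the group:

```python
# Toy illustration in Z/60Z: coset-aligned sets achieve a near-minimal sumset;
# generic sets of the same sizes do not.
import random

n = 60
H = set(range(0, n, 10))                             # subgroup of order 6

def sumset(A, B):
    return {(a + b) % n for a in A for b in B}

A_struct = {(h + r) % n for h in H for r in (1, 2)}  # union of two cosets of H
B_struct = {(h + 3) % n for h in H}                  # a single coset of H

random.seed(1)
A_rand = set(random.sample(range(n), len(A_struct)))
B_rand = set(random.sample(range(n), len(B_struct)))

kneser = len(A_struct) + len(B_struct) - len(H)      # |A| + |B| - |H|
print("coset-aligned:", len(sumset(A_struct, B_struct)), "vs Kneser benchmark", kneser)
print("generic:      ", len(sumset(A_rand, B_rand)), "of", n, "possible sums")
```

Equality in the first line is exactly the "aligned with cosets of a subgroup" situation; the inverse theorem says near-equality already forces approximately this picture.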
Adibelli and Tomon (2026, arXiv:2602.18316) prove that semialgebraic relations of bounded degree force Ramsey regularity with tower functions whose height depends on the polynomial degree. The degree is the deficit parameter: low-degree polynomial relations can't support high combinatorial complexity. The algebraic constraint — how far the relation is from arbitrary — controls the tower height. The deficit from maximal Ramsey numbers reveals how “algebraically simple” the coloring must be.
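For orientation, the tower scale in question is the standard one. Only the schematic shape is stated here; the precise height function h(d) is the paper's content, not quoted:

```latex
\operatorname{twr}_1(x) = x, \qquad \operatorname{twr}_{k+1}(x) = 2^{\operatorname{twr}_k(x)},
\qquad R_{\mathrm{semialg}}(n) \le \operatorname{twr}_{h(d)}\!\big(O(n)\big).
```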
O'Reilly (2026, arXiv:2602.20056) gives quantitative bounds for the k-dimensional Duffin-Schaeffer conjecture. When the sum of approximation targets diverges (the condition for almost-everywhere approximation), the count of good approximations equals the expected value plus an error of O(Ψ^{1/2+ε}). The error term itself is a fingerprint: it bounds how far the actual count can deviate from prediction, and the exponent 1/2 (plus epsilon) reflects the underlying independence structure of the approximation events.
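Schematically, and writing only the one-dimensional analogue for orientation (the k-dimensional statement is the paper's; the shape below follows the known quantitative Duffin–Schaeffer framework): for almost every α, the count of reduced good approximations up to Q satisfies

```latex
\#\left\{ q \le Q : \left|\alpha - \tfrac{p}{q}\right| < \tfrac{\psi(q)}{q}
  \text{ for some } (p,q)=1 \right\}
= \Psi(Q) + O_{\varepsilon}\!\left(\Psi(Q)^{1/2+\varepsilon}\right),
\qquad
\Psi(Q) = \sum_{q \le Q} \frac{2\,\varphi(q)\,\psi(q)}{q}.
```

The main term Ψ(Q) is the "expected value" above, and the square-root exponent is what near-independence of the events at different denominators buys: fluctuations of the same order as for genuinely independent indicators.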
Voits and Schwarz (2026, arXiv:2602.18265) prove that first-passage time distributions in large Markovian networks converge to exactly two generic forms: exponential or delta. The exponential form is the default — it requires nothing from the network. But if you measure a first-passage distribution that deviates from exponential, the network must have specific architectural features that support the deviation. The distance from exponential is a fingerprint of network structure. The more precise the timing, the more constrained the topology.
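A quick simulation makes the generic case visible (an illustrative toy, not the paper's model: `Q`, `first_passage_time`, and the dense random-rate network are ours). In a well-connected chain the first-passage time to a target is statistically close to exponential, so any measured deviation has to be paid for by architecture:

```python
# Toy continuous-time Markov chain with dense random rates; first-passage
# times to a fixed target are near-exponential (mean ~ std) in this regime.
import numpy as np

rng = np.random.default_rng(42)
n = 50
Q = rng.exponential(1.0, size=(n, n))     # random jump rates between states
np.fill_diagonal(Q, 0.0)

def first_passage_time(start, target):
    """One Gillespie-style passage from start to target."""
    state, t = start, 0.0
    while state != target:
        rates = Q[state]
        total = rates.sum()
        t += rng.exponential(1.0 / total)         # holding time in this state
        state = rng.choice(n, p=rates / total)    # pick the next state
    return t

samples = np.array([first_passage_time(0, n - 1) for _ in range(2000)])
# For an exponential distribution, mean and standard deviation coincide.
print(f"mean {samples.mean():.3f}, std {samples.std():.3f} (exponential => ratio ~ 1)")
```

To see a delta-like limit instead, one would need something like a long directed chain of nearly identical rates, where the passage time concentrates. The point of the dichotomy is that nothing in between survives the large-network limit without structural support.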
Hernández-García and Velázquez-Castro (2026, arXiv:2602.16129) determine the critical network size at which gene regulatory networks can first sustain oscillations against intrinsic fluctuations. Below the threshold, noise kills the cycles. Above it, oscillation is structurally possible. The system's distance from the critical size, its deficit from the oscillation threshold, determines not just whether it oscillates, but how robustly. The gap from the critical point fingerprints the system's capacity for periodic behavior.
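A cartoon of the mechanism (our toy, not the paper's gene-circuit model; Ω, σ, and `coherence_time` are illustrative): model the oscillator as a phase that advances at unit rate and diffuses with noise of size σ/√Ω, where Ω stands for system size. The cycle's coherence time then grows linearly in Ω, so a small system loses the oscillation within a single period while a large one sustains it:

```python
# Phase-diffusion cartoon: d(theta) = dt + (sigma/sqrt(Omega)) dW.
# cos(theta) decorrelates at rate sigma^2/(2*Omega), i.e. coherence ~ Omega.
import numpy as np

rng = np.random.default_rng(7)

def coherence_time(omega_size, sigma=1.0):
    """Predicted decorrelation time of cos(theta): 2*Omega / sigma^2."""
    return 2 * omega_size / sigma**2

def autocorr_one_period(omega_size, sigma=1.0, dt=1e-2, n=400_000):
    """Empirical autocorrelation of cos(theta) one full period later."""
    steps = dt + (sigma / np.sqrt(omega_size)) * rng.normal(0.0, np.sqrt(dt), n)
    x = np.cos(np.cumsum(steps))
    lag = int(round(2 * np.pi / dt))              # one period (unit frequency)
    return np.corrcoef(x[:-lag], x[lag:])[0, 1]

for Omega in (1, 10, 100):
    print(f"Omega={Omega:4d}: autocorrelation after one period ~ "
          f"{autocorr_one_period(Omega):+.2f}  "
          f"(coherence time ~ {coherence_time(Omega):.0f})")
```

At small Ω the correlation after a single period is already near zero: the cycle exists in the deterministic equations but not in the observable dynamics. That is the threshold phenomenon in cartoon form, with the real result locating it in actual regulatory architectures.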
Six results, five fields: additive combinatorics (twice over), Diophantine approximation, algebraic Ramsey theory, stochastic networks, molecular biology. The common structure: when a global measurement is near its extremal value, the underlying object must have specific local properties. The measurement doesn't describe those properties. The deficit does. This is the inverse of the usual scientific program. Usually we describe structure and predict measurements. Here we measure deficits and prove structure. The object is identified not by what it has, but by how close it is to having the least (or most) it could.

The deepest version may be Voits-Schwarz. The exponential distribution requires no explanation: it is what happens when nothing special happens. Every deviation from exponential is a structural claim. The deficit from maximal entropy is, literally, information. The smaller the gap from generic behavior, the less you can say. The larger the gap, the more the network has told you about itself.