Streaming algorithms process data that arrives one element at a time, using memory much smaller than the input. The standard model assumes the stream is fixed in advance — the algorithm's intermediate outputs don't influence future inputs. Adversarial streams break this: an adversary sees the algorithm's responses and adapts subsequent inputs to exploit them.
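The distinction can be made concrete with a toy estimator. The sketch below, a running mean in O(1) space, stands in for any streaming algorithm; the class and names are illustrative, not from the paper. In the oblivious setting the stream is fixed up front; in the adversarial setting each element is chosen after observing the algorithm's latest response.

```python
class RunningMean:
    """Toy one-pass estimator: tracks the mean of a numeric stream in O(1) space.
    Illustrative stand-in for any streaming algorithm."""
    def __init__(self):
        self.count = 0
        self.total = 0.0

    def update(self, x):
        self.count += 1
        self.total += x

    def query(self):
        return self.total / self.count if self.count else 0.0

# Oblivious setting: the stream is fixed before the algorithm runs.
alg = RunningMean()
for x in [1, 2, 3, 4]:
    alg.update(x)

# Adversarial setting: each next element depends on the last response.
adv = RunningMean()
adv.update(1)
for _ in range(10):
    response = adv.query()        # adversary observes the output...
    adv.update(2 * response + 1)  # ...and adapts the next element to it
```

For an exact statistic like the mean this adaptivity is harmless; the danger arises for randomized approximation algorithms, where responses leak information about the internal randomness that the adversary can then target.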
For insertion-only streams, adversarial robustness is well understood. For insertion-deletion (turnstile) streams, a longstanding conjecture held that robustness requires linear space in the universe size — essentially storing everything, defeating the purpose of streaming.
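In the turnstile model each update is a pair (item, delta), with negative deltas acting as deletions. The minimal sketch below (names illustrative) shows the naive exact baseline: one counter per distinct item, i.e. space linear in the universe size, which is exactly the cost the conjecture said robustness would force.

```python
from collections import defaultdict

def exact_f2(updates):
    """Exact second moment F2 of a turnstile stream: Theta(universe) space,
    one counter per distinct item. The baseline that streaming tries to beat."""
    freq = defaultdict(int)
    for item, delta in updates:
        freq[item] += delta
    return sum(c * c for c in freq.values())

stream = [("a", 3), ("b", 1), ("a", -2), ("c", 5), ("b", -1)]
# Net frequencies: a=1, b=0, c=5, so F2 = 1 + 0 + 25 = 26
```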
Gribelyuk, Lin, Woodruff, Yu, and Zhou (arXiv:2602.20854) refute the conjecture. Their estimator-corrector-learner framework achieves adversarial robustness for second-moment (F₂) estimation using polylogarithmic space — exponentially below the conjectured lower bound. The framework combines multiple linear sketches: one estimates, one corrects, one learns the adversary's strategy. The combination is robust where each component alone would not be.
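To ground what "linear sketch" means here, the following is a textbook AMS-style F₂ sketch: a random ±1 sign matrix, counters updated linearly, and a median-of-means estimate. This is only an example of the kind of component the framework composes, not the authors' construction, and all parameters are illustrative.

```python
import random

def make_sketch(universe, rows, cols, seed=0):
    """Random +/-1 sign vectors: rows x cols counters, each with its own
    sign vector over the universe. Classic AMS-style linear sketch for F2."""
    rng = random.Random(seed)
    signs = [[[rng.choice((-1, 1)) for _ in range(universe)]
              for _ in range(cols)] for _ in range(rows)]
    counters = [[0.0] * cols for _ in range(rows)]
    return signs, counters

def update(signs, counters, item, delta):
    # Linear in the update: counter (r, c) gains delta * sign.
    for r, row in enumerate(signs):
        for c, sign_vec in enumerate(row):
            counters[r][c] += delta * sign_vec[item]

def estimate_f2(counters):
    # Each squared counter is an unbiased F2 estimate; average within a
    # row for variance reduction, take the median across rows.
    means = [sum(z * z for z in row) / len(row) for row in counters]
    means.sort()
    return means[len(means) // 2]

# Example: universe of 50 items, 5 rows of 64 counters each.
signs, counters = make_sketch(50, rows=5, cols=64, seed=1)
for item, delta in [(0, 3), (1, 4), (0, -1), (2, 5)]:
    update(signs, counters, item, delta)
# Net frequencies 2, 4, 5 give exact F2 = 4 + 16 + 25 = 45;
# the estimate lands near that value.
```

Against an oblivious stream this sketch is accurate with high probability; against an adaptive adversary, repeated queries leak the sign vectors, which is the vulnerability the estimator-corrector-learner combination is designed to close.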
A deeper finding: there is an exponential separation between linear and non-linear sketches for adversarial robustness in turnstile streams. Linear sketches — the standard tool — provably cannot achieve what the non-linear combination does. The bottleneck was not the problem's inherent difficulty but the restriction to a specific computational model.
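The defining property of a linear sketch, and the structure the lower bound exploits, is that the state is a linear function of the frequency vector: sketching the sum of two streams equals summing their sketches. A minimal check, with an illustrative random sign matrix:

```python
import random

# "Linear sketch" means state = S @ frequency_vector for a fixed matrix S,
# so S(x + y) = S(x) + S(y). The matrix here is illustrative.
rng = random.Random(7)
n = 8
S = [[rng.choice((-1, 1)) for _ in range(n)] for _ in range(4)]

def sketch(vec):
    return [sum(s * v for s, v in zip(row, vec)) for row in S]

x = [1, 0, 2, 0, 0, 3, 0, 0]
y = [0, 4, -2, 0, 1, 0, 0, 5]
lhs = sketch([a + b for a, b in zip(x, y)])      # sketch of merged stream
rhs = [a + b for a, b in zip(sketch(x), sketch(y))]  # sum of sketches
# lhs == rhs: merging streams commutes with sketching. A non-linear
# combination of sketches need not satisfy this, which is what lets it
# evade the linear-sketch lower bound.
```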
The general observation: a conjectured lower bound may reflect a limitation of the proof technique or the computational model, not the problem. The gap between what linear sketches require and what non-linear combinations achieve shows that composing weak tools can produce strength that no individual tool possesses. The robustness lives in the interaction, not the components.