Quantum error correction works in principle. The threshold theorem guarantees that if the physical error rate is below a constant threshold, logical errors can be suppressed to arbitrary precision by increasing the code size. But the theorem assumes a decoder, an algorithm that interprets syndrome measurements and infers which errors occurred. The most accurate standard decoder, minimum-weight perfect matching, is too slow for real-time use; practical decoders like union-find are fast but have lacked comparable theoretical guarantees. The gap: the decoders fast enough for hardware don't provably work, and the decoders that provably work aren't fast enough.
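The decoder's name comes from the classic disjoint-set (union-find) data structure, which merges growing clusters of syndrome defects in near-linear time. A minimal sketch of that structure (illustrative names and code, not taken from the paper):

```python
class DisjointSet:
    """Disjoint-set forest with path compression and union by size,
    giving near-linear (inverse-Ackermann) amortized running time --
    the source of the union-find decoder's speed."""

    def __init__(self, n):
        self.parent = list(range(n))
        self.size = [1] * n

    def find(self, x):
        # Find the root, then compress the path so future lookups are fast.
        root = x
        while self.parent[root] != root:
            root = self.parent[root]
        while self.parent[x] != root:
            self.parent[x], x = root, self.parent[x]
        return root

    def union(self, a, b):
        # Merge the smaller cluster into the larger one.
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return
        if self.size[ra] < self.size[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra
        self.size[ra] += self.size[rb]
```

In the decoder, each syndrome defect seeds a cluster; as clusters grow and touch, `union` merges them, and `find` answers "which cluster does this defect belong to?" in effectively constant time.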
Tan and Cain (arXiv:2602.20238) close this gap. They prove that the union-find decoder achieves a finite threshold for the surface code under the circuit-level local stochastic error model — the realistic noise model that accounts for faulty syndrome measurements, not just data qubit errors. The practical decoder provably works.
The proof develops an enhanced error-clustering framework that shows error clusters can be separated by substantially larger buffers than previously analyzed. These larger buffers give the union-find decoder enough room to operate correctly — the decoder's local, greedy strategy succeeds because the error clusters it must handle are well-separated.
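The cluster picture is easiest to see in one dimension. Below is a toy sketch of my own (not the paper's surface-code decoder, which works on a 2D lattice with faulty measurements): a classical repetition code on a line, where each run of bit flips lights up "defects" at its endpoints, and well-separated runs can be corrected independently by pairing nearby defects. The function names `syndrome` and `decode` are illustrative.

```python
def syndrome(errors):
    """A defect sits between positions i and i+1 when exactly one side flipped."""
    return [i for i in range(len(errors) - 1) if errors[i] != errors[i + 1]]

def decode(defects, n):
    """Pair adjacent defects (matching a lone defect to the nearer boundary)
    and flip the bits between each pair. On a line, union-find's cluster
    growth meets in the middle, which reduces to exactly this pairing."""
    correction = [0] * n
    pts = list(defects)
    if len(pts) % 2 == 1:
        # Odd defect count: the error run touches a boundary, so add a
        # virtual defect at the nearer edge.
        pts = [-1] + pts if pts[0] < n - 1 - pts[-1] else pts + [n - 1]
    for a, b in zip(pts[::2], pts[1::2]):
        for i in range(a + 1, b + 1):
            correction[i] ^= 1
    return correction
```

When the error runs are well separated, each pair of defects is resolved locally without interference from the others; that is the regime the paper's enhanced clustering argument shows holds with high probability below threshold.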
The results extend further: the greedy decoder, an even simpler variant, also achieves a finite threshold. And the parallel runtime of the union-find decoder is quasi-polylogarithmic in the code size, so it scales efficiently as codes grow.
The general observation: the gap between theoretically optimal and practically implementable solutions can sometimes be closed not by improving the practical method but by proving it already works. The union-find decoder was already being used in experiments. This paper doesn't change the decoder — it changes what we know about it. The practice was ahead of the theory, and the theory has now caught up.