Quantum error correction has a break-even problem. Encoding logical qubits in physical qubits costs overhead — you need more physical qubits than logical ones, and the encoding/decoding operations themselves introduce errors. The encoded computation is only worthwhile if the error rate of the logical qubits is lower than the error rate of the physical qubits. Below break-even, error correction makes things worse. Above it, the entire paradigm works.
Dasu et al. (2602.22211) demonstrate beyond-break-even quantum computation with up to 94 logical qubits on a 98-qubit trapped-ion processor. The codes they use — the [[k+2, k, 2]] “iceberg” quantum error detecting codes and their two-level concatenated versions — are high-rate codes. High rate means most of the physical qubits encode logical information rather than serving as redundancy for error detection. A [[k+2, k, 2]] code uses only 2 extra physical qubits beyond the k logical ones. This is remarkably efficient.
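The efficiency is easiest to see from the iceberg code's stabilizer structure: its two stabilizers are the global operators X on every qubit and Z on every qubit, so detection reduces to two parity checks. A minimal sketch of that check (the `syndrome` helper and the 6-qubit example are illustrative, not from the paper):

```python
# Toy syndrome check for the [[k+2, k, 2]] iceberg code. The two
# stabilizers are the global operators X^(k+2) and Z^(k+2). A Pauli
# error is flagged when it anticommutes with either one, which
# reduces to two parity counts over the error's components.

def syndrome(error):
    """error: list of 'I', 'X', 'Y', 'Z', one entry per physical qubit.

    Returns (s_z, s_x): s_z flags anticommutation with the Z^n
    stabilizer (odd number of X/Y components), s_x flags
    anticommutation with X^n (odd number of Z/Y components).
    """
    s_z = sum(p in ("X", "Y") for p in error) % 2
    s_x = sum(p in ("Z", "Y") for p in error) % 2
    return (s_z, s_x)

# A single X error trips the Z^n stabilizer and is detected:
print(syndrome(["X", "I", "I", "I", "I", "I"]))  # -> (1, 0)
# A weight-2 X error restores even parity and slips through. That is
# what distance 2 means: every weight-1 error is detected, but some
# weight-2 errors are not.
print(syndrome(["X", "X", "I", "I", "I", "I"]))  # -> (0, 0)
```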
The trade-off for high rate is low distance. Distance 2 means the code can detect single errors but not correct them. The strategy is error detection with postselection: run the computation, check for errors, discard the runs where errors were found. This is not fault-tolerant quantum computing in the full sense — you're throwing away bad runs rather than fixing them on the fly. But the postselection rates are reasonable (runs are not overwhelmingly discarded), and the resulting logical error rates beat the physical ones.
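Why postselection beats the physical error rate can be seen in a toy Monte Carlo: under independent bit flips at rate p, odd-weight errors are caught by the parity check and discarded, so only even-weight errors (probability O(p²)) survive. This is a deliberately simplified noise model, not the paper's; the function names are illustrative.

```python
import random

def run_shot(n, p, rng):
    """One encoded run under a toy i.i.d. bit-flip model: each of the
    n physical qubits flips with probability p. Odd-weight errors trip
    the global parity check and the run is discarded; nonzero
    even-weight errors go undetected and corrupt the logical state."""
    weight = sum(rng.random() < p for _ in range(n))
    if weight % 2 == 1:
        return "discarded"
    return "undetected_error" if weight > 0 else "clean"

def postselected_stats(n=6, p=0.01, shots=200_000, seed=1):
    rng = random.Random(seed)
    counts = {"clean": 0, "undetected_error": 0, "discarded": 0}
    for _ in range(shots):
        counts[run_shot(n, p, rng)] += 1
    kept = counts["clean"] + counts["undetected_error"]
    logical_error_rate = counts["undetected_error"] / kept
    acceptance = kept / shots
    return logical_error_rate, acceptance

# The surviving runs see roughly O(p^2) logical error instead of O(p),
# while the acceptance rate stays near 1 - n*p: most runs are kept.
logical, acceptance = postselected_stats()
```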
The two-level concatenated codes achieve distance 4, which does allow correction of single errors. The structure is recursive: encode k₁ logical qubits in an inner iceberg code on k₁+2 physical qubits, then treat each inner code block as a single "physical" qubit of an outer iceberg code, using k₂+2 blocks to encode k₂ logical blocks. Since distances multiply under concatenation, the result is a [[(k₂+2)(k₁+2), k₂k₁, 4]] code. This demonstrates the scaling principle: concatenation increases distance at the cost of more overhead, and here the overhead is still manageable.
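The parameter arithmetic is mechanical enough to write down directly (a sketch; the helper names are my own):

```python
def iceberg(k):
    """[[n, k, d]] parameters of a single-level iceberg code."""
    return (k + 2, k, 2)

def concatenated(k1, k2):
    """Two-level concatenation: k2+2 inner blocks, each an inner
    iceberg code on k1+2 physical qubits; distances multiply."""
    n1, _, d1 = iceberg(k1)
    n2, _, d2 = iceberg(k2)
    return (n2 * n1, k2 * k1, d2 * d1)

n, k, d = concatenated(4, 4)  # -> (36, 16, 4)
rate = k / n                  # 16/36 ~ 0.44, still a high-rate code
```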
The range of benchmarks is comprehensive: state preparation, measurement, QEC cycle benchmarking, logical gate benchmarking, GHZ state preparation, and a partially fault-tolerant quantum simulation of the 3D XY model. Each benchmark demonstrates beyond-break-even performance. The logical fidelities exceed what the same computation would achieve using unencoded physical qubits.
What matters most about this result is not the specific numbers but the trajectory they represent. Trapped-ion processors offer all-to-all connectivity — any qubit can interact with any other — which is essential for high-rate codes where the encoding requires non-local interactions. Superconducting processors, by contrast, have local connectivity and naturally favor surface codes with low rate and high distance. The trapped-ion approach trades distance for rate, which is viable when the physical error rates are low enough. The question is which approach scales better. This paper provides evidence that the high-rate path is competitive.
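The rate gap between the two approaches can be made concrete with back-of-envelope qubit counts. The surface-code figure below uses the standard rotated planar patch (d² data qubits plus d²−1 measurement qubits per logical qubit); the comparison is schematic, not a claim about any specific device:

```python
def iceberg_rate(k):
    """Fraction of physical qubits carrying logical information
    in a [[k+2, k, 2]] iceberg code."""
    return k / (k + 2)

def surface_code_rate(d):
    """One logical qubit per rotated surface-code patch of
    2*d*d - 1 qubits (d*d data + d*d - 1 measurement qubits)."""
    return 1 / (2 * d * d - 1)

# The iceberg rate approaches 1 as k grows, while the surface-code
# rate falls off quadratically with distance:
print(round(iceberg_rate(94), 3))      # high rate, but only distance 2
print(round(surface_code_rate(7), 4))  # low rate, distance 7
```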
The 3D XY model simulation is the most forward-looking benchmark. It's not a toy problem — it's a genuine physical system whose quantum simulation would be expensive classically at larger sizes. At the current 48-logical-qubit scale, classical simulation is still tractable, so the result is a benchmark rather than a quantum advantage claim. But it demonstrates the path: high-rate codes encoding many logical qubits performing scientifically meaningful computations. The gap between current capabilities and genuine quantum advantage is shrinking from both sides: hardware is improving, and problem selection is converging on the cases where classical methods struggle earliest.