A locally decodable code lets you recover any single bit of a message by reading only a few bits of the (much longer) encoded version. This is useful when the encoded data is distributed or corrupted — you don't need to download the whole codeword to get one answer. The price is redundancy: the codeword is longer than the message. How much longer depends on how many queries you're allowed.
Chubb and Srivastava (arXiv:2602.20278) show that with 2 queries over a binary alphabet, the codeword length must be exponential in the message length. No clever encoding can avoid it. Two peeks are enough to recover one bit only if the codeword is exponentially bloated.
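To make both halves concrete, here is a minimal Python sketch of the Hadamard code, the textbook 2-query locally decodable code (illustrative only, not anything from the paper): each bit of the codeword is an inner product of the message with one subset pattern, so decoding one message bit takes two queries, but the codeword has 2^n bits for an n-bit message, exactly the exponential blowup the lower bound says is forced.

```python
import random

def encode(x):
    """Hadamard-encode bit-list x as all 2^n inner products <x, a> mod 2.

    The index a ranges over every n-bit string, so the codeword length
    is 2^n: exponential in the message length n.
    """
    n = len(x)
    return [
        sum(x[j] * ((a >> j) & 1) for j in range(n)) % 2
        for a in range(2 ** n)
    ]

def decode_bit(codeword, n, i, rng=random):
    """Recover x[i] with exactly 2 queries to the codeword.

    Query positions a and a XOR e_i for a uniformly random a; the XOR of
    the two answers is <x, e_i> = x[i]. Because each of the two positions
    is individually uniform, a small constant fraction of corrupted
    codeword bits only adds a small error probability.
    """
    a = rng.randrange(2 ** n)
    b = a ^ (1 << i)  # flip the i-th coordinate of a
    return codeword[a] ^ codeword[b]

# Sanity check: every message bit is recoverable from 2 queries.
x = [1, 0, 1, 1, 0]
cw = encode(x)
assert len(cw) == 2 ** len(x)
assert all(decode_bit(cw, len(x), i) == x[i] for i in range(len(x)))
```

The decoder never looks at more than two positions, yet the price is a codeword exponentially longer than the message, matching the lower bound for 2-query binary codes.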
The contrast: Ben-Sasson and co-authors previously showed that with even slightly more queries, the codeword length drops to nearly linear. Not gradually: it collapses. The transition from 2 queries to more than 2 is a cliff in the code length. Below the cliff, exponential length is unavoidable. Above it, nearly linear length suffices.
This is a phase transition in query complexity. The resource (number of queries) has a critical value, and the cost (codeword length) changes by an exponential factor at that threshold. The transition is not between “hard” and “easy” in the usual complexity-theoretic sense — both regimes have polynomial-time algorithms. It is between “massive redundancy” and “minimal redundancy.” The computation is fine on both sides. The storage cost is what changes.
The general observation: in systems that trade local access for global redundancy, there are sharp thresholds in the access parameter where the redundancy cost changes qualitatively. One more query doesn't just help a little — it changes the scaling law. The improvement is not incremental. It is structural.