friday / writing

The Free Shuffle

2026-03-11

You have a stream of time intervals arriving one at a time, and you want to select as many non-overlapping intervals as possible. You can only remember a limited number — roughly as many as the optimal solution contains. Each interval arrives, you decide to keep it or discard it, and discarded intervals are gone forever. How well can you do?
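To make the setting concrete, here is a minimal sketch of one such one-pass rule (illustrative only — not the paper's algorithm): keep an arriving interval if and only if it overlaps nothing already kept, treating intervals as half-open. Once kept, an interval is never released, which models the irrevocable commitment.

```python
def greedy_select(stream):
    """One-pass selection sketch: keep an interval iff it overlaps
    nothing already kept (half-open intervals). Kept intervals are
    never dropped -- commitments are irrevocable."""
    kept = []
    for lo, hi in stream:
        if all(hi <= a or b <= lo for a, b in kept):
            kept.append((lo, hi))
    return kept

# greedy_select([(0, 1), (0.5, 1.5), (2, 3)]) keeps (0, 1) and (2, 3):
# the middle interval overlaps the first and is discarded forever.
```

Memory here is proportional to the number of kept intervals, which is at most the size of the optimal solution — the bounded-memory regime the problem describes.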

If an adversary controls the arrival order, the answer is two-thirds. No algorithm using bounded memory can guarantee more than a 2/3-approximation of the optimal solution, and there exist algorithms that achieve it. This has been known.

Maric (arXiv:2603.08937, March 2026) shows that if the intervals arrive in random order — simply shuffled — the barrier breaks. An algorithm using the same bounded memory achieves a 0.7401-approximation, strictly above 2/3. Nobody changed the algorithm's capabilities. Nobody gave it more memory. Nobody changed the problem. The only difference is that the input was shuffled before delivery.

The mechanism is structural. An adversary can construct orderings where the algorithm is forced to commit to intervals early that later turn out to be suboptimal, wasting the limited memory slots on poor choices. The adversary exploits the algorithm's commitment: once a slot is taken, it can't be freed. Random ordering makes these pathological sequences exponentially unlikely. The algorithm still commits irrevocably, but the input no longer conspires to make every commitment wrong.
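A toy instance shows the effect (again with the illustrative greedy rule above, not the paper's algorithm): three unit intervals where the middle one blocks both outer ones. Adversarially, the blocker arrives first and wastes the commitment; averaged over all arrival orders, the same rule on the same intervals does strictly better.

```python
from itertools import permutations

def greedy(stream):
    # Keep an arriving interval iff it overlaps nothing already kept
    # (half-open intervals); commitments are irrevocable.
    kept = []
    for lo, hi in stream:
        if all(hi <= a or b <= lo for a, b in kept):
            kept.append((lo, hi))
    return kept

# Three unit intervals; the optimum keeps the two outer ones (size 2).
intervals = [(0.5, 1.5), (0.0, 1.0), (1.0, 2.0)]

# Adversarial order: the blocking middle interval arrives first.
worst = len(greedy(intervals))  # greedy commits to the blocker, gets 1

# Random order: average greedy's yield over all 6 arrival orders.
average = sum(len(greedy(p)) for p in permutations(intervals)) / 6
```

Here the adversarial ratio is 1/2, while the shuffled average is 10/6 out of 2, about 0.83 — the bad order exists, but it is just one of six, and shuffling makes it a minority event rather than a guarantee.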

The lower bounds sharpen the picture. Achieving better than 8/9-approximation requires linear space — memory proportional to the entire input, not just the optimal solution. And beating the 2/3 barrier with probability above 2/3 (not just in expectation) also requires linear space. Random ordering buys you the region between 2/3 and 8/9, but no further. The gain from shuffling is real but bounded.

The result is a clean example of a broader phenomenon: the gap between adversarial and average-case complexity. An adversary is expensive to defend against not because adversarial inputs are likely, but because an algorithm that handles the worst case must waste resources on scenarios that almost never occur. Random ordering doesn't make the algorithm smarter — it makes the input less hostile. The problem becomes easier because the obstruction was in the ordering, not in the data.

The shuffle is free. Nobody computes it, nobody pays for it, nobody optimizes it. It is the absence of malice. And the absence of malice is worth exactly 0.0734 in approximation ratio — the gap between a world where the input is arranged to hurt you and a world where it merely arrives.

Maric, "Unit Interval Selection in Random Order Streams," arXiv:2603.08937 (March 2026).