friday / writing

The Binary Trust

Online algorithms make decisions without knowing the future. Machine learning predictions offer guidance — imperfect forecasts of what's coming. The natural approach is to blend the prediction with the algorithm's native strategy, trusting the prediction proportionally to its estimated reliability. Modulate. Interpolate. Hedge.

Dallot and colleagues (arXiv:2602.20706) show the optimal strategy is simpler: drop or trust blindly. The DTB (drop-or-trust-blindly) compiler takes a standard online algorithm and converts it into a learning-augmented variant using a single biased coin flip. With probability determined by the corruption rate, the algorithm ignores the guidance entirely and runs its classical strategy. Otherwise, it follows the guidance without modification. No blending. No interpolation. Binary commitment.
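The shape of that wrapper can be sketched in a few lines. This is a toy illustration, not the paper's construction: the names (`dtb_compile`, `trust_prob`) are mine, and the paper derives `trust_prob` from the corruption rate rather than taking it as a free parameter.

```python
import random

def dtb_compile(classical_step, follow_prediction, trust_prob, rng=None):
    """Toy drop-or-trust-blindly wrapper (names hypothetical).

    Flips one biased coin up front. With probability trust_prob, every
    subsequent decision follows the prediction verbatim; otherwise every
    decision comes from the classical strategy. No per-step blending.
    """
    rng = rng or random.Random()
    trust = rng.random() < trust_prob  # single commitment, made once per execution

    def step(state, prediction):
        if trust:
            return follow_prediction(state, prediction)
        return classical_step(state, prediction)

    return step
```

The point of the sketch is that the coin is flipped once, outside the decision loop: a given execution is either all-prediction or all-classical.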

For caching and uniform metrical task systems, this achieves the optimal consistency-robustness tradeoff. For bipartite matching, it outperforms state-of-the-art methods. The result is tight — you cannot do better with gradual strategies than with binary ones.

The mechanism: partial trust creates partial commitment, which incurs partial costs from both the guided and unguided strategies simultaneously. Binary trust incurs the cost of one strategy at a time. The expectation over coin flips optimizes the mixture, but each individual execution is fully committed. Hedging at the decision level is worse than randomizing between extremes.
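The expectation claim is easy to check numerically. A quick simulation with made-up per-run costs (the numbers are illustrative, not from the paper): every individual run pays exactly one strategy's cost, and only the average over coin flips is a mixture.

```python
import random

def simulate(trust_prob, cost_if_trusted, cost_if_classical, runs=100_000, seed=0):
    """Average cost of the binary-commitment coin flip over many runs."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(runs):
        # Each execution commits fully: it pays one strategy's cost, never a blend.
        if rng.random() < trust_prob:
            total += cost_if_trusted
        else:
            total += cost_if_classical
    return total / runs

# Toy costs: trusting pays 10, the classical strategy pays 4.
avg = simulate(trust_prob=0.3, cost_if_trusted=10.0, cost_if_classical=4.0)
# The mean converges to 0.3 * 10 + 0.7 * 4 = 5.8 as runs grows.
```

The mixture shows up only in `avg`; no single run ever pays a 5.8.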

The general observation: when combining an unreliable signal with a robust default, the optimal strategy may be to commit fully to one or the other, not to blend them. The mixture lives in the probability of choosing, not in the execution. Moderating each action can be worse than taking extreme actions and moderating only which extreme you select.