Linear probes show that language models encode the answer to a reasoning problem in their hidden states throughout the chain of thought. The information is there — recoverable by a trained classifier at nearly every token. This has been taken as evidence that the model knows the answer early and that the chain of thought is post-hoc rationalization.
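A minimal sketch of what such a probe does, on synthetic data standing in for hidden states (the dimensions, positions, and signal construction here are illustrative assumptions, not the setup of any particular study): a label-aligned direction is present at every "token position", and a linear readout trained per position recovers the label everywhere.

```python
import numpy as np

rng = np.random.default_rng(0)
n_train, n_test, d, n_steps = 300, 300, 32, 8

# Toy stand-in for hidden states: the label's signal direction is
# linearly decodable at every "token position" in the chain.
v = rng.standard_normal(d)
v /= np.linalg.norm(v)
y_train = rng.choice([-1.0, 1.0], n_train)
y_test = rng.choice([-1.0, 1.0], n_test)

accuracies = []
for t in range(n_steps):
    # Hidden state at step t: label-aligned component plus noise.
    H_train = 3.0 * y_train[:, None] * v + rng.standard_normal((n_train, d))
    H_test = 3.0 * y_test[:, None] * v + rng.standard_normal((n_test, d))
    # Linear probe: a least-squares readout trained at this position.
    w, *_ = np.linalg.lstsq(H_train, y_train, rcond=None)
    accuracies.append(np.mean(np.sign(H_test @ w) == y_test))

# The trained probe recovers the answer at every position, early and late.
print([round(a, 2) for a in accuracies])
```

This is the probe-based observation in miniature: high decoding accuracy at every step, which by itself says nothing about when the model can act on that signal.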
Polo, Chun, and Chung (arXiv:2602.20338) show that information retrievability and geometric availability for processing are different things. Using Manifold Capacity Theory on Boolean logic tasks, they find that concept manifolds are tangled (overlapping, non-separable) through most of the reasoning chain. Only immediately before the computation step do the manifolds untangle into linearly separable subspaces — a transient geometric pulse. After the computation, the manifolds compress again.
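Manifold Capacity Theory itself computes a capacity measure over concept manifolds; as a much cruder stand-in, the shape of the claimed finding can be sketched with a simple Fisher-style separation index on hand-built data. Everything below (step count, pulse location, noise model) is an assumed toy construction, not the paper's measurement: the class clouds stay tangled at every step except one, and the index pulses there.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, n_steps, pulse_step = 300, 16, 9, 4

# Hand-built trajectory: class manifolds stay tangled (tiny mean
# separation) except at one step, where they transiently untangle.
separation = np.full(n_steps, 0.2)
separation[pulse_step] = 3.0

index = []
for t in range(n_steps):
    mu = separation[t] / 2.0
    a = rng.standard_normal((n, d)); a[:, 0] += mu   # class A cloud
    b = rng.standard_normal((n, d)); b[:, 0] -= mu   # class B cloud
    # Fisher-style index: between-class distance^2 over mean
    # within-class scatter. A crude proxy for linear separability,
    # not the manifold-capacity quantity itself.
    between = np.sum((a.mean(0) - b.mean(0)) ** 2)
    within = 0.5 * (a.var(0).sum() + b.var(0).sum())
    index.append(between / within)

index = np.array(index)
print(int(np.argmax(index)), index.round(3))
```

The index is near zero through the chain and spikes only at the designated step, the transient-pulse shape described above.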
The pulse is brief. A linear probe can retrieve information from tangled manifolds because it was trained to find it there. But the model's own computational machinery — which operates on the geometry directly — cannot act on information that is not geometrically separated. The information is present but not available. The distinction is between storage and access.
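The storage-versus-access distinction can be made concrete with a toy construction (my illustration, not the paper's experiment): put the label on a low-variance axis and dominant variance on label-irrelevant axes. A trained probe searches for the signal and finds it; a fixed readout that follows the dominant geometry, here a projection onto the top principal component as a hypothetical stand-in for machinery that operates on geometry directly, stays near chance.

```python
import numpy as np

rng = np.random.default_rng(2)
n_train, n_test, d = 400, 1000, 10

def make_data(n):
    y = rng.choice([-1.0, 1.0], n)
    X = rng.standard_normal((n, d)) * 5.0             # loud nuisance dims
    X[:, 0] = 1.5 * y + 0.3 * rng.standard_normal(n)  # quiet signal dim
    return X, y

X_train, y_train = make_data(n_train)
X_test, y_test = make_data(n_test)

# Trained linear probe: least squares locates the quiet signal axis.
w, *_ = np.linalg.lstsq(X_train, y_train, rcond=None)
probe_acc = np.mean(np.sign(X_test @ w) == y_test)

# Fixed readout along the dominant geometric direction (top principal
# component), granted the most favorable sign. It tracks variance,
# not the label, so it stays near chance.
_, _, Vt = np.linalg.svd(X_train - X_train.mean(0), full_matrices=False)
scores = X_test @ Vt[0]
pc_acc = max(np.mean(np.sign(scores) == y_test),
             np.mean(np.sign(-scores) == y_test))

print(round(probe_acc, 3), round(pc_acc, 3))
```

Same data, same information content: the trained observer retrieves it, the geometry-following readout cannot, which is the storage/access gap in one example.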
This inverts the probe-based conclusion. The model doesn't know the answer throughout the chain; it stores precursors that become geometrically available only at the right moment. The chain of thought is not rationalization — it is the process by which stored information is untangled into a form the model can use. The probe sees the information; the model sees the geometry. What the probe recovers is not what the model can process.
The general observation: a detector trained to find a signal does not prove the system can act on that signal. Retrievability measures the observer's capability, not the system's. The system acts on geometry, not on information-theoretic content.