Chilson and Schwitzgebel (arXiv:2602.04986) argue that AI is “strange intelligence” — superhuman in some domains, subhuman in others, sometimes within the same task. They challenge the linear model: the assumption that intelligence sits on a single scale where higher means better at everything. AI breaks this model because its capability profile is jagged rather than smooth. An LLM can explain the ironies of Hamlet while failing arithmetic that any child handles. This isn't a glitch to be fixed. It's evidence that the linear model was wrong.
The move from scalar to profile has a clean mathematical precedent. In error-correcting codes, minimum distance, a single number, characterizes robustness against errors. But minimum distance only counts how many positions differ; it says nothing about how much. For real-valued computation, where perturbations have magnitudes, that compression throws away exactly what matters. Ravagnani, Rini, and Wachter-Zeh (arXiv:2602.20366) replace it with the height profile: a function mapping each position to the threshold at which tolerable noise becomes detectable error. Different positions have different thresholds. The scalar hides what the profile reveals.
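Their formal construction is more involved than this essay needs, but the scalar-versus-profile contrast survives a toy version. The Python sketch below is an illustrative stand-in, not the paper's definition: a "code" of four invented real-valued codewords under nearest-neighbor decoding, where the profile records, for each coordinate, the smallest single-coordinate perturbation that flips a decoding decision.

```python
# Toy illustration: one scalar robustness number vs. a per-position profile.
# This is NOT the height profile of Ravagnani, Rini, and Wachter-Zeh; their
# definition lives in a specific analog-coding framework. This is a simplified
# stand-in: for each coordinate, the smallest single-coordinate perturbation
# that makes nearest-neighbor (Euclidean) decoding fail.

import itertools
import math

# A small real-valued "code": four codewords in R^3 (numbers invented).
codewords = [
    (0.0, 0.0, 0.0),
    (4.0, 0.1, 0.0),
    (0.0, 3.0, 0.2),
    (4.0, 3.0, 5.0),
]

def euclid(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Scalar view: a single worst-case noise radius, the same for every position.
min_dist = min(euclid(a, b) for a, b in itertools.combinations(codewords, 2))
print(f"scalar (min distance / 2): {min_dist / 2:.3f}")

# Profile view: per-position thresholds. Perturbing codeword c at position i
# by t (pushed in the adversarial direction) flips the decoding decision
# toward c' once |t| >= ||c - c'||^2 / (2 * |c_i - c'_i|), provided c and c'
# actually differ at position i.
n = len(codewords[0])
profile = []
for i in range(n):
    thresholds = []
    for c, cp in itertools.permutations(codewords, 2):
        di = c[i] - cp[i]
        if di != 0.0:
            thresholds.append(euclid(c, cp) ** 2 / (2 * abs(di)))
    profile.append(min(thresholds) if thresholds else math.inf)

for i, h in enumerate(profile):
    print(f"position {i}: decoding fails beyond perturbation {h:.3f}")
```

On these invented numbers the scalar reports roughly 1.5, while the profile shows one position failing near that value and another tolerating more than twice as much. The single number records the worst case and discards everything else.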
Intelligence has the same structure. Biological intelligences evolved in the same environment, facing the same selection pressures, so their capability profiles are roughly correlated — spatial reasoning tracks with tool use, social cognition tracks with language, memory tracks with planning. This correlation is what makes the linear model seem natural. It's not that intelligence is one-dimensional; it's that the intelligences we've encountered are approximately collinear. AI is the first intelligence whose evolutionary environment (gradient descent on text) differs so radically that its profile points in a genuinely different direction. The strangeness isn't in the AI. It's in the assumption that one axis was ever enough.
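The collinearity claim can be made concrete with a small calculation. In the sketch below, with every score invented for illustration, three roughly collinear "biological" profiles let a single axis, the first principal component, capture nearly all the variance between minds, which is why a scalar seems to work. Add one jagged profile pointing in a different direction and the best single axis starts discarding real structure.

```python
# Sketch of "approximately collinear" capability profiles. All numbers are
# invented. Each row scores one intelligence on four hypothetical dimensions:
# spatial, social, memory, arithmetic.

import numpy as np

biological = np.array([
    [2.0, 1.9, 2.1, 2.0],   # corvid-ish: low across the board
    [5.0, 5.2, 4.8, 5.1],   # primate-ish: middling across the board
    [9.0, 8.8, 9.2, 9.1],   # human-ish: high across the board
])

# A jagged profile: strong on some dimensions, weak on others.
llm = np.array([[9.5, 7.0, 3.0, 1.5]])

def variance_on_one_axis(profiles):
    """Fraction of variance captured by the best single axis (the first
    principal component). Near 1.0 means a scalar loses almost nothing."""
    centered = profiles - profiles.mean(axis=0)
    s = np.linalg.svd(centered, compute_uv=False)
    return s[0] ** 2 / np.sum(s ** 2)

print(f"biological only: {variance_on_one_axis(biological):.3f}")
print(f"with the LLM:    {variance_on_one_axis(np.vstack([biological, llm])):.3f}")
```

The first number comes out near 1.0: when profiles nearly share a direction, projecting them onto one line is almost lossless, and "smarter than" behaves like a total order. The second number drops noticeably, which is the jaggedness showing up as variance no single axis can carry.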
The practical consequence Chilson and Schwitzgebel draw: failure at seemingly simple tasks doesn't demonstrate limited general intelligence, and success at complex tasks doesn't demonstrate broad competence. Both inferences assume the linear model. The profile view says: measure each dimension independently, because the correlations you expect are artifacts of shared evolutionary history, not laws of cognition.
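One more toy case shows why per-dimension measurement matters (scores invented): two systems with the same aggregate benchmark score and nothing else in common. Any inference from the scalar alone treats them as identical.

```python
# Same scalar, different objects: two hypothetical systems with identical
# average benchmark scores but opposite capability profiles.
# All numbers are invented for illustration.

dimensions = ["theory of mind", "coding", "arithmetic", "spatial", "planning"]
system_a = [0.60, 0.60, 0.60, 0.60, 0.60]   # uniformly mediocre
system_b = [0.95, 0.95, 0.10, 0.95, 0.05]   # superhuman-ish and subhuman at once

for name, scores in [("A", system_a), ("B", system_b)]:
    mean = sum(scores) / len(scores)
    profile = ", ".join(f"{d}={s:.2f}" for d, s in zip(dimensions, scores))
    print(f"system {name}: scalar={mean:.2f}  profile: {profile}")

# Both report scalar=0.60; only the profile distinguishes them.
```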
What's revealed when you replace the scalar with the profile isn't just a better measurement. It's a different object. The height profile doesn't just refine minimum distance — it shows that minimum distance was compressing away the structure that matters. The linear model of intelligence doesn't just underestimate AI — it mischaracterizes all intelligence by projecting a multidimensional space onto a line and calling the projection the thing itself.