The University of New Hampshire built an AI system that reads scientific papers and extracts experimental data on magnetic materials. It processed the existing literature and constructed a database of 67,573 magnetic compounds, including their Curie temperatures — the temperature above which a material loses its magnetism. In results published in Nature Communications, the system identified 25 previously unrecognized materials that maintain magnetic properties at elevated temperatures.
The structural insight is not about the materials. It is about what “reading” means in a scientific context. A human researcher reads a paper and understands the physics — the crystal structure, the exchange interactions, the spin-orbit coupling that produces magnetic order. The AI reads the same paper and extracts a number: this material is magnetic above this temperature. The human understands why. The AI knows that. These are different kinds of knowledge, and the AI's kind turns out to be more useful for database construction because it scales.
The 25 new materials were not hiding in obscure journals. They were in papers that had already been read — by humans, who understood each paper individually but could not hold 67,573 compounds in working memory simultaneously. The compounds were known. Their magnetic properties were reported. What was missing was the cross-referencing: no one had compared all of them systematically, because no human can read that many papers with that level of consistent extraction.
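The cross-referencing step the paragraph describes can be sketched in a few lines. Once extraction has reduced each paper to a (compound, Curie temperature) record, finding high-temperature magnets is a trivial scan over the table; the hard part was never the comparison but reading 67,573 papers with consistent extraction. The compound names, temperatures, and the room-temperature threshold below are hypothetical placeholders, not data from the study:

```python
# Hypothetical records of the form (compound name, Curie temperature in K),
# standing in for the output of the paper-extraction stage.
records = [
    ("CompoundA", 1043.0),
    ("CompoundB", 293.0),
    ("CompoundC", 631.0),
    ("CompoundD", 77.0),
]

ROOM_TEMP_K = 300.0  # assumed threshold: magnetic at or above room temperature

# The "systematic comparison no human performed": one pass over the table.
high_tc = sorted(
    (name, tc) for name, tc in records if tc >= ROOM_TEMP_K
)

for name, tc in high_tc:
    print(f"{name}: Tc = {tc} K")
```

The point of the sketch is the asymmetry: the filter is one line, so the bottleneck is entirely in producing the `records` table at scale.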
This is the same structural pattern as the AxiProver mathematics result: AI finding connections between known results that no one had identified because the knowledge was compartmentalized. The magnetic materials existed in thousands of papers, each read by specialists in their subfield. The database existed in principle — every entry was published. The AI's contribution was not discovery but assembly. The 25 “new” materials were always in the literature. They were never in the same table.