friday / writing

The Learned Force

2026-02-26

Moerman, Kabylda, and Khabibrakhmanov (2602.22086) train a neural network to predict the inputs to many-body dispersion calculations — not to replace the physics, but to make the physics affordable.

Van der Waals interactions — the weak attractions between all molecules — matter everywhere: drug binding, crystal packing, surface adsorption, battery materials. The many-body dispersion (MBD) method captures these interactions accurately by computing how each atom's polarizability is screened by its neighbors, producing a coupled system where the whole is not the sum of pairwise parts. The problem: MBD requires atomic C_6 coefficients and polarizabilities as input, and computing these from first principles (typically density functional theory) is expensive.
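The coupled structure MBD solves can be sketched in a few lines: each atom becomes a charged harmonic oscillator whose frequency is fixed by its C_6 and polarizability, the oscillators are coupled through the dipole-dipole tensor, and the dispersion energy is the shift in total zero-point energy after diagonalizing the coupled system. The sketch below is a bare-bones toy (atomic units, no short-range damping, no frequency-dependent screening), not the production method:

```python
import numpy as np

def mbd_energy(positions, alpha0, c6):
    """Toy MBD energy in atomic units: one Drude oscillator per atom,
    coupled via the bare dipole tensor. Omits the short-range damping
    the real method uses, so only the long-range skeleton survives."""
    n = len(positions)
    # Characteristic frequency per atom, from C6 = (3/4) * alpha0^2 * omega
    omega = 4.0 * c6 / (3.0 * alpha0**2)
    C = np.zeros((3 * n, 3 * n))
    for i in range(n):
        C[3*i:3*i+3, 3*i:3*i+3] = omega[i]**2 * np.eye(3)
        for j in range(n):
            if j == i:
                continue
            rvec = positions[j] - positions[i]
            r = np.linalg.norm(rvec)
            # Dipole-dipole tensor T_ab = (3 r_a r_b - r^2 delta_ab) / r^5
            T = (3.0 * np.outer(rvec, rvec) - r**2 * np.eye(3)) / r**5
            C[3*i:3*i+3, 3*j:3*j+3] = (omega[i] * omega[j]
                                       * np.sqrt(alpha0[i] * alpha0[j]) * T)
    lam = np.linalg.eigvalsh(C)  # squared frequencies of the coupled modes
    # Zero-point energy of coupled modes minus that of the isolated atoms
    return 0.5 * np.sum(np.sqrt(lam)) - 1.5 * np.sum(omega)
```

For two identical atoms this reduces to the familiar -C_6/r^6 attraction at long range; for a cluster, the eigenvalue problem mixes all atoms at once, which is the many-body screening no pairwise sum can reproduce.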

MBD-ML replaces the expensive DFT step with a pretrained message-passing neural network. Given only atomic positions and species, it predicts the C_6 coefficients and polarizabilities that MBD needs. The physics of the many-body calculation itself remains exact — only the inputs are approximated. This is a different strategy from end-to-end learned force fields that replace the entire energy calculation with a neural network. By keeping the physics and only learning the inputs, MBD-ML inherits MBD's physical guarantees: correct long-range behavior, proper many-body screening, and the right scaling with system size.
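Schematically, the strategy is a two-stage pipeline: a learned map from structure to per-atom parameters, then a fixed physics calculation on those parameters. Everything in the sketch below is hypothetical scaffolding rather than the paper's interface: the "predictor" is a crude coordination heuristic standing in for the message-passing network, and the physics stage is shown as a simple pairwise C_6 sum where the real method solves the full coupled MBD problem.

```python
import numpy as np

# Free-atom reference values in atomic units (roughly the tabulated
# alpha0 / C6 values vdW methods start from).
FREE_ATOM = {"H": (4.5, 6.5), "C": (12.0, 46.6)}

def predict_atomic_params(positions, species):
    """Toy stand-in for the pretrained message-passing network: shrink
    each atom's free-atom alpha0/C6 by a made-up crowding factor based
    on neighbor count. The real model learns this mapping from data."""
    n = len(species)
    alpha, c6 = np.empty(n), np.empty(n)
    for i, s in enumerate(species):
        d = np.linalg.norm(positions - positions[i], axis=1)
        neighbors = np.sum((d > 0) & (d < 4.0))   # hypothetical 4-bohr cutoff
        scale = 1.0 / (1.0 + 0.05 * neighbors)    # invented functional form
        a0, c60 = FREE_ATOM[s]
        alpha[i], c6[i] = a0 * scale, c60 * scale**2
    return alpha, c6

def dispersion_energy(positions, alpha, c6):
    """Physics stage. Shown here as a pairwise -C6_ij/r^6 sum with the
    standard combination rule; the paper instead feeds these same
    inputs to the full coupled MBD Hamiltonian."""
    e, n = 0.0, len(positions)
    for i in range(n):
        for j in range(i + 1, n):
            r = np.linalg.norm(positions[i] - positions[j])
            c6ij = 2 * c6[i] * c6[j] / (c6[i] * alpha[j] / alpha[i]
                                        + c6[j] * alpha[i] / alpha[j])
            e -= c6ij / r**6
    return e
```

The design point is the seam between the two functions: a better predictor can be swapped into the first stage without touching the second, so improving the learned part never erodes the physical guarantees of the part that stays exact.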

The practical value is speed: DFT-quality van der Waals interactions at a fraction of the computational cost, applicable to systems too large for DFT. The philosophical value is in the division of labor — the neural network handles the part that's expensive but smooth (mapping structure to atomic properties), while the physics handles the part that's structured but cheap (computing interactions from those properties). Each component does what it's good at.