Simulating clouds directly would take roughly one hundred billion times more computing power than current climate models have.
Current models resolve the atmosphere at roughly 25 kilometers; each grid cell covers some 600 square kilometers, about the area of a very small country. Clouds form, evolve, and dissipate at scales of meters to hundreds of meters. To simulate cloud physics directly, a global climate model would need grid cells roughly one hundred meters on a side, and finer still for the shallowest clouds. The ratio of 25 km to 100 m is 250; cube it for the three spatial dimensions, then multiply by roughly another factor of 250 because the timestep must shrink along with the grid, and the cost is already billions of times today's. Pushing the grid spacing toward the tens of meters at which low clouds actually live is what opens the full hundred-billion-times compute gap.
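A back-of-envelope version of that scaling, treating grid spacing as the only cost driver (a rough sketch, not a careful accounting of any particular model's numerics):

```python
# Rough scaling of compute cost with grid spacing: refining the grid by a
# factor r multiplies the number of cells by r**3 (three spatial dimensions)
# and forces roughly r times more timesteps (CFL condition), so cost ~ r**4.
# This ignores vertical-grid choices and numerics; order of magnitude only.
def compute_gap(coarse_dx_m: float, fine_dx_m: float) -> float:
    ratio = coarse_dx_m / fine_dx_m
    return ratio**3 * ratio

print(f"{compute_gap(25_000, 100):.1e}")  # ~3.9e+09 at 100 m spacing
print(f"{compute_gap(25_000, 50):.1e}")   # ~6.3e+10, near the hundred-billion figure
print(f"{compute_gap(25_000, 25):.1e}")   # ~1.0e+12 at the scale of the smallest clouds
```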
This matters because clouds are the largest source of uncertainty in climate projections. Low clouds reflect sunlight and cool the planet. High clouds trap heat and warm it. The net effect depends on how cloud fraction, altitude, thickness, and droplet size change as the planet warms — and these quantities are governed by small-scale turbulence and microphysics that current models cannot resolve. Instead, models parameterize clouds: they use simplified equations that approximate the statistical effect of clouds within each 25 km grid cell without simulating the clouds themselves.
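To make "parameterize" concrete, here is a minimal sketch of one classic style of scheme, a Sundqvist-type closure that diagnoses cloud fraction from nothing but the grid-mean relative humidity; the critical-humidity threshold is illustrative rather than taken from any specific model:

```python
import numpy as np

def cloud_fraction_sundqvist(rh, rh_crit=0.8):
    """Diagnose grid-cell cloud fraction from grid-mean relative humidity.

    A Sundqvist (1989)-style closure: no cloud below a critical relative
    humidity, full cover at saturation, a smooth curve in between. The
    critical value of 0.8 is illustrative; real schemes tune it (and much
    else) by level and by cloud regime.
    """
    rh = np.clip(rh, 0.0, 1.0)
    frac = 1.0 - np.sqrt(np.maximum(0.0, (1.0 - rh) / (1.0 - rh_crit)))
    return np.where(rh >= rh_crit, np.clip(frac, 0.0, 1.0), 0.0)

# A grid-mean humidity of 90% maps to roughly 29% cloud cover, whether that
# moisture sits in one towering cumulus or a thin stratus deck spread across
# the cell -- exactly the sub-grid detail the scheme cannot see.
print(cloud_fraction_sundqvist(np.array([0.5, 0.8, 0.9, 1.0])))
```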
These parameterizations are the source of the disagreement between models. Different research groups parameterize clouds differently, and the differences propagate into divergent predictions for equilibrium climate sensitivity — how much warming results from a doubling of atmospheric CO2. The currently assessed likely range, 2.5°C to 4°C, is dominated by cloud uncertainty. The models agree on the physics of radiation, convection, and ocean circulation. They disagree on clouds.
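The arithmetic behind that dominance is worth seeing once: equilibrium warming is roughly the forcing from doubled CO2 divided by the net feedback, and the net feedback is a small residual of large opposing terms, so a modest uncertainty in the cloud term swings the answer by more than a degree. The values below are rounded, illustrative numbers chosen to be in the vicinity of published assessments, not quoted from any of them:

```python
# Equilibrium warming ~ (forcing from doubled CO2) / (-net feedback).
# The net feedback is a small residual of large terms, so the cloud term,
# though modest, moves the answer a lot. Illustrative values only,
# in W/m^2/K (forcing in W/m^2).
F_2XCO2 = 3.9              # forcing from doubling CO2
PLANCK = -3.2              # blackbody (Planck) response
WATER_VAPOUR_LAPSE = 1.2   # combined water-vapour and lapse-rate feedbacks
SURFACE_ALBEDO = 0.35      # snow and ice albedo feedback

for cloud in (0.1, 0.4, 0.7):  # a plausible span for the cloud feedback
    net_feedback = PLANCK + WATER_VAPOUR_LAPSE + SURFACE_ALBEDO + cloud
    ecs = F_2XCO2 / -net_feedback
    print(f"cloud feedback {cloud:+.1f} W/m^2/K  ->  ECS {ecs:.1f} degC")
```

With these placeholder numbers, the cloud term alone carries the sensitivity from about 2.5°C to about 4°C, which is the shape of the spread described above.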
Two AI-based approaches are attempting to bridge the gap without waiting for the hundred-billion-times compute increase. Tapio Schneider's CLIMA project trains neural networks on high-resolution cloud simulations and embeds them as parameterizations in a coarser global model — physics-informed machine learning that preserves conservation laws. Chris Bretherton's ACE2 is more aggressive: it trains a neural network directly on observational data and climate model output, learning the mapping from atmospheric state to atmospheric tendency without explicit physics constraints.
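For the physics-scaffolded route, the key move is that the learned piece lives inside a host model that can police it. The function below is a hypothetical sketch of the simplest possible version of that policing, not CLIMA code: a learned sub-grid tendency is corrected so that it redistributes a quantity within a column without changing the column's mass-weighted total.

```python
import numpy as np

def apply_learned_tendency(column_state, learned_tendency, layer_masses):
    """Apply a machine-learned sub-grid tendency to a model column while
    enforcing one toy constraint: the mass-weighted column total must not
    change, i.e. the scheme may redistribute the quantity but not create it.

    A sketch only; real hybrid schemes enforce energy, mass, and momentum
    budgets in far more careful ways.
    """
    # How much the raw learned tendency would spuriously add to the column.
    imbalance = np.sum(layer_masses * learned_tendency) / np.sum(layer_masses)
    corrected = learned_tendency - imbalance  # remove the excess uniformly
    return column_state + corrected

# One 10-layer column with equal layer masses: whatever the (here random,
# network-free) "learned" tendency says, the corrected update leaves the
# column total untouched.
masses = np.ones(10)
state = np.zeros(10)
raw_tendency = np.random.default_rng(1).normal(size=10)
new_state = apply_learned_tendency(state, raw_tendency, masses)
print(np.sum(masses * new_state))  # ~0.0 up to floating-point error
```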
The approaches represent a genuine methodological split. Physics-scaffolded models are constrained to respect known conservation laws and boundary conditions, which limits their ability to represent processes the physics doesn't capture. Data-driven models are unconstrained, which gives them flexibility but removes the guarantee that they conserve energy, mass, or momentum. The field does not know which approach will produce more reliable projections, because reliability cannot be tested against the future.
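The data-driven route, reduced to a toy: learn the state-to-tendency map from paired samples and roll it forward, with nothing in the loop to enforce a budget. A linear least-squares fit stands in for the neural network here; none of this reflects ACE2's actual architecture, training data, or resolution.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "truth": a linear system standing in for the atmosphere, built so that
# the sum of the state (our stand-in for a conserved quantity) never changes.
n_state, n_samples = 8, 500
A_true = rng.normal(scale=0.05, size=(n_state, n_state))
A_true -= A_true.mean(axis=0, keepdims=True)  # zero column sums: true dynamics conserve the total

states = rng.normal(size=(n_samples, n_state))
tendencies = states @ A_true.T + 0.1 * rng.normal(size=(n_samples, n_state))

# "Training": fit the state -> tendency map from paired samples, with no
# physics in the loop (plain least squares standing in for a neural network).
coeffs, *_ = np.linalg.lstsq(states, tendencies, rcond=None)
A_learned = coeffs.T

# "Inference": step the emulator forward by adding predicted tendencies.
# Nothing in the fit forces the learned map to respect the conservation the
# true system obeys, so any drift in the total goes uncorrected.
x = rng.normal(size=n_state)
total_before = x.sum()
for _ in range(50):
    x = x + A_learned @ x
print(f"max operator error: {np.abs(A_learned - A_true).max():.4f}")
print(f"total before / after rollout: {total_before:.3f} / {x.sum():.3f}")
```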
The cloud gap connects to the ocean darkening problem: if three-kilometer resolution cannot capture phytoplankton-cloud-albedo feedbacks, then the feedback loops that amplify ocean warming are invisible to the models that project it. The structural limitation is not that models are wrong but that they cannot see the mechanisms that would make them right. The hundred-billion-times gap is not just a computational inconvenience. It is a blind spot in the instrument we use to predict our future.