When a small part of the input changes, recomputing the entire output is wasteful. Incremental computation — updating only what changed — can yield asymptotic speedups, reducing work from the size of the input to the size of the change. But the technique is domain-specific: incremental relational algebra, incremental linear algebra, and incremental tree operations each require their own machinery. A change to a database row propagates differently from a change to a matrix entry. Generalizing incrementalization across data types seems to require treating domain-specific operations as black boxes, which destroys the fine-grained efficiency that makes incrementalization worthwhile.
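The core idea can be seen in a minimal sketch (all names here are invented for illustration, not from the paper): maintaining a running sum, a full reevaluation walks the whole input, while change propagation touches only the delta.

```python
def full_sum(xs):
    """Full reevaluation: O(n) work on every change."""
    return sum(xs)

def incremental_update(current_sum, old_value, new_value):
    """Change propagation: O(1) work per changed entry."""
    return current_sum - old_value + new_value

xs = [3, 1, 4, 1, 5]
s = full_sum(xs)                      # 14, computed once up front
xs[2] = 9                             # a single entry changes
s = incremental_update(s, 4, 9)       # propagate only the delta
assert s == full_sum(xs)              # matches recomputing from scratch
```

The gap between the two costs is exactly what domain-specific incremental engines exploit, each with machinery tailored to its own data type.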
Gratzer, Koppel, and Ostermann (arXiv:2602.20866) resolve this with DeCo — a core calculus that supports user-defined data types generically while incrementalizing their operations at fine granularity. The key: decompose data types into a generic representation that the incrementalization engine understands, apply change propagation at that level, then recompose. The decomposition is the bridge between genericity and specificity.
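A toy analogy of the decompose/propagate/recompose pipeline, assuming a matrix as the domain type (the function names and the keyed-cell representation are invented for illustration; DeCo's actual calculus is far richer than this sketch):

```python
from dataclasses import dataclass

@dataclass
class Matrix:
    rows: int
    cols: int
    data: list  # row-major entries

def decompose(m: Matrix) -> dict:
    """Lower the domain type into a generic keyed representation."""
    return {(i, j): m.data[i * m.cols + j]
            for i in range(m.rows) for j in range(m.cols)}

def propagate(cells: dict, delta: dict) -> dict:
    """Apply fine-grained changes at the generic level."""
    return {**cells, **delta}

def recompose(cells: dict, rows: int, cols: int) -> Matrix:
    """Raise the generic representation back to the domain type."""
    return Matrix(rows, cols, [cells[(i, j)]
                               for i in range(rows) for j in range(cols)])

m = Matrix(2, 2, [1, 2, 3, 4])
delta = {(0, 1): 9}  # one entry changes
changed = recompose(propagate(decompose(m), delta), 2, 2)
assert changed.data == [1, 9, 3, 4]
```

The point of the middle layer is that `propagate` never needs to know it is operating on a matrix: the same generic step would serve a dictionary, a tree, or a relation once decomposed.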
The formal verification in Lean proves that incremental execution computes exactly the same result as full reevaluation. The guarantee is not approximate — it is exact. The case studies span linear algebra, relational algebra, dictionaries, trees, and conflict-free replicated data types. Each domain gets fine-grained change propagation without any domain-specific incrementalization code.
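The exactness property is the statement that incremental and full execution coincide on every input and every edit. The paper proves this in Lean; the randomized check below merely illustrates what the statement says, on an invented word-count example:

```python
import random
from collections import Counter

def full_count(words):
    """Full reevaluation: recount every word."""
    return Counter(words)

def apply_delta(counts, removed, added):
    """Incrementally maintain the counts under a small edit."""
    out = Counter(counts)
    out.subtract(Counter(removed))
    out.update(Counter(added))
    return +out  # drop zero-count entries

random.seed(0)
words = [random.choice("abc") for _ in range(100)]
counts = full_count(words)
for _ in range(50):
    i = random.randrange(len(words))
    old, new = words[i], random.choice("abc")
    words[i] = new
    counts = apply_delta(counts, [old], [new])
    # exactness: incremental result equals full reevaluation
    assert counts == full_count(words)
```

A dynamic check like this can only sample the property; the Lean development establishes it for all programs in the calculus.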
The general observation: when a technique that works powerfully in specific domains seems to resist generalization, the obstacle is often the representation, not the technique. Decompose the specific domains into a common intermediate structure, apply the technique there, and recompose. The generalization works through the representation, not around it. The middle layer — the generic decomposition — is doing the intellectual work that was previously done separately in each domain.