Equivariant neural networks build symmetry into their architecture — if the input is rotated, the output transforms predictably. This reduces the number of parameters and the amount of training data needed. But the constraint might also reduce expressiveness: by forcing the network to respect symmetry, you might prevent it from representing functions that a generic network could capture.
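To make the symmetry property concrete, here is a minimal numpy sketch (the functions are toy illustrations, not architectures from the paper): the Euclidean norm is rotation-invariant, and a uniform scaling commutes with rotation, making it rotation-equivariant.

```python
import numpy as np

rng = np.random.default_rng(0)

# A 2D rotation matrix, an element of SO(2).
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

x = rng.standard_normal(2)

# Invariance: the Euclidean norm ignores rotations entirely.
assert np.isclose(np.linalg.norm(R @ x), np.linalg.norm(x))

# Equivariance: f(x) = 2x commutes with rotation, so rotating
# the input rotates the output in exactly the same way.
f = lambda v: 2.0 * v
assert np.allclose(f(R @ x), R @ f(x))
```

An equivariant architecture guarantees relations like these for every function it can represent, rather than hoping a generic network learns them from data.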
Siegel and colleagues (arXiv:2602.20370) prove that it doesn't. For equivariant target functions, precisely the functions the architecture is designed to learn, equivariant architectures achieve the same approximation rates as generic ReLU networks of the same size, with matching quantitative bounds. Deep Sets, Transformers, Sumformers, and networks jointly invariant to permutations and rigid motions all match generic MLPs on their target function classes.
The result holds across multiple prominent architectures and symmetry groups. Permutation invariance, permutation equivariance, SE(3) invariance — in each case, the symmetry constraint costs nothing in expressiveness over the relevant function class.
The mechanism: symmetry reduces the function class and the architecture simultaneously. A generic MLP can represent any function, but on equivariant targets it wastes capacity on symmetry-breaking components the target class never contains. The equivariant architecture removes exactly that wasted capacity, and what remains is sufficient: the shrinkage of the representable function class exactly matches the shrinkage of the architecture.
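A Deep Sets-style model makes this visible: apply a shared map to each set element, sum-pool, then read out. A minimal numpy sketch (weights and sizes here are arbitrary illustrations, not taken from the paper) shows that permutation invariance holds by construction, so no parameters are spent representing order-dependent behavior.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative Deep Sets-style model: phi per element, sum, then rho.
# Weight shapes are arbitrary and untrained; this only demonstrates structure.
W_phi = rng.standard_normal((4, 3))   # per-element features: R^3 -> R^4
W_rho = rng.standard_normal((1, 4))   # readout after pooling: R^4 -> R

def deep_sets(X):
    """X has shape (n, 3): a set of n points in R^3."""
    H = np.maximum(X @ W_phi.T, 0.0)  # phi, shared across elements (ReLU)
    pooled = H.sum(axis=0)            # sum pooling discards element order
    return W_rho @ pooled             # rho on the pooled summary

X = rng.standard_normal((5, 3))
perm = rng.permutation(5)

# Invariance is architectural, not learned: any permutation of the
# input rows yields the identical output.
assert np.allclose(deep_sets(X), deep_sets(X[perm]))
```

Every parameter in `W_phi` and `W_rho` shapes the invariant part of the function; a generic MLP of the same width would spend some of its parameters on permutation-breaking components that an invariant target never requires.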
The general observation: a constraint that removes only irrelevant degrees of freedom costs nothing. Symmetry in the architecture eliminates the capacity to represent functions that violate the symmetry, functions you never wanted to learn. The constraint is not a sacrifice; it prunes waste. When the constraint matches the structure of the target, it is free.