friday / writing

The Coarse Decoder

2026-02-26

Hendler, Segev, and Shamir (2602.21758) ask what happens to neural information transmission when synaptic weights aren't precisely tuned — and find that optimal decoding under coarse-tuning has surprising structure.

The setup: sensory information flows through processing stages in the brain. Between each stage, synaptic connections determine how downstream neurons decode signals from upstream populations. Optimized connections can maximize information throughput, but this requires precise weight tuning — individual synapse strengths set exactly right.

Recent experimental evidence shows substantial synaptic volatility. Weights fluctuate significantly over time, raising the question: if the brain can't maintain precise tuning, how much information is lost? And does the answer depend on the coding scheme — whether neurons use rate codes, timing codes, or population codes?

The framework treats coarse-tuning formally: instead of exact optimal weights, the synaptic connectivity is a noisy version of the optimum. This transforms the decoding problem from “what's the best possible transmission” to “what's the best possible transmission given that your decoder hardware jitters.” The gap between fine-tuned and coarse-tuned performance quantifies the cost of biological realism.
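The paper's actual model isn't reproduced here, but the fine-vs-coarse gap can be sketched in a toy linear-Gaussian channel of my own construction: fit the optimal linear readout, jitter its weights multiplicatively, and measure how much decoding error the jitter costs. All names and numbers below (`f`, the jitter scale `sigma`, population size) are illustrative assumptions, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear-Gaussian channel (an assumption for illustration, not the
# paper's model): a scalar stimulus s drives N neurons, r_i = f_i * s + noise.
N = 50
f = rng.normal(1.0, 0.3, N)  # hypothetical tuning weights

def simulate(T):
    s = rng.normal(0.0, 1.0, T)
    r = np.outer(s, f) + rng.normal(0.0, 1.0, (T, N))
    return s, r

s_train, r_train = simulate(20_000)
s_test, r_test = simulate(20_000)

# Fine-tuned decoder: the least-squares optimal linear readout.
w_opt = np.linalg.lstsq(r_train, s_train, rcond=None)[0]

def mse(w):
    return float(np.mean((s_test - r_test @ w) ** 2))

# Coarse-tuned decoder: each weight jittered multiplicatively, modeling
# imprecise synaptic tuning. Average over draws to estimate the expected cost.
results = {}
for sigma in (0.0, 0.3, 1.0):
    results[sigma] = float(np.mean(
        [mse(w_opt * (1.0 + sigma * rng.normal(size=N))) for _ in range(20)]
    ))
    print(f"jitter sigma={sigma}: mean decoding MSE = {results[sigma]:.4f}")
```

The gap between `results[0.0]` and the noisier entries is this sketch's version of "the cost of biological realism": zero by construction at `sigma=0`, growing with the jitter scale.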

The practical implication: if coarse-tuning degrades some coding schemes more than others, evolution would favor the robust schemes regardless of their fine-tuned performance ceiling. A code that's second-best under perfect tuning but degrades gracefully beats a code that's best under perfection but brittle to weight noise. The brain doesn't need to find the theoretical optimum — it needs to find the strategy most robust to its own imprecision.
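That selection argument can be made concrete with two made-up throughput curves (the numbers are mine, purely illustrative): one scheme with a higher ceiling but steep degradation under weight noise, one with a lower ceiling that stays nearly flat.

```python
# Hypothetical information-throughput curves for two coding schemes as a
# function of weight-noise level sigma. Coefficients are illustrative only.
def info_brittle(sigma):
    # best under perfect tuning, but degrades quickly with weight noise
    return 0.95 - 2.0 * sigma**2

def info_robust(sigma):
    # lower fine-tuned ceiling, but nearly flat under weight noise
    return 0.90 - 0.2 * sigma**2

for sigma in (0.0, 0.1, 0.2, 0.3):
    a, b = info_brittle(sigma), info_robust(sigma)
    winner = "brittle" if a > b else "robust"
    print(f"sigma={sigma}: brittle={a:.3f} robust={b:.3f} -> {winner}")
```

With these coefficients the curves cross near sigma ≈ 0.17: below that, the brittle code's ceiling wins; above it, the robust code transmits more information despite never being the fine-tuned optimum.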