Pathan (2602.22152) introduces Stream Neural Networks — an architecture designed for irreversible information streams where you can't replay the data.
Most neural networks assume you can iterate over the training set multiple times (epochs). This implicitly requires reversible access to the data: the past can be revisited at will. But real-world environments — sensor feeds, market data, conversation — present information as a one-way stream. Once a datum passes, it's gone. Conventional architectures trained on streams degrade into reactive filters: they respond to the current input but lose coherence over long horizons, because they have no mechanism for accumulating temporal structure without rehearsal.
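The difference in access patterns can be made concrete. This is an illustrative sketch, not code from the paper; the function names (`train_epochs`, `train_stream`) are mine.

```python
def train_epochs(dataset, update, epochs=3):
    # Conventional training: the dataset is stored, so we can iterate again.
    for _ in range(epochs):
        for x in dataset:
            update(x)

def train_stream(stream, update):
    # Streaming: each datum is consumed exactly once; no replay is possible.
    for x in stream:
        update(x)

seen = []
train_epochs([1, 2, 3], seen.append, epochs=2)
# seen is now [1, 2, 3, 1, 2, 3] — every datum visited twice

stream = iter([1, 2, 3])          # a one-way iterator: exhausted after one pass
once = []
train_stream(stream, once.append)
# once is [1, 2, 3]; a second pass over `stream` would yield nothing
```

The stream case is the constraint the architecture is built for: whatever the network wants to remember about `1` must already be in its state by the time `2` arrives.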
Stream Neural Networks maintain persistent temporal state that evolves with each input but never requires replaying old inputs. The key departure: learning happens in a single forward pass through time. No backpropagation through time, no experience replay buffer, no stored trajectories. The network's state carries a compressed summary of everything it has seen, and the compression is part of the learning — not a separate memory system bolted on.
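One way to picture single-pass learning with persistent state is a local update rule that touches only the current input and the carried state. The paper's actual rule is not reproduced here; this is a minimal sketch using a leaky state accumulator plus a delta-rule weight update, with all names and constants my own.

```python
class StreamLearner:
    """Toy single-pass learner: no stored inputs, no backprop through time."""

    def __init__(self, decay=0.9, lr=0.1):
        self.state = 0.0   # persistent compressed summary of everything seen
        self.w = 0.0       # readout weight, adapted online
        self.decay = decay
        self.lr = lr

    def step(self, x, target):
        # Fold the new input into the persistent state (the "compression").
        self.state = self.decay * self.state + (1 - self.decay) * x
        # Predict from state, then apply a purely local delta-rule update:
        # only the current error and current state are needed, never the past.
        pred = self.w * self.state
        self.w += self.lr * (target - pred) * self.state
        return pred

learner = StreamLearner()
for _ in range(200):
    pred = learner.step(1.0, 1.0)   # constant stream: target equals input
# pred approaches 1.0, learned in one pass with O(1) memory
```

The point of the sketch is the interface, not the rule: `step` never rereads old inputs, so the same loop runs unchanged on a truly irreversible stream.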
The architecture separates fast adaptation (responding to the current input) from slow integration (updating the persistent state that shapes future responses). This mirrors the biological distinction between working memory and long-term potentiation, though the mechanism is different.
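The two timescales can be sketched numerically as a pair of leaky accumulators with very different rates. The rates and variable names below are illustrative assumptions, not values from the paper.

```python
def fast_slow_step(fast, slow, x, fast_rate=0.5, slow_rate=0.01):
    # Fast adaptation: tracks the current input closely (working-memory-like).
    fast = (1 - fast_rate) * fast + fast_rate * x
    # Slow integration: drifts toward the fast trace, shaping responses long
    # after the input that caused it has passed (potentiation-like).
    slow = (1 - slow_rate) * slow + slow_rate * fast
    return fast, slow

fast, slow = 0.0, 0.0
for t in range(60):
    x = 1.0 if t < 10 else 0.0      # a brief pulse of input, then silence
    fast, slow = fast_slow_step(fast, slow, x)
# fast has forgotten the pulse almost entirely; slow still carries its trace
```

The design choice this illustrates: the fast variable keeps responses current, while the slow variable is what lets behavior stay coherent over horizons much longer than any single input's lifetime.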
The practical test is whether single-pass learning can match multi-epoch training on benchmarks where multi-epoch training is possible. If it can, the stream paradigm is strictly more general — it works everywhere epochs work, plus environments where epochs are impossible. If it can't, the question becomes how large the gap is and whether it matters less than the ability to operate in truly streaming environments.