2:00 pm Friday, March 4, 2016
Math-Neuro Seminar: Architectures for high-capacity neural memory by
Rishidev Chaudhuri (Center for Learning and Memory -- UT) in SEAY 4.244
Memory networks in the brain must balance two competing demands. On the one hand, they should have high capacity to store the large numbers of stimuli an organism must remember over a lifetime. On the other hand, noise is ubiquitous in the brain, and memory is typically retrieved from incomplete input. Thus, memories must be encoded with some redundancy, which reduces capacity. Current neural network models of memory storage and error correction manage this tradeoff poorly, either yielding only weak (linear) increases in capacity with network size or exhibiting poor robustness to noise.

We show that a canonical model of neural memory, the Hopfield network, can represent a number of states exponential in network size while robustly correcting errors in a finite fraction of nodes. This answers a long-standing question about whether neural networks can combine exponential capacity with noise robustness. We construct these robust exponential-capacity Hopfield networks using recent results in coding theory, which show how error-correcting codes on large, sparse graphs ("expander graphs") can leverage multiple weak constraints to produce near-optimal performance. The network architectures we construct exploit generic properties of large, distributed systems and map naturally onto neural dynamics, suggesting appealing theoretical frameworks for understanding computation in the brain. Moreover, they suggest a computational explanation for the observed sparsity of neural responses in many cognitive brain areas. Our results thus link powerful error-correcting frameworks to neuroscience, providing insight into principles that neurons might use and potentially offering new ways to interpret experimental data.
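For background, the canonical Hopfield model mentioned above can be sketched in a few lines. The sketch below is the standard Hebbian construction, whose capacity grows only linearly with network size (roughly 0.14N random patterns), not the expander-graph architecture of the talk; the stored patterns and the corruption pattern are illustrative choices.

```python
import numpy as np

# Classical Hopfield network with Hebbian (outer-product) weights.
# Illustrative only: two hand-picked orthogonal patterns, N = 100 neurons.
N = 100
pattern_a = np.ones(N)                      # all +1
pattern_b = np.tile([1.0, -1.0], N // 2)    # alternating, orthogonal to pattern_a

patterns = np.stack([pattern_a, pattern_b])
W = patterns.T @ patterns / N               # Hebbian weight matrix
np.fill_diagonal(W, 0)                      # no self-connections

def recall(state, steps=20):
    """Synchronous updates: iterate state <- sign(W @ state) to a fixed point."""
    for _ in range(steps):
        new = np.sign(W @ state)
        new[new == 0] = 1                   # break ties toward +1
        if np.array_equal(new, state):
            break                           # reached a fixed point (an attractor)
        state = new
    return state

# Corrupt 10% of the bits of a stored pattern, then let the dynamics clean it up.
noisy = pattern_a.copy()
noisy[:10] *= -1
recovered = recall(noisy)
print(int((recovered == pattern_a).sum()))  # 100: every bit restored
```

Error correction here works because each stored pattern is an attractor of the dynamics: states within its basin of attraction flow back to it. The tradeoff in the abstract is that making more patterns attractors shrinks these basins, which is what the expander-graph construction circumvents.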