9:00 am Friday, February 6, 2015
Math/Neuro Seminar: What does Bayes’ theorem tell us about learning at the synaptic level?
by Peter Latham (University College London) in NHB 1.720 (Norman Hackerman Building)
Organisms face a hard problem: based on noisy sensory input, they must set a large number of synaptic weights. However, they do not receive enough information in their lifetime to learn the correct, or optimal, weights (i.e., the weights that ensure the circuit, system, and ultimately organism functions as effectively as possible). Instead, the best they could possibly do is compute a probability distribution over the optimal weights. Based on this observation, we hypothesize that synapses represent probability distributions over weights, in contrast to the widely held belief that they represent point estimates. From this hypothesis, we derive learning rules for supervised, reinforcement, and unsupervised learning. These rules introduce a new feature: the more uncertain the brain is about the optimal weight of a synapse, the more plastic that synapse is. This is consistent with current data, and it yields several testable predictions. This talk is by a potential senior candidate.
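The abstract does not spell out the learning rules, but its key feature, that plasticity scales with uncertainty, can be illustrated with a minimal sketch. The sketch below is an assumption on my part, not the speaker's derivation: a diagonal Kalman-filter-style supervised rule for a linear neuron, in which each synapse stores a Gaussian belief (mean and variance) over its optimal weight, and the per-synapse learning rate grows with that variance.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: a linear neuron with n inputs; the "optimal"
# weights w_true are unknown to the learner.
n = 10
w_true = rng.normal(size=n)
noise_var = 0.5  # variance of the teacher-signal noise (assumed known)

# Each synapse stores a Gaussian belief over its optimal weight:
# a mean mu[i] and a variance var[i] (its uncertainty).
mu = np.zeros(n)
var = np.ones(n)

for t in range(1000):
    x = rng.normal(size=n)                              # presynaptic activity
    y = w_true @ x + rng.normal(scale=noise_var ** 0.5) # noisy teacher signal

    # Predicted output and its total variance under the current belief
    # (independent, diagonal-posterior approximation).
    y_hat = mu @ x
    s = x @ (var * x) + noise_var

    # Kalman-style gain: the effective learning rate of synapse i scales
    # with its own uncertainty var[i] -- uncertain synapses are more plastic.
    k = var * x / s
    mu += k * (y - y_hat)    # move the mean toward reducing the error
    var *= 1.0 - k * x       # uncertainty shrinks as evidence accumulates

print("mean squared weight error:", np.mean((mu - w_true) ** 2))
```

In this toy version, a synapse with large `var[i]` takes large update steps, and its variance decays as observations accumulate, so plasticity falls off exactly where the belief becomes confident, which is the qualitative prediction the abstract describes.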