A Bayesian model for stochastic self-configuration of neural circuits
14:00 - 15:00
Graz University of Technology, Austria
Max-Planck-Institut für Dynamik und Selbstorganisation (MPIDS)
Experimental data suggest that neural circuits configure their synaptic connectivity for a given computational task. They also point to spine dynamics as an important underlying mechanism, and they show that the stochastic component of synaptic plasticity is surprisingly strong, accounting for up to 50% of the observed spine motility. This finding seemingly conflicts with previous models of synaptic plasticity that describe learning as a purely deterministic process. We propose a new model that uses results from statistical physics to capture the stochastic component of spine dynamics. We show that inherently stochastic features of synaptic plasticity and spine motility enable cortical networks of neurons to carry out probabilistic inference by sampling from a posterior distribution of network configurations. This model provides a viable alternative to existing models in which synaptic parameters converge to maximum-likelihood values. It explains how priors on weight distributions and connection probabilities can be merged optimally with learned experience, and how cortical networks can generalize learned information to novel experiences.

This framework also suggests a new model for reward-gated network plasticity, in which the common policy-gradient paradigm is replaced by continuously ongoing stochastic policy search (sampling) from a posterior distribution of network configurations. This model can explain the experimentally observed capability of neural circuits to configure themselves for a given task and to compensate automatically for changes in the network or task. We also show that experimental data on dopamine-modulated spine dynamics can be modeled within this theoretical framework, and that a strong stochastic component of synaptic plasticity is essential for its performance.
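The abstract does not spell out the sampling dynamics, but posterior sampling of parameters of the kind described here is commonly realized with Langevin dynamics: a deterministic drift up the gradient of the log posterior plus a strong stochastic term, whose stationary distribution is the posterior itself. The following is a minimal illustrative sketch (not the speaker's actual model): a single synaptic "parameter" with a Gaussian prior and Gaussian observations, so the exact posterior is known and the sampler can be checked against it. All variable names and numerical values are hypothetical.

```python
import math
import random

random.seed(0)

prior_var = 1.0           # prior: theta ~ N(0, prior_var)
noise_var = 0.5           # likelihood: each observation y ~ N(theta, noise_var)
data = [0.8, 1.1, 0.9, 1.2]

# Conjugate Gaussian posterior (analytic reference to compare against)
post_prec = 1.0 / prior_var + len(data) / noise_var
post_var = 1.0 / post_prec
post_mean = post_var * (sum(data) / noise_var)

def grad_log_posterior(theta):
    """d/dtheta of [log prior + log likelihood] for the Gaussian model above."""
    g = -theta / prior_var
    g += sum((y - theta) / noise_var for y in data)
    return g

# Euler-Maruyama discretization of Langevin dynamics:
#   theta <- theta + eta * grad + sqrt(2 * eta) * N(0, 1)
# The drift alone would converge to the posterior mode (a deterministic
# learning rule); the noise term turns convergence into ongoing sampling.
eta = 0.01
theta = 0.0
samples = []
for step in range(200_000):
    theta += eta * grad_log_posterior(theta) + math.sqrt(2 * eta) * random.gauss(0, 1)
    if step > 10_000:     # discard burn-in before the chain reaches stationarity
        samples.append(theta)

emp_mean = sum(samples) / len(samples)
emp_var = sum((s - emp_mean) ** 2 for s in samples) / len(samples)
print(f"analytic posterior: mean={post_mean:.3f}, var={post_var:.3f}")
print(f"Langevin samples:   mean={emp_mean:.3f}, var={emp_var:.3f}")
```

The parameter never settles to a fixed value; it wanders through the high-posterior region indefinitely, which is the sense in which a strong stochastic component is not noise to be averaged away but the mechanism of inference itself.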