Publications by authors named "Tim P Vogels"

23 Publications


Towards Democratizing and Automating Online Conferences: Lessons from the Neuromatch Conferences.

Trends Cogn Sci 2021 Apr 16;25(4):265-268. Epub 2021 Feb 16.

Department of Bioengineering, University of Pennsylvania, Philadelphia, PA, USA.

Legacy conferences are costly and time consuming, and exclude scientists lacking various resources or abilities. During the 2020 pandemic, we created an online conference platform, Neuromatch Conferences (NMC), aimed at developing technological and cultural changes to make conferences more democratic, scalable, and accessible. We discuss the lessons we learned.
http://dx.doi.org/10.1016/j.tics.2021.01.007
April 2021

The Remarkable Robustness of Surrogate Gradient Learning for Instilling Complex Function in Spiking Neural Networks.

Neural Comput 2021 Jan 29:1-27. Epub 2021 Jan 29.

Centre for Neural Circuits and Behaviour, University of Oxford, Oxford OX1 3SR, U.K., and Institute of Science and Technology Austria, 3400 Klosterneuburg, Austria.

Brains process information in spiking neural networks. Their intricate connections shape the diverse functions these networks perform. Yet how network connectivity relates to function is poorly understood, and the functional capabilities of models of spiking networks are still rudimentary. The lack of both theoretical insight and practical algorithms to find the necessary connectivity poses a major impediment to both studying information processing in the brain and building efficient neuromorphic hardware systems. The training algorithms that solve this problem for artificial neural networks typically rely on gradient descent. But doing so in spiking networks has remained challenging due to the nondifferentiable nonlinearity of spikes. To avoid this issue, one can employ surrogate gradients to discover the required connectivity. However, the choice of a surrogate is not unique, raising the question of how its implementation influences the effectiveness of the method. Here, we use numerical simulations to systematically study how essential design parameters of surrogate gradients affect learning performance on a range of classification problems. We show that surrogate gradient learning is robust to different shapes of underlying surrogate derivatives, but the choice of the derivative's scale can substantially affect learning performance. When we combine surrogate gradients with suitable activity regularization techniques, spiking networks perform robust information processing at the sparse activity limit. Our study provides a systematic account of the remarkable robustness of surrogate gradient learning and serves as a practical guide to model functional spiking neural networks.
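
For readers who want a concrete sense of the method, here is a minimal sketch (not code from the paper) of a surrogate gradient: the forward pass keeps the hard spike threshold, while the backward pass substitutes a fast-sigmoid pseudo-derivative whose scale parameter beta plays the role of the derivative scale studied above. The single-weight toy setup and all constants are illustrative assumptions.

```python
import numpy as np

def spike(v, threshold=1.0):
    """Forward pass: non-differentiable spike nonlinearity (Heaviside step)."""
    return (v >= threshold).astype(float)

def surrogate_derivative(v, threshold=1.0, beta=10.0):
    """Fast-sigmoid surrogate for d(spike)/dv; beta sets the scale (steepness)."""
    return 1.0 / (beta * np.abs(v - threshold) + 1.0) ** 2

# Toy example: one input x with weight w driving a membrane potential v = w * x.
# The true gradient dL/dw is zero almost everywhere; the surrogate replaces
# d(spike)/dv with a smooth function so a learning signal can flow.
x, w, target = 0.8, 1.1, 1.0
v = w * x
out = spike(v)
loss = 0.5 * (out - target) ** 2

for beta in (1.0, 10.0, 100.0):
    grad_w = (out - target) * surrogate_derivative(v, beta=beta) * x
    print(f"beta={beta:6.1f}  surrogate dL/dw = {grad_w:+.4f}")
# The surrogate's shape matters little, but its scale (beta) changes the
# magnitude of the learning signal, echoing the robustness result above.
```
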
http://dx.doi.org/10.1162/neco_a_01367
January 2021

Talking science, online.

Nat Rev Neurosci 2021 Jan;22(1):1-2

Institute of Science and Technology, Klosterneuburg, Austria.

http://dx.doi.org/10.1038/s41583-020-00408-6
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7653669
January 2021

Complementary Inhibitory Weight Profiles Emerge from Plasticity and Allow Flexible Switching of Receptive Fields.

J Neurosci 2020 Dec 9;40(50):9634-9649. Epub 2020 Nov 9.

Centre for Neural Circuits and Behaviour, University of Oxford, Oxford, OX1 3SR, United Kingdom.

Cortical areas comprise multiple types of inhibitory interneurons, with stereotypical connectivity motifs that may follow specific plasticity rules. Yet, their combined effect on postsynaptic dynamics has been largely unexplored. Here, we analyze the response of a single postsynaptic model neuron receiving tuned excitatory connections alongside inhibition from two plastic populations. Synapses from each inhibitory population change according to distinct plasticity rules. We tested different combinations of three rules: Hebbian, anti-Hebbian, and homeostatic scaling. Depending on the inhibitory plasticity rule, synapses become unspecific (flat), anticorrelated to, or correlated with excitatory synapses. Crucially, the neuron's receptive field (i.e., its response to presynaptic stimuli) depends on the modulatory state of inhibition. When both inhibitory populations are active, inhibition balances excitation, resulting in uncorrelated postsynaptic responses regardless of the inhibitory tuning profiles. Modulating the activity of a given inhibitory population produces strong correlations to either preferred or nonpreferred inputs, in line with recent experimental findings that show dramatic context-dependent changes of neurons' receptive fields. We thus confirm that a neuron's receptive field does not follow directly from the weight profiles of its presynaptic afferents. Our results show how plasticity rules in various cell types can interact to shape cortical circuit motifs and their dynamics. Neurons in sensory areas of the cortex are known to respond to specific features of a given input (e.g., specific sound frequencies), but recent experimental studies show that such responses (i.e., their receptive fields) depend on context. Inspired by the cortical connectivity, we built models of excitatory and inhibitory inputs onto a single neuron, to study how receptive fields may change on short and long time scales. We show how various synaptic plasticity rules allow for the emergence of diverse connectivity profiles and, moreover, how their dynamic interaction creates a mechanism by which postsynaptic responses can quickly change. Our work emphasizes multiple roles of inhibition in cortical processing and provides a first mechanistic model for flexible receptive fields.
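
As an illustration of the kind of rules compared above, the sketch below implements Hebbian and anti-Hebbian inhibitory plasticity on a toy rate neuron receiving fixed excitation and two plastic inhibitory populations; the rate model, learning rate, and target rate rho0 are assumptions for illustration, not the paper's simulation code.

```python
import numpy as np

rng = np.random.default_rng(0)
n_exc, n_inh, n_steps = 20, 10, 5000
eta, rho0 = 1e-3, 5.0                        # learning rate and target postsynaptic rate (a.u.)

w_exc = rng.uniform(0.5, 1.5, n_exc)         # fixed, tuned excitatory weights
w_inh_hebb = np.full(n_inh, 0.5)             # plastic inhibitory population 1
w_inh_anti = np.full(n_inh, 0.5)             # plastic inhibitory population 2

for _ in range(n_steps):
    r_exc = rng.poisson(5.0, n_exc).astype(float)   # presynaptic excitatory rates
    r_inh = rng.poisson(5.0, n_inh).astype(float)   # presynaptic inhibitory rates
    drive = w_exc @ r_exc - (w_inh_hebb + w_inh_anti) @ r_inh
    r_post = max(drive, 0.0)                        # rectified postsynaptic rate

    # Hebbian inhibitory rule: potentiate when the postsynaptic rate exceeds
    # its target, depress otherwise (pushes the neuron toward E/I balance).
    w_inh_hebb += eta * r_inh * (r_post - rho0)
    # Anti-Hebbian rule: the opposite sign of the same coincidence term.
    w_inh_anti -= eta * r_inh * (r_post - rho0)

    np.clip(w_inh_hebb, 0.0, None, out=w_inh_hebb)  # keep inhibitory weights non-negative
    np.clip(w_inh_anti, 0.0, None, out=w_inh_anti)

print("Hebbian-rule inhibitory weights:     ", w_inh_hebb.round(2))
print("anti-Hebbian-rule inhibitory weights:", w_inh_anti.round(2))
```
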
http://dx.doi.org/10.1523/JNEUROSCI.0276-20.2020
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7726533
December 2020

Training deep neural density estimators to identify mechanistic models of neural dynamics.

Elife 2020 Sep 17;9. Epub 2020 Sep 17.

Computational Neuroengineering, Department of Electrical and Computer Engineering, Technical University of Munich, Munich, Germany.

Mechanistic modeling in neuroscience aims to explain observed phenomena in terms of underlying causes. However, determining which model parameters agree with complex and stochastic neural data presents a significant challenge. We address this challenge with a machine learning tool which uses deep neural density estimators, trained using model simulations, to carry out Bayesian inference and retrieve the full space of parameters compatible with raw data or selected data features. Our method is scalable in parameters and data features and can rapidly analyze new data after initial training. We demonstrate the power and flexibility of our approach on receptive fields, ion channels, and Hodgkin-Huxley models. We also characterize the space of circuit configurations giving rise to rhythmic activity in the crustacean stomatogastric ganglion, and use these results to derive hypotheses for underlying compensation mechanisms. Our approach will help close the gap between data-driven and theory-driven models of neural dynamics.
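
To make the prior-simulate-compare logic concrete, here is a toy sketch that substitutes simple rejection sampling (ABC) for the trained neural density estimator described above; the Gaussian "simulator", summary statistic, and tolerance are illustrative assumptions rather than the authors' method, which avoids re-simulation by training an estimator once.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulator(theta, n_trials=50):
    """Toy mechanistic model: noisy observations whose mean is the parameter."""
    return rng.normal(loc=theta, scale=1.0, size=n_trials)

def summary(x):
    """Reduce raw simulated data to a low-dimensional feature (here: the mean)."""
    return x.mean()

x_obs = simulator(theta=2.0)          # "experimental" data with unknown parameter
s_obs = summary(x_obs)

# Draw parameters from the prior, simulate, and keep those whose summary
# statistic lands close to the observed one; the accepted draws approximate
# the posterior that a trained density estimator would return directly.
prior_draws = rng.uniform(-5.0, 5.0, size=20_000)
accepted = [th for th in prior_draws
            if abs(summary(simulator(th)) - s_obs) < 0.1]

posterior = np.array(accepted)
print(f"accepted {posterior.size} draws, "
      f"posterior mean = {posterior.mean():.2f}, sd = {posterior.std():.2f}")
```
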
http://dx.doi.org/10.7554/eLife.56261
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7581433
September 2020

Think: Theory for Africa.

PLoS Comput Biol 2019 Jul 11;15(7):e1007049. Epub 2019 Jul 11.

Division of Cell Biology, Department of Human Biology, Neuroscience Institute and Institute of Infectious Disease and Molecular Medicine, Faculty of Health Sciences, University of Cape Town, Cape Town, South Africa.

http://dx.doi.org/10.1371/journal.pcbi.1007049
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6622465
July 2019

Neural mechanisms of attending to items in working memory.

Neurosci Biobehav Rev 2019 Jun 26;101:1-12. Epub 2019 Mar 26.

Nuffield Department of Clinical Neurosciences, University of Oxford, OX3 9DU, United Kingdom; Department of Experimental Psychology, University of Oxford, United Kingdom.

Working memory, the ability to keep recently accessed information available for immediate manipulation, has been proposed to rely on two mechanisms that appear difficult to reconcile: self-sustained neural firing, or the opposite, activity-silent synaptic traces. Here we review and contrast models of these two mechanisms, and then show that both phenomena can co-exist within a unified system in which neurons hold information in both activity and synapses. Rapid plasticity in flexibly-coding neurons allows features to be bound together into objects, with an important emergent property being the focus of attention. One memory item is held by persistent activity in an attended or "focused" state, and is thus remembered better than other items. Other, previously attended items can remain in memory but in the background, encoded in activity-silent synaptic traces. This dual functional architecture provides a unified common mechanism accounting for a diversity of perplexing attention and memory effects that have been hitherto difficult to explain in a single theoretical framework.
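
A deliberately minimal sketch of the two coexisting substrates discussed above: one item is held in persistent activity while another is held in a decaying, activity-silent synaptic trace that a later nonspecific input can partially refresh. The time constants and the way the "ping" is applied are illustrative assumptions, not a model from the review.

```python
import numpy as np

dt, T = 1.0, 3000.0                  # ms
tau_u = 1500.0                       # decay time constant of the silent synaptic trace
steps = int(T / dt)

rate_attended = np.zeros(steps)      # persistent firing holds the focused item
trace_unattended = np.zeros(steps)   # activity-silent trace holds a background item

rate, u = 20.0, 0.8                  # conditions right after encoding (a.u.)
for t in range(steps):
    rate_attended[t] = rate          # sustained by recurrent excitation (not modeled)
    u += dt * (-u / tau_u)           # the trace decays silently, with no spiking
    if t == 2000:                    # a brief nonspecific "ping" at 2 s
        u = min(u + 0.4, 1.0)        # reactivation partially restores the trace
    trace_unattended[t] = u

print("silent trace at 1 s:", round(trace_unattended[1000], 3),
      " after the ping, at 2.5 s:", round(trace_unattended[2500], 3))
```
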
http://dx.doi.org/10.1016/j.neubiorev.2019.03.017
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6525322
June 2019

Publisher Correction: Motor primitives in space and time via targeted gain modulation in cortical networks.

Nat Neurosci 2019 Mar;22(3):504

Centre for Neural Circuits and Behaviour, University of Oxford, Oxford, UK.

In the version of this article initially published, equations (2) and (4) in the PDF erroneously displayed a curly bracket on their right-hand sides; the bracket should not have been there. The errors have been corrected in the PDF version of the article. The equations appear correctly in the HTML.
http://dx.doi.org/10.1038/s41593-018-0307-x
March 2019

Motor primitives in space and time via targeted gain modulation in cortical networks.

Nat Neurosci 2018 Dec 26;21(12):1774-1783. Epub 2018 Nov 26.

Centre for Neural Circuits and Behaviour, University of Oxford, Oxford, UK.

Motor cortex (M1) exhibits a rich repertoire of neuronal activities to support the generation of complex movements. Although recent neuronal-network models capture many qualitative aspects of M1 dynamics, they can generate only a few distinct movements. Additionally, it is unclear how M1 efficiently controls movements over a wide range of shapes and speeds. We demonstrate that modulation of neuronal input-output gains in recurrent neuronal-network models with a fixed architecture can dramatically reorganize neuronal activity and thus downstream muscle outputs. Consistent with the observation of diffuse neuromodulatory projections to M1, a relatively small number of modulatory control units provide sufficient flexibility to adjust high-dimensional network activity using a simple reward-based learning rule. Furthermore, it is possible to assemble novel movements from previously learned primitives, and one can separately change movement speed while preserving movement shape. Our results provide a new perspective on the role of modulatory systems in controlling recurrent cortical activity.
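
The following sketch (illustrative assumptions throughout, not the paper's network) shows the core mechanism in a small rate network: with the connectivity W and the readout fixed, changing only the per-neuron gain vector g reshapes the trajectory of the readout.

```python
import numpy as np

rng = np.random.default_rng(2)
n, tau, dt, steps = 100, 20.0, 1.0, 300

W = rng.normal(0.0, 1.0 / np.sqrt(n), (n, n))   # fixed recurrent connectivity
w_out = rng.normal(0.0, 1.0, n) / n             # fixed readout ("muscle") weights

def run(gains, x0):
    """Simulate tau dx/dt = -x + W (g * tanh(x)) and return the readout trace."""
    x, out = x0.copy(), []
    for _ in range(steps):
        r = gains * np.tanh(x)                  # gain-modulated firing rates
        x += dt / tau * (-x + W @ r)
        out.append(w_out @ r)
    return np.array(out)

x0 = rng.normal(0.0, 1.0, n)                    # same initial condition both times
baseline  = run(np.ones(n), x0)                 # uniform gains
modulated = run(rng.uniform(0.5, 1.5, n), x0)   # per-neuron gain modulation

# Identical connectivity and initial state, different gain vectors:
# the readout (a stand-in for muscle output) follows a different trajectory.
print("readout difference (RMS):",
      round(float(np.sqrt(np.mean((baseline - modulated) ** 2))), 4))
```
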
http://dx.doi.org/10.1038/s41593-018-0276-0
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6276991
December 2018

Cortical Signal Propagation: Balance, Amplify, Transmit.

Neuron 2018 Apr;98(1):8-9

Centre for Neural Circuits and Behaviour, Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, OX1 3SR, UK.

The neural code of cortical processing remains uncracked; however, it must necessarily rely on faithful signal propagation between cortical areas. In this issue of Neuron, Joglekar et al. (2018) show that strong inter-areal excitation balanced by local inhibition can enable reliable signal propagation in data-constrained network models of macaque cortex.
http://dx.doi.org/10.1016/j.neuron.2018.03.028
April 2018

Synaptic Transmission Optimization Predicts Expression Loci of Long-Term Plasticity.

Neuron 2017 Sep;96(1):177-189.e7

Centre for Neural Circuits and Behaviour, Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, UK.

Long-term modifications of neuronal connections are critical for reliable memory storage in the brain. However, their locus of expression, pre- or postsynaptic, is highly variable. Here we introduce a theoretical framework in which long-term plasticity performs an optimization of the postsynaptic response statistics toward a given mean with minimal variance. Consequently, the state of the synapse at the time of plasticity induction determines the ratio of pre- and postsynaptic modifications. Our theory explains the experimentally observed expression loci of the hippocampal and neocortical synaptic potentiation studies we examined. Moreover, the theory predicts presynaptic expression of long-term depression, consistent with experimental observations. At inhibitory synapses, the theory suggests a statistically efficient excitatory-inhibitory balance in which changes in inhibitory postsynaptic response statistics specifically target the mean excitation. Our results provide a unifying theory for understanding the expression mechanisms and functions of long-term synaptic transmission plasticity.
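
To make the optimization target concrete, here is a worked example using the standard quantal (binomial) description of transmission, where the mean response is N·p·q and its variance is N·p·(1−p)·q²; the numbers are illustrative, with p standing in for a presynaptic change and q for a postsynaptic one.

```python
# Standard quantal model of synaptic transmission (binomial release):
#   mean response     = N * p * q
#   response variance = N * p * (1 - p) * q**2
# N: release sites, p: release probability (presynaptic), q: quantal size (postsynaptic).

def stats(N, p, q):
    mean = N * p * q
    var = N * p * (1 - p) * q ** 2
    return mean, var

N = 10
# Two ways to reach the same potentiated mean of 4.0 from a starting synapse
# with N = 10, p = 0.2, q = 1.0:
for label, p, q in [("presynaptic  (raise p)", 0.4, 1.0),
                    ("postsynaptic (raise q)", 0.2, 2.0)]:
    mean, var = stats(N, p, q)
    print(f"{label}: mean = {mean:.1f}, variance = {var:.2f}")
# Both routes hit the same mean, but with different variance; in the spirit of
# the theory above, the state of the synapse before induction (here, the low
# starting p) determines which locus gives the lower-variance outcome.
```
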
http://dx.doi.org/10.1016/j.neuron.2017.09.021
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC5626823
September 2017

Inhibitory engrams in perception and memory.

Proc Natl Acad Sci U S A 2017 Jun 13;114(26):6666-6674. Epub 2017 Jun 13.

Trinity College Institute of Neuroscience, School of Genetics and Microbiology and School of Natural Sciences, Trinity College Dublin, Dublin, Ireland.

Nervous systems use excitatory cell assemblies to encode and represent sensory percepts. Similarly, synaptically connected cell assemblies or "engrams" are thought to represent memories of past experience. Multiple lines of recent evidence indicate that brain systems create and use inhibitory replicas of excitatory representations for important cognitive functions. Such matched "inhibitory engrams" can form through homeostatic potentiation of inhibition onto postsynaptic cells that show increased levels of excitation. Inhibitory engrams can reduce behavioral responses to familiar stimuli, thereby resulting in behavioral habituation. In addition, by preventing inappropriate activation of excitatory memory engrams, inhibitory engrams can make memories quiescent, stored in a latent form that is available for context-relevant activation. In neural networks with balanced excitatory and inhibitory engrams, the release of innate responses and recall of associative memories can occur through focused disinhibition. Understanding mechanisms that regulate the formation and expression of inhibitory engrams in vivo may help not only to explain key features of cognition but also to provide insight into transdiagnostic traits associated with psychiatric conditions such as autism, schizophrenia, and posttraumatic stress disorder.
http://dx.doi.org/10.1073/pnas.1701812114
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC5495250
June 2017

Inhibitory Plasticity: Balance, Control, and Codependence.

Annu Rev Neurosci 2017 Jul 9;40:557-579. Epub 2017 Jun 9.

Centre for Neural Circuits and Behaviour, Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford OX1 3SR, United Kingdom.

Inhibitory neurons, although relatively few in number, exert powerful control over brain circuits. They stabilize network activity in the face of strong feedback excitation and actively engage in computations. Recent studies reveal the importance of a precise balance of excitation and inhibition in neural circuits, which often requires exquisite fine-tuning of inhibitory connections. We review inhibitory synaptic plasticity and its roles in shaping both feedforward and feedback control. We discuss the necessity of complex, codependent plasticity mechanisms to build nontrivial, functioning networks, and we end by summarizing experimental evidence of such interactions.
http://dx.doi.org/10.1146/annurev-neuro-072116-031005
July 2017

Editorial overview: Neurobiology of learning and plasticity 2017.

Curr Opin Neurobiol 2017 Apr 17;43:A1-A5. Epub 2017 Apr 17.

Brandeis University, Department of Biology, MS008, 415 South St., Waltham, MA 02454-9110, United States.

http://dx.doi.org/10.1016/j.conb.2017.04.002
April 2017

Mapping the function of neuronal ion channels in model and experiment.

Elife 2017 Mar 6;6. Epub 2017 Mar 6.

Centre for Neural Circuits and Behaviour, University of Oxford, Oxford, United Kingdom.

Ion channel models are the building blocks of computational neuron models. Their biological fidelity is therefore crucial for the interpretation of simulations. However, the number of published models, and the lack of standardization, make the comparison of ion channel models with one another and with experimental data difficult. Here, we present a framework for the automated large-scale classification of ion channel models. Using annotated metadata and responses to a set of voltage-clamp protocols, we assigned 2378 models of voltage- and calcium-gated ion channels coded in NEURON to 211 clusters. The Ion Channel Genealogy (ICGenealogy) web interface provides an interactive resource for the categorization of new and existing models and experimental recordings. It enables quantitative comparisons of simulated and/or measured ion channel kinetics, and facilitates field-wide standardization of experimentally constrained modeling.
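
A toy sketch of the classification logic (voltage-clamp protocol → response features → clusters); the Boltzmann channel model, the protocol, and the use of k-means are illustrative assumptions, not the ICGenealogy pipeline itself.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)

def channel_response(v_half, k, protocol):
    """Steady-state open probability of a toy voltage-gated channel
    (Boltzmann activation) evaluated at each step of a voltage protocol."""
    return 1.0 / (1.0 + np.exp(-(protocol - v_half) / k))

protocol = np.arange(-80.0, 41.0, 10.0)   # shared voltage-clamp steps (mV)

# Fake "published models": three families with different activation parameters.
params = ([(-30 + rng.normal(0, 2), 8.0) for _ in range(20)] +
          [(-10 + rng.normal(0, 2), 8.0) for _ in range(20)] +
          [(-30 + rng.normal(0, 2), 3.0) for _ in range(20)])
features = np.array([channel_response(vh, k, protocol) for vh, k in params])

# Cluster the models by the similarity of their protocol responses.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(features)
print("cluster sizes:", np.bincount(labels))
```
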
http://dx.doi.org/10.7554/eLife.22152
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC5340531
March 2017

Activity-dependent dendritic spine neck changes are correlated with synaptic strength.

Proc Natl Acad Sci U S A 2014 Jul 30;111(28):E2895-904. Epub 2014 Jun 30.

Department of Biological Sciences, Columbia University, New York, NY 10027.

Most excitatory inputs in the mammalian brain are made on dendritic spines, rather than on dendritic shafts. Spines compartmentalize calcium, and this biochemical isolation can underlie input-specific synaptic plasticity, providing a raison d'etre for spines. However, recent results indicate that the spine can experience a membrane potential different from that in the parent dendrite, as though the spine neck electrically isolated the spine. Here we use two-photon calcium imaging of mouse neocortical pyramidal neurons to analyze the correlation between the morphologies of spines activated under minimal synaptic stimulation and the excitatory postsynaptic potentials they generate. We find that excitatory postsynaptic potential amplitudes are inversely correlated with spine neck lengths. Furthermore, a spike timing-dependent plasticity protocol, in which two-photon glutamate uncaging over a spine is paired with postsynaptic spikes, produces rapid shrinkage of the spine neck and concomitant increases in the amplitude of the evoked spine potentials. Using numerical simulations, we explore the parameter regimes for the spine neck resistance and synaptic conductance changes necessary to explain our observations. Our data, directly correlating synaptic and morphological plasticity, imply that long-necked spines have small or negligible somatic voltage contributions, but that, upon synaptic stimulation paired with postsynaptic activity, they can shorten their necks and increase synaptic efficacy, thus changing the input/output gain of pyramidal neurons.
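
A back-of-the-envelope passive-circuit sketch of the neck-resistance argument: a conductance synapse on the spine head depolarizes the head strongly when the neck resistance is high, but delivers less current, and hence a smaller potential, to the parent dendrite. The resistance and conductance values are illustrative assumptions, not the paper's simulations.

```python
# Steady-state voltage-divider view of a spine with a conductance synapse:
#   I_syn   = g_syn * (E_syn - V_spine)
#   V_spine = I_syn * (R_neck + R_dend)     -> solve for V_spine
#   V_dend  = I_syn * R_dend                (depolarization at the spine base)
g_syn, E_syn = 1.0, 60.0          # synaptic conductance (nS), driving force at rest (mV)
R_dend = 0.1                      # dendritic input resistance at the spine base (GOhm)

for R_neck in (0.0, 0.1, 0.3, 0.5):           # GOhm; a longer, thinner neck has larger R
    R_total = R_neck + R_dend
    V_spine = g_syn * E_syn * R_total / (1.0 + g_syn * R_total)   # mV (nS * GOhm = 1)
    I_syn = g_syn * (E_syn - V_spine)                             # pA
    V_dend = I_syn * R_dend                                       # mV
    print(f"R_neck = {R_neck:.1f} GOhm: V_spine = {V_spine:5.1f} mV, "
          f"V_dend = {V_dend:4.2f} mV")
# Larger neck resistance boosts the local spine-head depolarization but, by
# reducing the driving force on the synaptic conductance, shrinks the current
# (and hence the depolarization) delivered to the dendrite; shortening the
# neck, as observed after pairing, does the opposite.
```
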
http://dx.doi.org/10.1073/pnas.1321869111
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4104910
July 2014

Optimal control of transient dynamics in balanced networks supports generation of complex movements.

Neuron 2014 Jun;82(6):1394-406

School of Computer and Communication Sciences and Brain Mind Institute, School of Life Sciences, Ecole Polytechnique Fédérale de Lausanne (EPFL), 1015 Lausanne, Switzerland.

Populations of neurons in motor cortex engage in complex transient dynamics of large amplitude during the execution of limb movements. Traditional network models with stochastically assigned synapses cannot reproduce this behavior. Here we introduce a class of cortical architectures with strong and random excitatory recurrence that is stabilized by intricate, fine-tuned inhibition, optimized from a control theory perspective. Such networks transiently amplify specific activity states and can be used to reliably execute multidimensional movement patterns. Similar to the experimental observations, these transients must be preceded by a steady-state initialization phase from which the network relaxes back into the background state by way of complex internal dynamics. In our networks, excitation and inhibition are as tightly balanced as recently reported in experiments across several brain areas, suggesting inhibitory control of complex excitatory recurrence as a generic organizational principle in cortex.
http://dx.doi.org/10.1016/j.neuron.2014.04.045
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6364799
June 2014

Connection-type-specific biases make uniform random network models consistent with cortical recordings.

J Neurophysiol 2014 Oct 18;112(8):1801-14. Epub 2014 Jun 18.

School of Computer and Communication Sciences and School of Life Sciences, Brain Mind Institute, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland; Centre for Neural Circuits and Behaviour, Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom.

Uniform random sparse network architectures are ubiquitous in computational neuroscience, but the implicit hypothesis that they are a good representation of real neuronal networks has been met with skepticism. Here we used two experimental data sets, a study of triplet connectivity statistics and a data set measuring neuronal responses to channelrhodopsin stimuli, to evaluate the fidelity of thousands of model networks. Network architectures comprised three neuron types (excitatory, fast spiking, and nonfast spiking inhibitory) and were created from a set of rules that govern the statistics of the resulting connection types. In a high-dimensional parameter scan, we varied the degree distributions (i.e., how many cells each neuron connects with) and the synaptic weight correlations of synapses from or onto the same neuron. These variations converted initially uniform random and homogeneously connected networks, in which every neuron sent and received equal numbers of synapses with equal synaptic strength distributions, to highly heterogeneous networks in which the number of synapses per neuron, as well as average synaptic strength of synapses from or to a neuron were variable. By evaluating the impact of each variable on the network structure and dynamics, and their similarity to the experimental data, we could falsify the uniform random sparse connectivity hypothesis for 7 of 36 connectivity parameters, but we also confirmed the hypothesis in 8 cases. Twenty-one parameters had no substantial impact on the results of the test protocols we used.
http://dx.doi.org/10.1152/jn.00629.2013
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4200009
October 2014

Non-normal amplification in random balanced neuronal networks.

Phys Rev E Stat Nonlin Soft Matter Phys 2012 Jul 11;86(1 Pt 1):011909. Epub 2012 Jul 11.

School of Computer and Communication Sciences and Brain Mind Institute, École Polytechnique Fédérale de Lausanne (EPFL), 1015 Lausanne, Switzerland.

In dynamical models of cortical networks, the recurrent connectivity can amplify the input given to the network in two distinct ways. One is induced by the presence of near-critical eigenvalues in the connectivity matrix W, producing large but slow activity fluctuations along the corresponding eigenvectors (dynamical slowing). The other relies on W not being normal, which allows the network activity to make large but fast excursions along specific directions. Here we investigate the trade-off between non-normal amplification and dynamical slowing in the spontaneous activity of large random neuronal networks composed of excitatory and inhibitory neurons. We use a Schur decomposition of W to separate the two amplification mechanisms. Assuming linear stochastic dynamics, we derive an exact expression for the expected amount of purely non-normal amplification. We find that amplification is very limited if dynamical slowing must be kept weak. We conclude that, to achieve strong transient amplification with little slowing, the connectivity must be structured. We show that unidirectional connections between neurons of the same type, together with reciprocal connections between neurons of different types, allow for amplification already in the fast dynamical regime. Finally, our results also shed light on the differences between balanced networks in which inhibition exactly cancels excitation and those where inhibition dominates.
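
A minimal numerical illustration of the distinction drawn above, using an assumed 2×2 connectivity rather than the paper's large random networks: a stable but non-normal matrix produces transient growth of the activity norm without any near-critical eigenvalue.

```python
import numpy as np
from scipy.linalg import expm

# A stable (eigenvalues -1, -1) but non-normal connectivity matrix:
# a feedforward ("Schur") link of strength w between two activity patterns.
w = 8.0
W = np.array([[-1.0,   w],
              [ 0.0, -1.0]])
print("eigenvalues:", np.linalg.eigvals(W))                     # both -1: no slowing
print("normal (W W^T == W^T W)?", np.allclose(W @ W.T, W.T @ W))  # False: non-normal

x0 = np.array([0.0, 1.0])                   # start along the "source" pattern
for t in (0.0, 0.5, 1.0, 2.0, 4.0):
    x = expm(W * t) @ x0                    # linear dynamics dx/dt = W x
    print(f"t = {t:3.1f}: |x| = {np.linalg.norm(x):5.2f}")
# The norm first grows well above its initial value before decaying: transient,
# purely non-normal amplification without near-critical eigenvalues.
```
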
http://dx.doi.org/10.1103/PhysRevE.86.011909
July 2012

State-dependent function of neocortical chandelier cells.

J Neurosci 2011 Dec;31(49):17872-86

Department of Biological Sciences, Howard Hughes Medical Institute, Columbia University, New York, New York 10027, USA.

Chandelier (axoaxonic) cells (ChCs) are a distinct group of GABAergic interneurons that innervate the axon initial segments of pyramidal cells. However, their circuit role and the function of their clearly defined anatomical specificity remain unclear. Recent work has demonstrated that chandelier cells can produce depolarizing GABAergic PSPs, occasionally driving postsynaptic targets to spike. On the other hand, other work suggests that ChCs are hyperpolarizing and may have an inhibitory role. These disparate functional effects may reflect heterogeneity among ChCs. Here, using brain slices from transgenic mouse strains, we first demonstrate that, across different neocortical areas and genetic backgrounds, upper Layer 2/3 ChCs belong to a single electrophysiologically and morphologically defined population, extensively sampling Layer 1 inputs with asymmetric dendrites. Consistent with being a single cell type, we find electrical coupling between ChCs. We then investigate the effect of chandelier cell activation on pyramidal neuron spiking in several conditions, ranging from the resting membrane potential to stimuli designed to approximate in vivo membrane potential dynamics. We find that under quiescent conditions, chandelier cells are capable of both promoting and inhibiting spike generation, depending on the postsynaptic membrane potential. However, during in vivo-like membrane potential fluctuations, the dominant postsynaptic effect was a strong inhibition. Thus, neocortical chandelier cells, even from within a homogeneous population, appear to play a dual role in the circuit, helping to activate quiescent pyramidal neurons, while at the same time inhibiting active ones.
http://dx.doi.org/10.1523/JNEUROSCI.3894-11.2011
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4071969
December 2011

Gating multiple signals through detailed balance of excitation and inhibition in spiking networks.

Nat Neurosci 2009 Apr 22;12(4):483-91. Epub 2009 Mar 22.

Center for Neurobiology and Behavior, Department of Physiology and Cellular Biophysics, Columbia University College of Physicians and Surgeons, New York, New York, USA.

Recent theoretical work has provided a basic understanding of signal propagation in networks of spiking neurons, but mechanisms for gating and controlling these signals have not been investigated previously. Here we introduce an idea for the gating of multiple signals in cortical networks that combines principles of signal propagation with aspects of balanced networks. Specifically, we studied networks in which incoming excitatory signals are normally cancelled by locally evoked inhibition, leaving the targeted layer unresponsive. Transmission can be gated 'on' by modulating excitatory and inhibitory gains to upset this detailed balance. We illustrate gating through detailed balance in large networks of integrate-and-fire neurons. We show successful gating of multiple signals and study failure modes that produce effects reminiscent of clinically observed pathologies. Provided that the individual signals are detectable, detailed balance has a large capacity for gating multiple signals.
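
A rate-based caricature of gating through detailed balance (illustrative assumptions throughout, not the paper's spiking network): the target population receives an excitatory copy of the signal and a locally evoked inhibitory copy; with matched gains the two cancel, and reducing the inhibitory gain opens the gate.

```python
import numpy as np

rng = np.random.default_rng(4)
t = np.arange(0, 1000)                         # ms
signal = np.sin(2 * np.pi * t / 200.0) + 1.0   # slow, positive input signal

def target_response(g_exc, g_inh, noise_sd=0.05):
    """Net drive to the target population: the excitatory copy minus the locally
    evoked inhibitory copy of the same signal, plus background noise."""
    drive = g_exc * signal - g_inh * signal + noise_sd * rng.normal(size=t.size)
    return np.maximum(drive, 0.0)              # rectified population rate (a.u.)

gated_off = target_response(g_exc=1.0, g_inh=1.0)   # detailed balance: cancellation
gated_on  = target_response(g_exc=1.0, g_inh=0.6)   # inhibitory gain reduced: gate opens

corr_off = np.corrcoef(signal, gated_off)[0, 1]
corr_on  = np.corrcoef(signal, gated_on)[0, 1]
print(f"signal correlation, gate off: {corr_off:.2f}   gate on: {corr_on:.2f}")
```
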
http://dx.doi.org/10.1038/nn.2276
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2693069
April 2009

Signal propagation and logic gating in networks of integrate-and-fire neurons.

J Neurosci 2005 Nov;25(46):10786-95

Volen Center for Complex Systems and Department of Biology, Brandeis University, Waltham, Massachusetts 02454-9110, USA.

Transmission of signals within the brain is essential for cognitive function, but it is not clear how neural circuits support reliable and accurate signal propagation over a sufficiently large dynamic range. Two modes of propagation have been studied: synfire chains, in which synchronous activity travels through feedforward layers of a neuronal network, and the propagation of fluctuations in firing rate across these layers. In both cases, a sufficient amount of noise, which was added to previous models from an external source, had to be included to support stable propagation. Sparse, randomly connected networks of spiking model neurons can generate chaotic patterns of activity. We investigate whether this activity, which is a more realistic noise source, is sufficient to allow for signal transmission. We find that, for rate-coded signals but not for synfire chains, such networks support robust and accurate signal reproduction through up to six layers if appropriate adjustments are made in synaptic strengths. We investigate the factors affecting transmission and show that multiple signals can propagate simultaneously along different pathways. Using this feature, we show how different types of logic gates can arise within the architecture of the random network through the strengthening of specific synapses.
http://dx.doi.org/10.1523/JNEUROSCI.3508-05.2005
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6725859
November 2005

Neural network dynamics.

Annu Rev Neurosci 2005;28:357-76

Volen Center for Complex Systems and Department of Biology, Brandeis University, Waltham, MA 02454-9110, USA.

Neural network modeling is often concerned with stimulus-driven responses, but most of the activity in the brain is internally generated. Here, we review network models of internally generated activity, focusing on three types of network dynamics: (a) sustained responses to transient stimuli, which provide a model of working memory; (b) oscillatory network activity; and (c) chaotic activity, which models complex patterns of background spiking in cortical and other circuits. We also review propagation of stimulus-driven activity through spontaneously active networks. Exploring these aspects of neural network dynamics is critical for understanding how neural circuits produce cognitive function.
http://dx.doi.org/10.1146/annurev.neuro.28.061604.135637
September 2005