Publications by authors named "Cengiz Pehlevan"

14 Publications

Activation function dependence of the storage capacity of treelike neural networks.

Phys Rev E 2021 Feb;103(2):L020301

John A. Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge, Massachusetts 02138, USA.

The expressive power of artificial neural networks crucially depends on the nonlinearity of their activation functions. Though a wide variety of nonlinear activation functions have been proposed for use in artificial neural networks, a detailed understanding of their role in determining the expressive power of a network has not emerged. Here, we study how activation functions affect the storage capacity of treelike two-layer networks. We relate the boundedness or divergence of the capacity in the infinite-width limit to the smoothness of the activation function, elucidating the relationship between previously studied special cases. Our results show that nonlinearity can both increase capacity and decrease the robustness of classification, and provide simple estimates for the capacity of networks with several commonly used activation functions. Furthermore, they generate a hypothesis for the functional benefit of dendritic spikes in branched neurons.
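
The storage capacity in question is the largest number of random input-label pairs per synapse that a network can realize. As a point of reference only, the sketch below (illustrative function names and parameter values, and a single perceptron rather than the treelike two-layer networks studied in the paper) estimates capacity numerically by checking when random dichotomies of random patterns stop being linearly separable; for a perceptron the transition sits near two patterns per synapse.

```python
# Hypothetical illustration: estimate the storage capacity of a single perceptron
# (not the treelike two-layer networks analyzed in the paper) by testing, via a
# feasibility linear program, whether random +/-1 labelings of random Gaussian
# patterns are linearly separable.
import numpy as np
from scipy.optimize import linprog

def separable(X, y):
    """Is there a weight vector w with y_i * (w . x_i) >= 1 for every pattern i?"""
    P, N = X.shape
    A_ub = -(y[:, None] * X)                    # encodes y_i * (w . x_i) >= 1
    res = linprog(c=np.zeros(N), A_ub=A_ub, b_ub=-np.ones(P),
                  bounds=[(None, None)] * N, method="highs")
    return res.status == 0                      # status 0 means a feasible w was found

rng = np.random.default_rng(0)
N = 50                                          # number of inputs (synapses)
for alpha in (1.0, 1.5, 2.0, 2.5, 3.0):         # load P/N; perceptron capacity is near 2
    P = int(alpha * N)
    frac = np.mean([separable(rng.standard_normal((P, N)), rng.choice([-1.0, 1.0], P))
                    for _ in range(20)])
    print(f"alpha = {alpha:.1f}: fraction of separable trials = {frac:.2f}")
```

The fraction of separable trials drops from one to zero around alpha of about 2, sharpening as N grows; the paper asks how this kind of capacity behaves when the linear readout is replaced by a tree of nonlinear dendritic branches.
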
Source
http://dx.doi.org/10.1103/PhysRevE.103.L020301

Contrastive Similarity Matching for Supervised Learning.

Neural Comput 2021 Feb 22:1-29. Epub 2021 Feb 22.

John A. Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge, MA 02138, U.S.A.

We propose a novel biologically plausible solution to the credit assignment problem motivated by observations in the ventral visual pathway and trained deep neural networks. In both, representations of objects in the same category become progressively more similar, while objects belonging to different categories become less similar. We use this observation to motivate a layer-specific learning goal in a deep network: each layer aims to learn a representational similarity matrix that interpolates between previous and later layers. We formulate this idea using a contrastive similarity matching objective function and derive from it deep neural networks with feedforward, lateral, and feedback connections and neurons that exhibit biologically plausible Hebbian and anti-Hebbian plasticity. Contrastive similarity matching can be interpreted as an energy-based learning algorithm, but with significant differences from others in how a contrastive function is constructed.
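
The layer-local goal can be made concrete with a small sketch (illustrative names and shapes; this is not the paper's full contrastive objective, which also involves supervision and network dynamics): each hidden layer's representational similarity, its Gram matrix over a batch, is compared with an interpolation between the similarity matrices of the neighboring layers.

```python
# Illustrative sketch of a layer-wise similarity-interpolation loss. Names (gram,
# interpolation_loss, gamma) are assumptions for this example, not the paper's API.
import numpy as np

def gram(Z):
    """Representational similarity (Gram) matrix over a batch; Z is (features, samples)."""
    return Z.T @ Z

def interpolation_loss(Y_prev, Y_l, Y_next, gamma=0.5):
    """Squared Frobenius distance between layer l's similarity matrix and a convex
    combination of the previous and next layers' similarity matrices."""
    target = (1.0 - gamma) * gram(Y_prev) + gamma * gram(Y_next)
    diff = gram(Y_l) - target
    return np.sum(diff ** 2) / Y_l.shape[1] ** 2

rng = np.random.default_rng(1)
T = 64                                          # batch size
Y0, Y1, Y2 = (rng.standard_normal((d, T)) for d in (100, 50, 10))
print(interpolation_loss(Y0, Y1, Y2))
```

Minimizing similarity-matching costs of this general kind is what yields Hebbian and anti-Hebbian plasticity in the related papers listed below.
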
Source
http://dx.doi.org/10.1162/neco_a_01374

Internal state configures olfactory behavior and early sensory processing in Drosophila larvae.

Sci Adv 2021 Jan 1;7(1). Epub 2021 Jan 1.

Department of Physics, Harvard University, Cambridge, MA 02138, USA.

Animals exhibit different behavioral responses to the same sensory cue depending on their internal state at a given moment. How and where in the brain are sensory inputs combined with state information to select an appropriate behavior? Here, we investigate how food deprivation affects olfactory behavior in Drosophila larvae. We find that certain odors repel well-fed animals but attract food-deprived animals and that feeding state flexibly alters neural processing in the first olfactory center, the antennal lobe. Hunger differentially modulates two output pathways required for opposing behavioral responses. Upon food deprivation, attraction-mediating uniglomerular projection neurons show elevated odor-evoked activity, whereas an aversion-mediating multiglomerular projection neuron receives odor-evoked inhibition. The switch between these two pathways is regulated by the lone serotonergic neuron in the antennal lobe, CSD. Our findings demonstrate how flexible behaviors can arise from state-dependent circuit dynamics in an early sensory processing center.
Source
http://dx.doi.org/10.1126/sciadv.abd6900
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7775770

Statistical structure of the trial-to-trial timing variability in synfire chains.

Phys Rev E 2020 Nov;102(5-1):052406

John A. Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge, Massachusetts 02138, USA.

Timing and its variability are crucial for behavior. Consequently, neural circuits that take part in the control of timing and in the measurement of temporal intervals have been the subject of much research. Here we provide an analytical and computational account of the temporal variability in what is perhaps the most basic model of a timing circuit: the synfire chain. First, we study the statistical structure of trial-to-trial timing variability in a reduced but analytically tractable model, a chain of single integrate-and-fire neurons. We show that this circuit's variability is well described by a generative model consisting of local, global, and jitter components. We relate each of these components to distinct neural mechanisms in the model. Next, we establish in simulations that these results carry over to a noisy homogeneous synfire chain. Finally, motivated by the fact that a synfire chain is thought to underlie the circuit that takes part in the control and timing of the zebra finch song, we present simulations of a biologically realistic synfire chain model of the zebra finch timekeeping circuit. We find the structure of trial-to-trial timing variability to be consistent with our previous findings and to agree with experimental observations of the song's temporal variability. Our study therefore provides a possible neuronal account of behavioral variability in zebra finches.
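
The reduced model can be sketched in a few lines (parameters, the noise model, and the constant suprathreshold drive are illustrative choices, not the paper's): each link of the chain is a single noisy leaky integrate-and-fire neuron that begins charging when its predecessor fires, and running many trials shows how spike-time variability accumulates along the chain.

```python
# Illustrative chain of single leaky integrate-and-fire neurons. Each link starts
# integrating a constant suprathreshold drive (a stand-in for the synaptic pulse from
# its predecessor) plus Gaussian noise, and fires when it reaches threshold.
import numpy as np

def run_chain(n_links=50, tau=10.0, v_th=1.0, w=1.5, sigma=0.1, dt=0.1, rng=None):
    """Return the spike time (ms) of each link on one trial."""
    rng = rng if rng is not None else np.random.default_rng()
    spike_times = np.zeros(n_links)
    t_last = 0.0                                 # spike time of the upstream link
    for i in range(n_links):
        v, t = 0.0, t_last
        while v < v_th:
            v += dt * (w - v) / tau + sigma * np.sqrt(dt) * rng.standard_normal()
            t += dt
        spike_times[i] = t
        t_last = t
    return spike_times

rng = np.random.default_rng(2)
trials = np.array([run_chain(rng=rng) for _ in range(200)])
print("spike-time std vs. position in chain:", np.round(trials.std(axis=0)[::10], 2))
```

In this toy version the spike-time standard deviation simply grows along the chain; the paper's generative model goes further and separates local, global, and jitter contributions to such variability.
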
Source
http://dx.doi.org/10.1103/PhysRevE.102.052406

Neurons as Canonical Correlation Analyzers.

Front Comput Neurosci 2020 Jun 30;14:55. Epub 2020 Jun 30.

Center for Computational Biology, Flatiron Institute, New York, NY, United States.

Normative models of neural computation offer simplified yet lucid mathematical descriptions of murky biological phenomena. Previously, online Principal Component Analysis (PCA) was used to model a network of single-compartment neurons accounting for weighted summation of upstream neural activity in the soma and Hebbian/anti-Hebbian synaptic learning rules. However, synaptic plasticity in biological neurons often depends on the integration of synaptic currents over a dendritic compartment rather than total current in the soma. Motivated by this observation, we model a pyramidal neuronal network using online Canonical Correlation Analysis (CCA). Given two related datasets represented by distal and proximal dendritic inputs, CCA projects them onto the subspace which maximizes the correlation between their projections. First, adopting a normative approach and starting from a single-channel CCA objective function, we derive an online gradient-based optimization algorithm whose steps can be interpreted as the operation of a pyramidal neuron. To model networks of pyramidal neurons, we introduce a novel multi-channel CCA objective function, and derive from it an online gradient-based optimization algorithm whose steps can be interpreted as the operation of a pyramidal neuron network including its architecture, dynamics, and synaptic learning rules. Next, we model a neuron with more than two dendritic compartments by deriving its operation from a known objective function for multi-view CCA. Finally, we confirm the functionality of our networks via numerical simulations. Overall, our work presents a simplified but informative abstraction of learning in a pyramidal neuron network, and demonstrates how such networks can integrate multiple sources of inputs.
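
For reference, the offline computation that an online CCA circuit can be checked against is compact (a standard whitening-plus-SVD recipe with illustrative variable names, not the paper's network):

```python
# Offline CCA reference: whiten each view and take the SVD of the cross-covariance.
import numpy as np

def cca(X, Y, eps=1e-10):
    """X: (dim_x, T) one input stream, Y: (dim_y, T) the other.
    Returns canonical correlations in decreasing order."""
    X = X - X.mean(axis=1, keepdims=True)
    Y = Y - Y.mean(axis=1, keepdims=True)
    T = X.shape[1]
    Cxx, Cyy, Cxy = X @ X.T / T, Y @ Y.T / T, X @ Y.T / T

    def inv_sqrt(C):
        w, V = np.linalg.eigh(C)
        return V @ np.diag(1.0 / np.sqrt(np.maximum(w, eps))) @ V.T

    return np.linalg.svd(inv_sqrt(Cxx) @ Cxy @ inv_sqrt(Cyy), compute_uv=False)

rng = np.random.default_rng(3)
z = rng.standard_normal((2, 5000))                       # shared latent signals
X = rng.standard_normal((8, 2)) @ z + 0.5 * rng.standard_normal((8, 5000))
Y = rng.standard_normal((6, 2)) @ z + 0.5 * rng.standard_normal((6, 5000))
print(np.round(cca(X, Y)[:3], 2))                        # two large correlations, rest small
```

In the paper's setting the two views play the role of distal and proximal dendritic input streams, and the projections are learned online by the pyramidal-neuron network rather than computed in a batch.
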
Source
http://dx.doi.org/10.3389/fncom.2020.00055
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7338892

Flexibility in motor timing constrains the topology and dynamics of pattern generator circuits.

Nat Commun 2018 Mar 6;9(1):977. Epub 2018 Mar 6.

Department of Organismic and Evolutionary Biology, Harvard University, Cambridge, MA, 02138, USA.

Temporally precise movement patterns underlie many motor skills and innate actions, yet the flexibility with which the timing of such stereotyped behaviors can be modified is poorly understood. To probe this, we induce adaptive changes to the temporal structure of birdsong. We find that the duration of specific song segments can be modified without affecting the timing in other parts of the song. We derive formal prescriptions for how neural networks can implement such flexible motor timing. We find that randomly connected recurrent networks, a common approximation for how neocortex is wired, do not generally conform to these prescriptions, though certain implementations can approximate them. We show that feedforward networks, by virtue of their one-to-one mapping between network activity and time, are better suited. Our study provides general prescriptions for pattern generator networks that implement flexible motor timing, an important aspect of many motor skills, including birdsong and human speech.
Source
http://dx.doi.org/10.1038/s41467-018-03261-5
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC5840308

Why Do Similarity Matching Objectives Lead to Hebbian/Anti-Hebbian Networks?

Neural Comput 2018 Jan;30(1):84-124. Epub 2017 Sep 28.

Center for Computational Biology, Flatiron Institute, New York, NY 10010, U.S.A., and NYU Langone Medical Center, New York 10016, U.S.A.

Modeling self-organization of neural networks for unsupervised learning using Hebbian and anti-Hebbian plasticity has a long history in neuroscience. Yet derivations of single-layer networks with such local learning rules from principled optimization objectives became possible only recently, with the introduction of similarity matching objectives. What explains the success of similarity matching objectives in deriving neural networks with local learning rules? Here, using dimensionality reduction as an example, we introduce several variable substitutions that illuminate the success of similarity matching. We show that the full network objective may be optimized separately for each synapse using local learning rules in both the offline and online settings. We formalize the long-standing intuition of the rivalry between Hebbian and anti-Hebbian rules by formulating a min-max optimization problem. We introduce a novel dimensionality reduction objective using fractional matrix exponents. To illustrate the generality of our approach, we apply it to a novel formulation of dimensionality reduction combined with whitening. We confirm numerically that the networks with learning rules derived from principled objectives perform better than those with heuristic learning rules.
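
The flavor of the variable substitutions can be sketched as follows (a reconstruction consistent with the similarity matching literature; scaling and sign conventions here may differ from the paper's). Writing dimensionality reduction as Gram-matrix matching and substituting auxiliary matrices $W$ and $M$ for $YX^{\top}/T$ and $YY^{\top}/T$ gives, after exchanging the order of the optimizations,

$$
\min_{Y}\ \frac{1}{T^{2}}\bigl\lVert X^{\top}X-Y^{\top}Y\bigr\rVert_{F}^{2}
\;\;\longleftrightarrow\;\;
\min_{W}\,\max_{M}\ \Bigl[\,2\,\mathrm{Tr}\!\left(W^{\top}W\right)-\mathrm{Tr}\!\left(M^{\top}M\right)
+\frac{1}{T}\sum_{t=1}^{T}\min_{\mathbf{y}_{t}}\bigl(-4\,\mathbf{x}_{t}^{\top}W^{\top}\mathbf{y}_{t}+2\,\mathbf{y}_{t}^{\top}M\,\mathbf{y}_{t}\bigr)\Bigr],
$$

up to a $Y$-independent constant, with the optima attained at $W=YX^{\top}/T$ and $M=YY^{\top}/T$. The point is that the data-dependent terms are now sums over individual samples: the inner minimization over $\mathbf{y}_{t}$ can be carried out by neural dynamics with fixed point $\mathbf{y}_{t}=M^{-1}W\mathbf{x}_{t}$, stochastic descent in $W$ gives a Hebbian update $\Delta W\propto\mathbf{y}_{t}\mathbf{x}_{t}^{\top}-W$, and stochastic ascent in $M$ gives an anti-Hebbian update $\Delta M\propto\mathbf{y}_{t}\mathbf{y}_{t}^{\top}-M$, the min-max rivalry described in the abstract.
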
Source
http://dx.doi.org/10.1162/neco_a_01018

Blind Nonnegative Source Separation Using Biological Neural Networks.

Neural Comput 2017 Nov;29(11):2925-2954. Epub 2017 Aug 4.

Center for Computational Biology, Flatiron Institute, New York, NY 10010, U.S.A., and NYU Medical School, New York, NY 10016, U.S.A.

Blind source separation, the extraction of independent sources from a mixture, is an important problem for both artificial and natural signal processing. Here, we address a special case of this problem when sources (but not the mixing matrix) are known to be nonnegative, for example, due to the physical nature of the sources. We search for the solution to this problem that can be implemented using biologically plausible neural networks. Specifically, we consider the online setting where the data set is streamed to a neural network. The novelty of our approach is that we formulate blind nonnegative source separation as a similarity matching problem and derive neural networks from the similarity matching objective. Importantly, synaptic weights in our networks are updated according to biologically plausible local learning rules.
Source
http://dx.doi.org/10.1162/neco_a_01007

Acute off-target effects of neural circuit manipulations.

Nature 2015 Dec 9;528(7582):358-63. Epub 2015 Dec 9.

Center for Brain Science, Harvard University, Cambridge, Massachusetts 02138, USA.

Rapid and reversible manipulations of neural activity in behaving animals are transforming our understanding of brain function. An important assumption underlying much of this work is that evoked behavioural changes reflect the function of the manipulated circuits. We show that this assumption is problematic because it disregards indirect effects on the independent functions of downstream circuits. Transient inactivations of motor cortex in rats and nucleus interface (Nif) in songbirds severely degraded task-specific movement patterns and courtship songs, respectively, which are learned skills that recover spontaneously after permanent lesions of the same areas. We resolve this discrepancy in songbirds, showing that Nif silencing acutely affects the function of HVC, a downstream song control nucleus. Paralleling song recovery, the off-target effects resolved within days of Nif lesions, a recovery consistent with homeostatic regulation of neural activity in HVC. These results have implications for interpreting transient circuit manipulations and for understanding recovery after brain lesions.
Source
http://dx.doi.org/10.1038/nature16442

A Hebbian/Anti-Hebbian Neural Network for Linear Subspace Learning: A Derivation from Multidimensional Scaling of Streaming Data.

Neural Comput 2015 Jul 14;27(7):1461-95. Epub 2015 May 14.

Simons Center for Analysis, Simons Foundation, New York, NY 10010, U.S.A.

Neural network models of early sensory processing typically reduce the dimensionality of streaming input data. Such networks learn the principal subspace, in the sense of principal component analysis, by adjusting synaptic weights according to activity-dependent learning rules. When derived from a principled cost function, these rules are nonlocal and hence biologically implausible. At the same time, biologically plausible local rules have been postulated rather than derived from a principled cost function. Here, to bridge this gap, we derive a biologically plausible network for subspace learning on streaming data by minimizing a principled cost function. In a departure from previous work, where cost was quantified by the representation, or reconstruction, error, we adopt a multidimensional scaling cost function for streaming data. The resulting algorithm relies only on biologically plausible Hebbian and anti-Hebbian local learning rules. In a stochastic setting, synaptic weights converge to a stationary state, which projects the input data onto the principal subspace. If the data are generated by a nonstationary distribution, the network can track the principal subspace. Thus, our result makes a step toward an algorithmic theory of neural computation.
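
A minimal runnable sketch of this family of networks (illustrative step sizes and synthetic data; for brevity the recurrent neural dynamics are replaced by their fixed point): feedforward weights W are updated with a Hebbian rule, lateral weights M with an anti-Hebbian one, and the learned input-output map ends up projecting onto the principal subspace.

```python
# Hebbian/anti-Hebbian subspace learner, sketched with a fixed-point shortcut for the
# lateral dynamics. Step sizes, data, and initialization are illustrative.
import numpy as np

rng = np.random.default_rng(4)
n, k, T = 10, 2, 20000
U = np.linalg.qr(rng.standard_normal((n, k)))[0]          # ground-truth principal subspace
X = U @ (np.diag([5.0, 3.0]) @ rng.standard_normal((k, T))) \
    + 0.3 * rng.standard_normal((n, T))                    # inputs with a dominant 2-D subspace

W = 0.1 * rng.standard_normal((k, n))                      # feedforward (Hebbian) weights
M = np.eye(k)                                              # lateral (anti-Hebbian) weights
eta = 0.01
for t in range(T):
    x = X[:, t]
    y = np.linalg.solve(M, W @ x)                          # fixed point of dy/dt = W x - M y
    W += eta * (np.outer(y, x) - W)                        # Hebbian: output-input correlation
    M += 2 * eta * (np.outer(y, y) - M)                    # anti-Hebbian: output-output correlation

F = np.linalg.solve(M, W)                                  # effective input-output map
Q = np.linalg.qr(F.T)[0]                                   # orthonormal basis of learned subspace
print("subspace alignment:", np.linalg.norm(Q.T @ U))      # approaches sqrt(k) when aligned
```

Both updates use only quantities available at the synapse (pre- and postsynaptic activity and the current weight), which is the locality property the paper derives from the multidimensional scaling cost.
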
Source
http://dx.doi.org/10.1162/NECO_a_00745

On exact statistics and classification of ergodic systems of integer dimension.

Chaos 2014 Jun;24(2):023125

Department of Physics, Brown University, Providence, Rhode Island 02912, USA.

We describe classes of ergodic dynamical systems for which some statistical properties are known exactly. These systems have integer dimension, are not globally dissipative, and are defined by a probability density and a two-form. This definition generalizes the construction of Hamiltonian systems by a Hamiltonian and a symplectic form. Some low dimensional examples are given, as well as a discretized field theory with a large number of degrees of freedom and a local nearest neighbor interaction. We also evaluate unequal-time correlations of these systems without direct numerical simulation, by Padé approximants of a short-time expansion. We briefly speculate on the possibility of constructing chaotic dynamical systems with non-integer dimension and exactly known statistics. In this case there is no probability density, suggesting an alternative construction in terms of a Hopf characteristic function and a two-form.
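
The construction can be sketched as follows (a standard way to build a flow with a prescribed invariant density, written in illustrative notation; the paper's precise definition may differ in details). Given a probability density $\rho(\mathbf{x})$ and an antisymmetric tensor field $A_{ij}(\mathbf{x})$, the two-form, define

$$
\dot{x}_{i}=\frac{1}{\rho(\mathbf{x})}\,\partial_{j}A_{ij}(\mathbf{x}),\qquad A_{ij}=-A_{ji}.
$$

Then $\partial_{i}\!\left(\rho\,\dot{x}_{i}\right)=\partial_{i}\partial_{j}A_{ij}=0$ by antisymmetry, so $\rho$ is a stationary density of the flow and equal-time statistics are known exactly as moments of $\rho$. Choosing $\rho=\rho(H)$ and $A_{ij}=J_{ij}\,f(H)$ with $f'=\rho$ and $J$ the symplectic form recovers the Hamiltonian case $\dot{x}_{i}=J_{ij}\,\partial_{j}H$.
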
Source
http://dx.doi.org/10.1063/1.4881890

Selectivity and sparseness in randomly connected balanced networks.

PLoS One 2014 Feb 24;9(2):e89992. Epub 2014 Feb 24.

Swartz Program in Theoretical Neuroscience, Center for Brain Science, Harvard University, Cambridge, Massachusetts, United States of America ; Edmond and Lily Safra Center for Brain Sciences, The Hebrew University, Jerusalem, Israel.

Neurons in sensory cortex show stimulus selectivity and sparse population response, even in cases where no strong functionally specific structure in connectivity can be detected. This raises the question of whether selectivity and sparseness can be generated and maintained in randomly connected networks. We consider a recurrent network of excitatory and inhibitory spiking neurons with random connectivity, driven by random projections from an input layer of stimulus-selective neurons. In this architecture, the stimulus-to-stimulus and neuron-to-neuron modulation of total synaptic input is weak compared to the mean input. Surprisingly, we show that in the balanced state the network can still support high stimulus selectivity and sparse population response. In the balanced state, strong synapses amplify the variation in synaptic input and recurrent inhibition cancels the mean. Functional specificity in connectivity emerges due to the inhomogeneity caused by the generative statistical rule used to build the network. We further elucidate the mechanism behind population sparseness and stimulus selectivity and evaluate how they depend on model parameters. Network response to mixtures of stimuli is investigated. It is shown that a balanced state with unselective inhibition can be achieved with densely connected input to the inhibitory population. Balanced networks exhibit the "paradoxical" effect: an increase in excitatory drive to inhibition leads to a decreased inhibitory population firing rate. We compare and contrast the selectivity and sparseness generated by the balanced network with those of randomly connected unbalanced networks. Finally, we discuss our results in light of experiments.
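
For orientation, the balanced state referred to here is the standard mean-field picture (notation below is illustrative, not the paper's): with $K\gg1$ inputs per neuron and synaptic strengths of order $1/\sqrt{K}$, the population-averaged net inputs are

$$
u_{E}=\sqrt{K}\left(J_{EE}\,r_{E}-J_{EI}\,r_{I}+m_{E}\right),\qquad
u_{I}=\sqrt{K}\left(J_{IE}\,r_{E}-J_{II}\,r_{I}+m_{I}\right),
$$

and keeping them $\mathcal{O}(1)$ as $K\to\infty$ forces the terms in parentheses to vanish, so firing rates are set by these linear balance equations while the $\mathcal{O}(1)$ fluctuations around the cancelled mean drive spiking. Solving the balance equations gives $\partial r_{I}/\partial m_{I}=-J_{EE}/\!\left(J_{EI}J_{IE}-J_{EE}J_{II}\right)<0$ when $J_{EI}J_{IE}>J_{EE}J_{II}$, which is the "paradoxical" suppression of inhibitory firing by extra excitatory drive to inhibition mentioned in the abstract.
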
Source
http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0089992
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3933683

The basal ganglia is necessary for learning spectral, but not temporal, features of birdsong.

Neuron 2013 Oct 26;80(2):494-506. Epub 2013 Sep 26.

Department of Organismic and Evolutionary Biology, Harvard University, Cambridge, MA 02138, USA.

Executing a motor skill requires the brain to control which muscles to activate at what times. How these aspects of control (motor implementation and timing) are acquired, and whether the learning processes underlying them differ, is not well understood. To address this, we used a reinforcement learning paradigm to independently manipulate both spectral and temporal features of birdsong, a complex learned motor sequence, while recording and perturbing activity in underlying circuits. Our results uncovered a striking dissociation in how neural circuits underlie learning in the two domains. The basal ganglia was required for modifying spectral, but not temporal, structure. This functional dissociation extended to the descending motor pathway, where recordings from a premotor cortex analog nucleus reflected changes to temporal, but not spectral, structure. Our results reveal a strategy in which the nervous system employs different and largely independent circuits to learn distinct aspects of a motor skill.
Source
http://dx.doi.org/10.1016/j.neuron.2013.07.049
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3929499

On the asymptotics of the Hopf characteristic function.

Chaos 2012 Sep;22(3):033117

Department of Physics, Brown University, Providence, Rhode Island 02912, USA.

We study the asymptotic behavior of the Hopf characteristic function of fractals and chaotic dynamical systems in the limit of large argument. The small argument behavior is determined by the moments, since the characteristic function is defined as their generating function. Less well known is that the large argument behavior is related to the fractal dimension. While this relation has been discussed in the literature, there has been very little in the way of explicit calculation. We attempt to fill this gap, with explicit calculations for the generalized Cantor set and the Lorenz attractor. In the case of the generalized Cantor set, we define a parameter characterizing the asymptotics which we show corresponds exactly to the known fractal dimension. The Hopf characteristic function of the Lorenz attractor is computed numerically, obtaining results which are consistent with Hausdorff or correlation dimension, albeit too crude to distinguish between them.
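
For the simplest example of this kind, the middle-thirds Cantor measure, the characteristic function has a closed product form, so its large-argument behavior can be probed directly (the relation used below, that the running average of $|\varphi(t)|^{2}$ over $[0,T]$ falls off with an exponent tied to the correlation dimension, is the standard one; cutoffs and step sizes are illustrative, not taken from the paper):

```python
# Characteristic function of the uniform measure on the middle-thirds Cantor set:
# phi(t) = exp(i t / 2) * prod_{n>=1} cos(t / 3^n). The average of |phi|^2 over [0, T]
# decays with an exponent close to the correlation dimension ln 2 / ln 3 ~ 0.63.
import numpy as np

def abs_phi_sq(t, n_terms=40):
    """|phi(t)|^2 for the Cantor measure, truncating the infinite product."""
    out = np.ones_like(t)
    for n in range(1, n_terms + 1):
        out *= np.cos(t / 3.0 ** n) ** 2
    return out

def avg_abs_phi_sq(T, dt=0.05):
    t = np.arange(dt, T, dt)
    return abs_phi_sq(t).mean()

T1, T2 = 3.0 ** 6, 3.0 ** 10      # spaced by a power of 3 to average out log-periodic ripples
C1, C2 = avg_abs_phi_sq(T1), avg_abs_phi_sq(T2)
slope = -np.log(C2 / C1) / np.log(T2 / T1)
print(f"decay exponent ~ {slope:.2f}  (ln 2 / ln 3 ~ {np.log(2) / np.log(3):.2f})")
```
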
Source
http://dx.doi.org/10.1063/1.4734491