2,385 results match your criteria: Neural Computation [Journal]


Decoding Movements from Cortical Ensemble Activity Using a Long Short-Term Memory Recurrent Network.

Neural Comput 2019 Apr 12:1-29. Epub 2019 Apr 12.

Departments of Neurobiology, Biomedical Engineering, Psychology and Neuroscience, Neurology, Neurosurgery, and Duke University Center for Neuroengineering, Duke University, Durham, NC 27710, U.S.A.; and Edmund and Lily Safra International Institute of Neuroscience, Natal, Brazil 59066-060

Although many real-time neural decoding algorithms have been proposed for brain-machine interface (BMI) applications over the years, an optimal, consensual approach remains elusive. Recent advances in deep learning algorithms provide new opportunities for improving the design of BMI decoders, including the use of recurrent artificial neural networks to decode neuronal ensemble activity in real time. Here, we developed a long short-term memory (LSTM) decoder for extracting movement kinematics from the activity of large (N = 134-402) populations of neurons, sampled simultaneously from multiple cortical areas, in rhesus monkeys performing motor tasks.
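
As a concrete illustration of the kind of decoder the abstract describes, here is a minimal PyTorch sketch of an LSTM mapping binned spike counts to movement kinematics. The layer sizes, two-dimensional velocity output, and training details are illustrative assumptions, not the architecture reported in the paper.

```python
# Minimal sketch of an LSTM kinematics decoder (assumed dimensions).
import torch
import torch.nn as nn

class LSTMDecoder(nn.Module):
    def __init__(self, n_neurons=134, hidden=128, n_kin=2):
        super().__init__()
        self.lstm = nn.LSTM(n_neurons, hidden, batch_first=True)
        self.readout = nn.Linear(hidden, n_kin)

    def forward(self, spikes):          # spikes: (batch, time, n_neurons)
        h, _ = self.lstm(spikes)        # h: (batch, time, hidden)
        return self.readout(h)          # kinematics: (batch, time, n_kin)

model = LSTMDecoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
spikes = torch.randn(8, 100, 134)       # surrogate binned spike counts
kin = torch.randn(8, 100, 2)            # surrogate hand velocities
for _ in range(10):                     # toy training loop
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(spikes), kin)
    loss.backward()
    opt.step()
```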

DOI: http://dx.doi.org/10.1162/neco_a_01189

Controlling Complexity of Cerebral Cortex Simulations, II: Streamlined Microcircuits.

Neural Comput 2019 Apr 12:1-19. Epub 2019 Apr 12.

Clinical Neurosciences, Neurology, University of Helsinki and Helsinki University Hospital, Helsinki 00029, Finland

Recently, Markram et al. (2015) presented a model of the rat somatosensory microcircuit (Markram model). Their model is high in anatomical and physiological detail, and its simulation requires supercomputers.

DOI: http://dx.doi.org/10.1162/neco_a_01188

A Geometrical Analysis of Global Stability in Trained Feedback Networks.

Neural Comput 2019 Apr 12:1-43. Epub 2019 Apr 12.

Laboratoire de Physique Statistique, CNRS UMR 8550, Ecole Normale Supérieure-PSL Research University, Paris 75005, France

Recurrent neural networks have been extensively studied in the context of neuroscience and machine learning due to their ability to implement complex computations. While substantial progress in designing effective learning algorithms has been achieved, a full understanding of trained recurrent networks is still lacking. Specifically, the mechanisms that allow computations to emerge from the underlying recurrent dynamics are largely unknown.

DOI: http://dx.doi.org/10.1162/neco_a_01187

Quantifying Information Conveyed by Large Neuronal Populations.

Neural Comput 2019 Apr 12:1-33. Epub 2019 Apr 12.

Computational Neurobiology Laboratory, Salk Institute for Biological Studies, La Jolla, CA 92037, and Department of Physics, University of California San Diego, San Diego, CA 92093, U.S.A.

Quantifying mutual information between inputs and outputs of a large neural circuit is an important open problem in both machine learning and neuroscience. However, evaluation of the mutual information is known to be generally intractable for large systems due to the exponential growth in the number of terms that need to be evaluated. Here we show how information contained in the responses of large neural populations can be effectively computed provided the input-output functions of individual neurons can be measured and approximated by a logistic function applied to a potentially nonlinear function of the stimulus.
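
To make the scaling problem and the logistic model concrete, the sketch below computes I(S;R) exactly for a small population of conditionally independent neurons whose spiking probability is a logistic function of a projection of the stimulus; the brute-force sum over all 2^N response patterns is exactly what becomes intractable for large N. The linear stimulus projection and the conditional-independence assumption are illustrative choices, not the paper's construction.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(0)
n_neurons, n_stim = 8, 16
stim = rng.normal(size=n_stim)
# logistic input-output functions applied to a (here linear) function of s
w = rng.normal(size=n_neurons)
b = rng.normal(size=n_neurons)
p = 1.0 / (1.0 + np.exp(-(np.outer(stim, w) + b)))   # (n_stim, n_neurons)

p_s = np.full(n_stim, 1.0 / n_stim)
I = 0.0
for r in product([0, 1], repeat=n_neurons):          # 2**n_neurons terms
    r = np.array(r)
    p_r_given_s = np.prod(np.where(r, p, 1 - p), axis=1)
    p_r = p_s @ p_r_given_s                          # marginal P(r)
    I += np.sum(p_s * p_r_given_s * np.log2(p_r_given_s / p_r))
print(f"I(S;R) = {I:.3f} bits")
```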

DOI: http://dx.doi.org/10.1162/neco_a_01193

Improving the Antinoise Ability of DNNs via a Bio-Inspired Noise Adaptive Activation Function Rand Softplus.

Neural Comput 2019 Apr 12:1-19. Epub 2019 Apr 12.

School of Computers, Guangdong University of Technology, Guangzhou 51006, China

Although deep neural networks (DNNs) have led to many remarkable results in cognitive tasks, they are still far from catching up with human-level cognition in antinoise capability. New research indicates how brittle and susceptible current models are to small variations in data distribution. In this letter, we study the stochasticity-resistance character of biological neurons by simulating the input-output response process of a leaky integrate-and-fire (LIF) neuron model, and we propose a novel activation function, rand softplus (RSP), to model that response process.
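
The exact definition of rand softplus is given in the letter; as a loudly labeled assumption, the sketch below uses one plausible form, a per-unit random mixture of softplus and ReLU meant to mimic neuron-to-neuron variability in the LIF response curve.

```python
import numpy as np

def rand_softplus(x, rho):
    """Assumed form: per-unit convex mixture of softplus and ReLU.
    rho (one value per unit) stands in for the noise-dependent shape
    parameter; the letter's exact definition may differ."""
    softplus = np.logaddexp(0.0, x)          # numerically stable log(1 + e^x)
    return (1.0 - rho) * softplus + rho * np.maximum(x, 0.0)

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 10))                 # pre-activations, 10 units
rho = rng.uniform(0.0, 1.0, size=10)         # random per-unit coefficients
print(rand_softplus(x, rho).shape)           # (4, 10)
```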

DOI: http://dx.doi.org/10.1162/neco_a_01192

Asynchronous Event-Based Motion Processing: From Visual Events to Probabilistic Sensory Representation.

Neural Comput 2019 Apr 12:1-25. Epub 2019 Apr 12.

Natural Vision and Computation Team, Vision Institute, Université Pierre et Marie Curie-Paris 6 (UPMC), Sorbonne Université UMR S968 Inserm, UPMC, CHNO des Quinze-Vingts, CNRS UMRS 7210, Paris 75012, France; University of Pittsburgh Medical Center, Pittsburgh, PA 15213; and Carnegie Mellon University, Robotics Institute, Pittsburgh, PA 15213, U.S.A.

In this work, we propose a two-layered descriptive model for motion processing from the retina to the cortex, with event-based input from the asynchronous time-based image sensor (ATIS) camera. Spatial and spatiotemporal filtering of visual scenes by motion energy detectors has been implemented in two steps: a simple layer of a lateral geniculate nucleus model and a set of three-dimensional Gabor kernels, eventually forming a probabilistic population response. The high temporal resolution of independent and asynchronous local sensory pixels from the ATIS provides a realistic stimulation for studying biological motion processing, as well as for developing bio-inspired motion processors for computer vision applications.

DOI: http://dx.doi.org/10.1162/neco_a_01191

Learning Moral Graphs in Construction of High-Dimensional Bayesian Networks for Mixed Data.

Neural Comput 2019 Apr 12:1-32. Epub 2019 Apr 12.

Department of Statistics, Purdue University, West Lafayette, IN 47906, U.S.A.

Bayesian networks have been widely used in many scientific fields for describing the conditional independence relationships among a large set of random variables. This letter proposes a novel algorithm, the so-called -learning algorithm, for learning the moral graph for high-dimensional Bayesian networks. The moral graph is a Markov network representation of the Bayesian network and also the key to construction of the Bayesian network for constraint-based algorithms.

DOI: http://dx.doi.org/10.1162/neco_a_01190

Vector-Derived Transformation Binding: An Improved Binding Operation for Deep Symbol-Like Processing in Neural Networks.

Neural Comput 2019 May 18;31(5):849-869. Epub 2019 Mar 18.

Centre for Theoretical Neuroscience, University of Waterloo, Waterloo, ON N2L 3G1 Canada

We present a new binding operation, vector-derived transformation binding (VTB), for use in vector symbolic architectures (VSAs). The performance of VTB is compared to circular convolution, as used in holographic reduced representations (HRRs), in terms of list and stack encoding capacity. A special focus is given to the possibility of a neural implementation by means of the Neural Engineering Framework (NEF).
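
VTB itself is defined in the paper; the sketch below implements the baseline it is compared against, circular-convolution binding from holographic reduced representations, where binding is circular convolution (a pointwise product in Fourier space) and unbinding is correlation with the involuted vector.

```python
import numpy as np

def bind(a, b):
    """HRR binding: circular convolution, computed via FFT."""
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def unbind(c, a):
    """Approximate inverse: bind with the involution of a."""
    a_inv = np.concatenate(([a[0]], a[1:][::-1]))
    return bind(c, a_inv)

d = 512
rng = np.random.default_rng(0)
a, b = rng.normal(0, 1 / np.sqrt(d), (2, d))   # near-unit random vectors
c = bind(a, b)
b_hat = unbind(c, a)
cosine = np.dot(b_hat, b) / (np.linalg.norm(b_hat) * np.linalg.norm(b))
print(cosine)                                   # close to 1: b is recovered
```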

DOI: http://dx.doi.org/10.1162/neco_a_01179

Information Geometry for Regularized Optimal Transport and Barycenters of Patterns.

Neural Comput 2019 May 18;31(5):827-848. Epub 2019 Mar 18.

Google, 75009 Paris and CREST, ENSAE, 91120 Palaiseau, France

We propose a new divergence on the manifold of probability distributions, building on the entropic regularization of optimal transportation problems. As Cuturi (2013) showed, regularizing the optimal transport problem with an entropic term brings several computational benefits. However, because of that regularization, the resulting approximation of the optimal transport cost does not define a proper distance or divergence between probability distributions.
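
For context, this is the standard Sinkhorn scaling iteration for the entropically regularized transport problem the abstract builds on (Cuturi, 2013); the divergence proposed in the paper is constructed on top of such regularized costs and is not reproduced here.

```python
import numpy as np

def sinkhorn(a, b, C, eps=0.05, iters=1000):
    """Entropically regularized OT via Sinkhorn scaling iterations."""
    K = np.exp(-C / eps)
    u = np.ones_like(a)
    for _ in range(iters):
        v = b / (K.T @ u)
        u = a / (K @ v)
    P = u[:, None] * K * v[None, :]       # optimal coupling
    return np.sum(P * C)                  # regularized transport cost

n = 50
rng = np.random.default_rng(0)
x, y = np.sort(rng.normal(size=n)), np.sort(rng.normal(1.0, size=n))
C = (x[:, None] - y[None, :]) ** 2
C /= C.max()                              # normalize costs for stability
a = b = np.full(n, 1.0 / n)
print(sinkhorn(a, b, C))
```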

DOI: http://dx.doi.org/10.1162/neco_a_01178

Inhibition and Excitation Shape Activity Selection: Effect of Oscillations in a Decision-Making Circuit.

Neural Comput 2019 May 18;31(5):870-896. Epub 2019 Mar 18.

Department of Computer Science, University of Sheffield, Sheffield, U.K.

Decision making is a complex task, and its underlying mechanisms that regulate behavior, such as the implementation of the coupling between physiological states and neural networks, are hard to decipher. To gain more insight into neural computations underlying ongoing binary decision-making tasks, we consider a neural circuit that guides the feeding behavior of a hypothetical animal making dietary choices. We adopt an inhibition motif from neural network theory and propose a dynamical system characterized by nonlinear feedback, which links mechanism (the implementation of the neural circuit and its coupling to the animal's nutritional state) and function (improving behavioral performance).

DOI: http://dx.doi.org/10.1162/neco_a_01185

Introducing User-Prescribed Constraints in Markov Chains for Nonlinear Dimensionality Reduction.

Neural Comput 2019 May 18;31(5):980-997. Epub 2019 Mar 18.

Department of Systems Biology, Columbia University, New York, NY 10032, U.S.A.

Stochastic kernel-based dimensionality-reduction approaches have become popular in the past decade. The central component of many of these methods is a symmetric kernel that quantifies the vicinity between pairs of data points and a kernel-induced Markov chain on the data. Typically, the Markov chain is fully specified by the kernel through row normalization.
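
The row normalization the abstract refers to is the standard construction behind diffusion maps; a minimal sketch follows. The paper's contribution, user-prescribed constraints on the Markov chain, is not implemented here.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                     # toy data
D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-D2 / np.median(D2))                   # symmetric kernel
P = K / K.sum(axis=1, keepdims=True)              # row-normalized Markov chain
# Leading nontrivial eigenvectors of P give diffusion-map coordinates.
vals, vecs = np.linalg.eig(P)
order = np.argsort(-vals.real)
embedding = vecs.real[:, order[1:3]]              # skip the constant eigenvector
print(embedding.shape)                            # (200, 2)
```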

DOI: http://dx.doi.org/10.1162/neco_a_01184

Semisupervised Deep Stacking Network with Adaptive Learning Rate Strategy for Motor Imagery EEG Recognition.

Neural Comput 2019 May 18;31(5):919-942. Epub 2019 Mar 18.

Key Laboratory of Network Control and Intelligent Instrument, Chongqing University of Posts and Telecommunications, Ministry of Education, Chongqing 400065, China

Practical motor imagery electroencephalogram (EEG) data-based applications are limited by the waste of unlabeled samples in supervised learning and excessive time consumption in the pretraining period. A semisupervised deep stacking network with an adaptive learning rate strategy (SADSN) is proposed to solve the sample loss caused by supervised learning of EEG data and the extraction of manual features. The SADSN adopts the idea of an adaptive learning rate into a contrastive divergence (CD) algorithm to accelerate its convergence.

DOI: http://dx.doi.org/10.1162/neco_a_01183

Multiple Timescale Online Learning Rules for Information Maximization with Energetic Constraints.

Neural Comput 2019 May 18;31(5):943-979. Epub 2019 Mar 18.

Department of Electrical and Systems Engineering, Washington University in St. Louis, St. Louis, MO, 63130, U.S.A.

A key aspect of the neural coding problem is understanding how representations of afferent stimuli are built through the dynamics of learning and adaptation within neural networks. The infomax paradigm is built on the premise that such learning attempts to maximize the mutual information between input stimuli and neural activities. In this letter, we tackle the problem of such information-based neural coding with an eye toward two conceptual hurdles.

DOI: http://dx.doi.org/10.1162/neco_a_01182

Sparse Associative Memory.

Authors:
Heiko Hoffmann

Neural Comput 2019 May 18;31(5):998-1014. Epub 2019 Mar 18.

HRL Laboratories, Malibu, CA 90265, U.S.A.

It is still unknown how associative biological memories operate. Hopfield networks are popular models of associative memory, but they suffer from spurious memories and low efficiency. Here, we present a new model of an associative memory that overcomes these deficiencies.
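
The paper's model is new; for reference, here is the classical Hopfield baseline it is measured against, with Hebbian storage and synchronous recall from a corrupted cue. Spurious attractors and the roughly 0.14N capacity limit show up quickly if the number of patterns P is pushed higher.

```python
import numpy as np

rng = np.random.default_rng(0)
N, P = 200, 10
xi = rng.choice([-1, 1], size=(P, N))             # stored patterns
W = (xi.T @ xi) / N                               # Hebbian weights
np.fill_diagonal(W, 0.0)

s = xi[0].copy()
s[:40] *= -1                                      # corrupt 20% of the bits
for _ in range(20):                               # synchronous recall
    s = np.where(W @ s >= 0, 1, -1)
print(np.mean(s == xi[0]))                        # overlap with stored pattern
```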

DOI: http://dx.doi.org/10.1162/neco_a_01181

Brain Morphometry Methods for Feature Extraction in Random Subspace Ensemble Neural Network Classification of First-Episode Schizophrenia.

Neural Comput 2019 May 18;31(5):897-918. Epub 2019 Mar 18.

Masaryk University and University Hospital Brno, Department of Psychiatry, 625 00, Brno, Czech Republic

Machine learning (ML) is a growing field that provides tools for automatic pattern recognition. The neuroimaging community is currently trying to take advantage of ML to develop an auxiliary diagnostic tool for schizophrenia. In this letter, we present a classification framework based on features extracted from magnetic resonance imaging (MRI) data using two automatic whole-brain morphometry methods: voxel-based morphometry (VBM) and deformation-based morphometry (DBM).

DOI: http://dx.doi.org/10.1162/neco_a_01180

Multiclass Alpha Integration of Scores from Multiple Classifiers.

Neural Comput 2019 04 14;31(4):806-825. Epub 2019 Feb 14.

Universitat Politècnica de València, Instituto de Telecomunicaciones y Aplicaciones Multimedia, 46022 Valencia, Spain

Alpha integration methods have been used for integrating stochastic models and fusion in the context of detection (binary classification). Our work proposes separated score integration (SSI), a new method based on alpha integration to perform soft fusion of scores in multiclass classification problems, one of the most common problems in automatic classification. Theoretical derivation is presented to optimize the parameters of this method to achieve the least mean squared error (LMSE) or the minimum probability of error (MPE).
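
A sketch of the underlying alpha-integration operation, assuming Amari's standard alpha-mean with the representation f_α(p) = p^((1−α)/2) (and f(p) = log p at α = 1); the paper's SSI method additionally optimizes the weights and alpha per class, which is not shown.

```python
import numpy as np

def alpha_mean(p, w, alpha):
    """Assumed Amari-style alpha-mean of classifier scores.
    alpha = -1 gives the arithmetic mean; alpha -> 1 the geometric mean."""
    p = np.clip(p, 1e-12, 1.0)
    if np.isclose(alpha, 1.0):
        return np.exp(np.sum(w * np.log(p), axis=0))
    f = p ** ((1.0 - alpha) / 2.0)
    return np.sum(w * f, axis=0) ** (2.0 / (1.0 - alpha))

scores = np.array([[0.7, 0.2, 0.1],    # classifier 1, per-class scores
                   [0.6, 0.3, 0.1]])   # classifier 2
w = np.array([[0.5], [0.5]])           # fusion weights
fused = alpha_mean(scores, w, alpha=-1.0)
print(fused / fused.sum())             # renormalized fused scores
```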

DOI: http://dx.doi.org/10.1162/neco_a_01169

Decreasing the Size of the Restricted Boltzmann Machine.

Neural Comput 2019 04 14;31(4):784-805. Epub 2019 Feb 14.

Graduate School of Information Science and Technology, Department of Mathematical Informatics, University of Tokyo, Bunkyo-ku, Tokyo 113-8654, Japan

In this letter, we propose a method to decrease the number of hidden units of the restricted Boltzmann machine while avoiding a decrease in the performance quantified by the Kullback-Leibler divergence. Our algorithm is then demonstrated by numerical simulations.

DOI: http://dx.doi.org/10.1162/neco_a_01176

Deconstructing Odorant Identity via Primacy in Dual Networks.

Neural Comput 2019 04 14;31(4):710-737. Epub 2019 Feb 14.

Cold Spring Harbor Laboratory, Cold Spring Harbor, NY 11724, U.S.A.

In the olfactory system, odor percepts retain their identity despite substantial variations in concentration, timing, and background. We study a novel strategy for encoding intensity-invariant stimulus identity that is based on representing relative rather than absolute values of stimulus features. For example, in what is known as the primacy coding model, odorant identities are represented by the condition that some odorant receptors are activated more strongly than others.
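
A toy illustration of the primacy idea: with a saturating receptor activation that is monotone in affinity, the set of the p most strongly activated receptors is unchanged by concentration, so it can serve as an intensity-invariant identity code. The specific activation function below is an assumption for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
n_receptors, p = 50, 8
affinity = rng.lognormal(size=n_receptors)        # receptor sensitivities

def primacy_set(concentration):
    # saturating activation; the ranking is concentration-invariant here
    act = affinity * concentration / (1.0 + affinity * concentration)
    return set(np.argsort(-act)[:p])              # p most active receptors

print(primacy_set(0.1) == primacy_set(10.0))      # True: identity preserved
```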

DOI: http://dx.doi.org/10.1162/neco_a_01175

Gated Orthogonal Recurrent Units: On Learning to Forget.

Neural Comput 2019 04 14;31(4):765-783. Epub 2019 Feb 14.

University of Montreal, Montreal H3T 1J4, Quebec, Canada

We present a novel recurrent neural network (RNN)-based model that combines the remembering ability of unitary evolution RNNs with the ability of gated RNNs to effectively forget redundant or irrelevant information in its memory. We achieve this by extending restricted orthogonal evolution RNNs with a gating mechanism similar to gated recurrent unit RNNs with a reset gate and an update gate. Our model is able to outperform long short-term memory, gated recurrent units, and vanilla unitary or orthogonal RNNs on several long-term-dependency benchmark tasks.
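
A rough numpy caricature of the idea: GRU-style update and reset gates wrapped around a recurrence whose weight matrix is kept orthogonal. The actual GORU uses a different parameterization and nonlinearity (the unitary-RNN tradition uses, e.g., a modReLU), so treat this only as a sketch of the gating-plus-orthogonality combination.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_h = 4, 16
W_h, _ = np.linalg.qr(rng.normal(size=(d_h, d_h)))   # orthogonal recurrence
W_x = rng.normal(scale=0.1, size=(d_h, d_in))
Wz, Uz = rng.normal(scale=0.1, size=(d_h, d_in)), rng.normal(scale=0.1, size=(d_h, d_h))
Wr, Ur = rng.normal(scale=0.1, size=(d_h, d_in)), rng.normal(scale=0.1, size=(d_h, d_h))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def step(h, x):
    z = sigmoid(Wz @ x + Uz @ h)                     # update gate
    r = sigmoid(Wr @ x + Ur @ h)                     # reset gate
    h_tilde = np.tanh(W_x @ x + W_h @ (r * h))       # gated orthogonal recurrence
    return z * h + (1.0 - z) * h_tilde

h = np.zeros(d_h)
for x in rng.normal(size=(20, d_in)):
    h = step(h, x)
print(np.linalg.norm(h))
```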

DOI: http://dx.doi.org/10.1162/neco_a_01174

Biologically Realistic Mean-Field Models of Conductance-Based Networks of Spiking Neurons with Adaptation.

Neural Comput 2019 04 14;31(4):653-680. Epub 2019 Feb 14.

Unité de Neuroscience, Information et Complexité, CNRS FRE 3693, 91198 Gif sur Yvette, France, and European Institute for Theoretical Neuroscience, 75012 Paris, France

Accurate population models are needed to build very large-scale neural models, but their derivation is difficult for realistic networks of neurons, in particular when nonlinear properties are involved, such as conductance-based interactions and spike-frequency adaptation. Here, we consider such models based on networks of adaptive exponential integrate-and-fire excitatory and inhibitory neurons. Using a master equation formalism, we derive a mean-field model of such networks and compare it to the full network dynamics.

DOI: http://dx.doi.org/10.1162/neco_a_01173

A Distributed Framework for the Construction of Transport Maps.

Neural Comput 2019 04 14;31(4):613-652. Epub 2019 Feb 14.

Department of Bioengineering, University of California, San Diego, La Jolla, CA 92093, U.S.A.

The need to reason about uncertainty in large, complex, and multimodal data sets has become increasingly common across modern scientific environments. The ability to transform samples from one distribution …

DOI: http://dx.doi.org/10.1162/neco_a_01172

Estimating Scale-Invariant Future in Continuous Time.

Neural Comput 2019 04 14;31(4):681-709. Epub 2019 Feb 14.

Center for Memory and Brain, Department of Psychological and Brain Sciences, Boston University, Boston, MA 02215, U.S.A.

Natural learners must compute an estimate of future outcomes that follow from a stimulus in continuous time. Widely used reinforcement learning algorithms discretize continuous time and estimate either transition functions from one step to the next (model-based algorithms) or a scalar value of exponentially discounted future reward using the Bellman equation (model-free algorithms). An important drawback of model-based algorithms is that computational cost grows linearly with the amount of time to be simulated.

DOI: http://dx.doi.org/10.1162/neco_a_01171

Filtering Compensation for Delays and Prediction Errors during Sensorimotor Control.

Neural Comput 2019 04 14;31(4):738-764. Epub 2019 Feb 14.

Institute of Information and Communication Technologies, Electronics and Applied Mathematics, University of Louvain, Louvain-la-Neuve 1348, Belgium

Compensating for sensorimotor noise and for temporal delays has been identified as a major function of the nervous system. Although these aspects have often been described separately in the frameworks of optimal cue combination or motor prediction during movement planning, control-theoretic models suggest that these two operations are performed simultaneously, and mounting evidence indicates that motor commands are based on sensory predictions rather than sensory states. In this letter, we study the benefit of state estimation for predictive sensorimotor control.

DOI: http://dx.doi.org/10.1162/neco_a_01170

A Novel Optimization Framework to Improve the Computational Cost of Muscle Activation Prediction for a Neuromusculoskeletal System.

Neural Comput 2019 03 15;31(3):574-595. Epub 2019 Jan 15.

Department of Mechanical Engineering, Kyushu University, Nishi-ku, Fukuoka 819-0395, Japan

The high computational cost (CC) of neuromusculoskeletal modeling is usually considered a serious barrier to clinical applications. Different approaches have been developed to reduce CC and improve the accuracy of muscle activation prediction based on forward and inverse analyses by applying different optimization algorithms. This study proposes two novel approaches, inverse muscular dynamics with inequality constraints (IMDIC) and inverse-forward muscular dynamics with inequality constraints (IFMDIC), not only to reduce CC but also to correct the computational errors of the well-known approach of extended inverse dynamics (EID).

DOI: http://dx.doi.org/10.1162/neco_a_01167

Advancing System Performance with Redundancy: From Biological to Artificial Designs.

Neural Comput 2019 03 15;31(3):555-573. Epub 2019 Jan 15.

Biomedical Engineering, University of Minnesota, Minneapolis, MN 55455, U.S.A.

Redundancy is a fundamental characteristic of many biological processes such as those in the genetic, visual, muscular, and nervous systems, yet its driving mechanism is not fully understood. Until recently, redundancy was understood only as a means of attaining fault tolerance, an understanding reflected in the design of many man-made systems. In contrast, our previous work on redundant sensing (RS) demonstrated an example where redundancy can be engineered solely to enhance accuracy and precision.

DOI: http://dx.doi.org/10.1162/neco_a_01166

State-Space Representations of Deep Neural Networks.

Neural Comput 2019 03 15;31(3):538-554. Epub 2019 Jan 15.

Department of Mechanical Engineering, Pennsylvania State University, University Park, PA 16802, U.S.A.

This letter deals with neural networks as dynamical systems governed by finite-difference equations. It shows that the introduction of …

DOI: http://dx.doi.org/10.1162/neco_a_01165

Gradient Descent with Identity Initialization Efficiently Learns Positive-Definite Linear Transformations by Deep Residual Networks.

Neural Comput 2019 03 15;31(3):477-502. Epub 2019 Jan 15.

Google, Mountain View, CA 94043, U.S.A.

We analyze algorithms for approximating a function f(x) = Φx mapping …
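
The truncated abstract concerns gradient descent from identity initialization on deep linear residual networks f(x) = (I + W_L)⋯(I + W_1)x trained to match a linear map Φ. The sketch below runs that experiment on a small positive-definite Φ; the normalization, depth, and step size are illustrative choices, not the paper's setting.

```python
import numpy as np

rng = np.random.default_rng(0)
d, L, lr = 5, 4, 0.05
A = rng.normal(size=(d, d))
Phi = A @ A.T + d * np.eye(d)            # a positive-definite target map
Phi /= np.linalg.norm(Phi, 2)            # keep the spectral norm at 1
Ws = [np.zeros((d, d)) for _ in range(L)]   # identity initialization: I + 0

def product(Ws):
    M = np.eye(d)
    for W in Ws:
        M = (np.eye(d) + W) @ M          # M = (I + W_L) ... (I + W_1)
    return M

for _ in range(500):
    G = product(Ws) - Phi                # grad of 0.5 * ||M - Phi||_F^2 wrt M
    for i in range(L):
        left = np.eye(d)                 # factors above layer i
        for W in Ws[i + 1:]:
            left = (np.eye(d) + W) @ left
        right = np.eye(d)                # factors below layer i
        for W in Ws[:i]:
            right = (np.eye(d) + W) @ right
        Ws[i] -= lr * left.T @ G @ right.T
print(np.linalg.norm(product(Ws) - Phi))  # small: Phi has been learned
```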

DOI: http://dx.doi.org/10.1162/neco_a_01164

Scalable and Flexible Unsupervised Feature Selection.

Neural Comput 2019 03 15;31(3):517-537. Epub 2019 Jan 15.

Center for Optical Imagery Analysis and Learning, Northwestern Polytechnical University, Xi'an 710072, China

Recently, graph-based unsupervised feature selection algorithms (GUFS) have been shown to efficiently handle prevalent high-dimensional unlabeled data. One common drawback of existing graph-based approaches is that they tend to be time-consuming and require large storage, especially as data sets grow. Research has started to use anchors to accelerate graph-based learning models for feature selection, but the hard linear constraint between the data matrix and the lower-dimensional representation is usually too strict for many applications.

DOI: http://dx.doi.org/10.1162/neco_a_01163

Forgetting Memories and Their Attractiveness.

Authors:
Enzo Marinari

Neural Comput 2019 03 15;31(3):503-516. Epub 2019 Jan 15.

Dipartimento di Fisica, Sapienza Università di Roma; INFN Sezione di Roma 1; and Nanotech-CNR, UOS di Roma, 00185 Roma, Italy

We study numerically the memory that forgets, introduced in 1986 by Parisi by bounding the synaptic strength with a mechanism that avoids confusion, allows the most recently learned patterns to be remembered, and has a physiologically well-defined meaning. We analyze a number of features of this learning scheme for a finite number of neurons and a finite number of patterns. We discuss how the system behaves in the large but finite …

DOI: http://dx.doi.org/10.1162/neco_a_01162

Dynamic Computational Model of the Human Spinal Cord Connectome.

Neural Comput 2019 02 21;31(2):388-416. Epub 2018 Dec 21.

Department of Neurosurgery, Beth Israel Deaconess Medical Center, Boston, MA 02215, U.S.A.

Connectomes abound, but few for the human spinal cord. Using anatomical data in the literature, we constructed a draft connectivity map of the human spinal cord connectome, providing a template for the many calibrations of specialized behavior to be overlaid on it and the basis for an initial computational model. A thorough literature review gleaned cell types, connectivity, and connection strength indications.

DOI: http://dx.doi.org/10.1162/neco_a_01159

Functional Diversity in the Retina Improves the Population Code.

Neural Comput 2019 02 21;31(2):270-311. Epub 2018 Dec 21.

Princeton Neuroscience Institute, Princeton University, Princeton, NJ 08544, U.S.A.; Department of Physics, Ecole Normale Supérieure, 75005 Paris; Laboratoire de Physique Statistique, Ecole Normale Supérieure, PSL Research University, 75231 Paris; Université Paris Diderot Sorbonne Paris Cité, 75031 Paris; Sorbonne Universités UPMC Université Paris 6, 75005 Paris, France; CNRS

Within a given brain region, individual neurons exhibit a wide variety of different feature selectivities. Here, we investigated the impact of this extensive functional diversity on the population neural code. Our approach was to build optimal decoders to discriminate among stimuli using the spiking output of a real, measured neural population and compare its performance against a matched, homogeneous neural population with the same number of cells and spikes.

DOI: http://dx.doi.org/10.1162/neco_a_01158

First Passage Time Memory Lifetimes for Simple, Multistate Synapses: Beyond the Eigenvector Requirement.

Authors:
Terry Elliott

Neural Comput 2019 01 21;31(1):8-67. Epub 2018 Dec 21.

Department of Electronics and Computer Science, University of Southampton, Highfield, Southampton, SO17 1BJ, U.K.

Models of associative memory with discrete-strength synapses are palimpsests, learning new memories by forgetting old ones. Memory lifetimes can be defined by the mean first passage time (MFPT) for a perceptron's activation to fall below firing threshold. By imposing the condition that the vector of possible strengths available to a synapse is a left eigenvector of the stochastic matrix governing transitions in strength, we previously derived results for MFPTs and first passage time (FPT) distributions in models with simple, multistate synapses.
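
A toy caricature of the setting (not the eigenvector construction the abstract refers to): synapses hop among a few discrete strengths under ongoing plasticity, and the memory lifetime is the first passage time at which the tracked pattern's perceptron signal falls below threshold. All sizes and rates below are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
N, n_states, q = 1000, 4, 0.2              # synapses, strength levels, plasticity rate
levels = np.linspace(-1.0, 1.0, n_states)

def first_passage_time(theta=0.05, t_max=100000):
    xi = rng.choice([-1, 1], N)                      # tracked memory pattern
    state = np.where(xi > 0, n_states - 1, 0)        # encode: saturate toward xi
    for t in range(1, t_max):
        # ongoing storage of other memories = random plasticity events
        kick = rng.choice([-1, 0, 1], size=N, p=[q / 2, 1 - q, q / 2])
        state = np.clip(state + kick, 0, n_states - 1)
        if np.mean(xi * levels[state]) < theta:      # perceptron signal dies
            return t
    return t_max

print(np.mean([first_passage_time() for _ in range(20)]))  # empirical MFPT
```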

DOI: http://dx.doi.org/10.1162/neco_a_01147

Accelerating Nonnegative Matrix Factorization Algorithms Using Extrapolation.

Neural Comput 2019 02 21;31(2):417-439. Epub 2018 Dec 21.

Department of Mathematics and Operational Research, Faculté Polytechnique, Université de Mons, 7000 Mons, Belgium

We propose a general framework to significantly accelerate algorithms for nonnegative matrix factorization (NMF). This framework is inspired by the extrapolation scheme used to accelerate gradient methods in convex optimization and by the method of parallel tangents. However, the use of extrapolation in the context of exact coordinate-descent algorithms tackling the nonconvex NMF problem is novel.
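
The paper applies extrapolation to exact coordinate-descent NMF algorithms such as A-HALS; as a simplified stand-in, the sketch below adds the same momentum-style extrapolation to plain multiplicative updates, which is not the paper's algorithm but shows the mechanism.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.abs(rng.normal(size=(100, 60)))
r, beta = 10, 0.5                           # rank, extrapolation parameter
W = np.abs(rng.normal(size=(100, r)))
H = np.abs(rng.normal(size=(r, 60)))
Wp, Hp = W.copy(), H.copy()                 # previous iterates

for _ in range(200):
    # extrapolated points (method-of-parallel-tangents flavor)
    Wy = np.maximum(W + beta * (W - Wp), 0)
    Hy = np.maximum(H + beta * (H - Hp), 0)
    Wp, Hp = W, H
    # multiplicative updates evaluated at the extrapolated iterates
    H = Hy * (Wy.T @ X) / (Wy.T @ Wy @ Hy + 1e-12)
    W = Wy * (X @ H.T) / (Wy @ (H @ H.T) + 1e-12)
print(np.linalg.norm(X - W @ H) / np.linalg.norm(X))  # relative residual
```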

DOI: http://dx.doi.org/10.1162/neco_a_01157

Learning Invariant Features in Modulatory Networks through Conflict and Ambiguity.

Neural Comput 2019 02 21;31(2):344-387. Epub 2018 Dec 21.

Department of Computer Science and Department of Psychology and Neuroscience Graduate Program, University of Southern California, Los Angeles, CA 90089, U.S.A.

This work lays the foundation for a framework of cortical learning based on the idea of a competitive column, which is inspired by the functional organization of neurons in the cortex. A column describes a prototypical organization for neurons that gives rise to an ability to learn scale, rotation, and translation-invariant features. This is empowered by a recently developed learning rule, conflict learning, which enables the network to learn over both driving and modulatory feedforward, feedback, and lateral inputs.

DOI: http://dx.doi.org/10.1162/neco_a_01156

Calculating the Mutual Information between Two Spike Trains.

Authors:
Conor Houghton

Neural Comput 2019 02 21;31(2):330-343. Epub 2018 Dec 21.

Computational Neuroscience Unit, School of Computer Science, Electrical and Electronic Engineering, and Engineering Maths, University of Bristol, Bristol, Avon BS8 1UB, UK

It is difficult to estimate the mutual information between spike trains because established methods require more data than are usually available. Kozachenko-Leonenko estimators promise to solve this problem but include a smoothing parameter that must be set. We propose here that the smoothing parameter can be selected by maximizing the estimated unbiased mutual information.
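
A sketch of the Kozachenko-Leonenko machinery on continuous toy data, with the neighbor count k playing the role of the smoothing parameter: one computes the estimate for several k and, following the letter's proposal, would select the k that maximizes the (bias-corrected) mutual information. Applying this to spike trains additionally requires embedding the trains in a metric space, which is not shown here.

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.special import digamma, gammaln

def kl_entropy(X, k):
    """Kozachenko-Leonenko differential entropy estimate (nats)."""
    N, d = X.shape
    eps = cKDTree(X).query(X, k + 1)[0][:, -1]       # distance to k-th neighbor
    log_cd = (d / 2) * np.log(np.pi) - gammaln(d / 2 + 1)  # log unit-ball volume
    return digamma(N) - digamma(k) + log_cd + d * np.mean(np.log(eps + 1e-300))

def mi_estimate(X, Y, k):
    return kl_entropy(X, k) + kl_entropy(Y, k) - kl_entropy(np.hstack([X, Y]), k)

rng = np.random.default_rng(0)
x = rng.normal(size=(2000, 1))
y = x + 0.5 * rng.normal(size=(2000, 1))
for k in (2, 4, 8, 16):                              # the smoothing parameter
    print(k, mi_estimate(x, y, k))
print("true:", 0.5 * np.log(1 + 1 / 0.25))           # 0.5*log(5) nats
```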

DOI: http://dx.doi.org/10.1162/neco_a_01155

Modeling the Correlated Activity of Neural Populations: A Review.

Neural Comput 2019 02 21;31(2):233-269. Epub 2018 Dec 21.

Laboratoire de physique statistique, CNRS, Sorbonne Université, Université Paris-Diderot, and École normale supérieure, 75005 Paris, France

The principles of neural encoding and computations are inherently collective and usually involve large populations of interacting neurons with highly correlated activities. While theories of neural function have long recognized the importance of collective effects in populations of neurons, only in the past two decades has it become possible to record from many cells simultaneously using advanced experimental techniques with single-spike resolution and to relate these correlations to function and behavior. This review focuses on the modeling and inference approaches that have been recently developed to describe the correlated spiking activity of populations of neurons.
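
One workhorse from this literature is the pairwise maximum-entropy (Ising) model fit to measured means and correlations. The sketch below fits it by exact gradient ascent on a 5-neuron toy problem where the partition function can be enumerated; real populations require sampling-based or approximate inference.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(0)
N = 5
states = np.array(list(product([-1.0, 1.0], repeat=N)))   # all 2^N patterns

def stats(h, J):
    """Exact means and correlations of the pairwise maxent (Ising) model."""
    E = states @ h + 0.5 * np.einsum('si,ij,sj->s', states, J, states)
    p = np.exp(E - E.max())
    p /= p.sum()
    return p @ states, np.einsum('s,si,sj->ij', p, states, states)

h_true = rng.normal(scale=0.3, size=N)
J_true = 0.2 * rng.normal(size=(N, N))
J_true = (J_true + J_true.T) / 2
np.fill_diagonal(J_true, 0.0)
m_data, C_data = stats(h_true, J_true)      # "measured" statistics

h, J = np.zeros(N), np.zeros((N, N))
for _ in range(5000):                       # gradient ascent on log-likelihood
    m, C = stats(h, J)
    h += 0.1 * (m_data - m)
    J += 0.1 * (C_data - C)
    np.fill_diagonal(J, 0.0)
print(np.abs(h - h_true).max(), np.abs(J - J_true).max())  # near zero
```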

DOI: http://dx.doi.org/10.1162/neco_a_01154

Systems of Bounded Rational Agents with Information-Theoretic Constraints.

Neural Comput 2019 02 21;31(2):440-476. Epub 2018 Dec 21.

Institute of Neural Information Processing, Faculty of Engineering, Computer Science and Psychology, University of Ulm, Ulm, Baden-Württemberg, 89081 Germany

Specialization and hierarchical organization are important features of efficient collaboration in economic, artificial, and biological systems. Here, we investigate the hypothesis that both features can be explained by the fact that each entity of such a system is limited in a certain way. We propose an information-theoretic approach based on a free energy principle in order to computationally analyze systems of bounded rational agents that deal with such limitations optimally.
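
In the single-agent case, the free-energy-optimal policy has a well-known Blahut-Arimoto-style fixed point: p(a|s) ∝ p(a) exp(β U(s,a)), with the prior updated to the action marginal. The sketch below iterates those equations; the paper's multi-agent hierarchies build on this primitive. The utility matrix here is random, for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
nS, nA, beta = 4, 6, 2.0                  # beta: information-processing budget
U = rng.normal(size=(nS, nA))             # utility of action a in state s
p_s = np.full(nS, 1.0 / nS)

p_a = np.full(nA, 1.0 / nA)               # prior over actions
for _ in range(100):                      # Blahut-Arimoto-style iteration
    logits = np.log(p_a)[None, :] + beta * U
    p_a_s = np.exp(logits - logits.max(axis=1, keepdims=True))
    p_a_s /= p_a_s.sum(axis=1, keepdims=True)   # posterior policy p(a|s)
    p_a = p_s @ p_a_s                     # update prior to the marginal
# small beta -> near-uniform policy; large beta -> argmax_a U(s, a)
print(np.round(p_a_s, 3))
```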

DOI: http://dx.doi.org/10.1162/neco_a_01153

Equivalence of Equilibrium Propagation and Recurrent Backpropagation.

Neural Comput 2019 02 21;31(2):312-329. Epub 2018 Dec 21.

University of Montreal, Montreal, Quebec, H3T 1N8, Canada, and CIFAR

Recurrent backpropagation and equilibrium propagation are supervised learning algorithms for fixed-point recurrent neural networks, which differ in their second phase. In the first phase, both algorithms converge to a fixed point that corresponds to the configuration where the prediction is made. In the second phase, equilibrium propagation relaxes to another nearby fixed point corresponding to smaller prediction error, whereas recurrent backpropagation uses a side network to compute error derivatives iteratively.

DOI: http://dx.doi.org/10.1162/neco_a_01160

The Exact VC Dimension of the WiSARD n-Tuple Classifier.

Neural Comput 2019 01 21;31(1):176-207. Epub 2018 Nov 21.

Instituto Tércio Pacitti de Aplicações e Pesquisas Computacionais, Universidade Federal do Rio de Janeiro, Rio de Janeiro 21941-916, Brazil

The Wilkie, Stonham, and Aleksander recognition device (WiSARD) n-tuple classifier is a multiclass weightless neural network capable of learning a given pattern in a single step.
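
A minimal sketch of the WiSARD scheme the abstract refers to: each class owns a bank of RAM nodes addressed by n-tuples of a randomly permuted binary input; training writes addresses in one shot, and classification counts how many RAMs recognize the probe. The set-based RAMs and all sizes here are illustrative choices.

```python
import numpy as np

class WiSARD:
    """Minimal WiSARD n-tuple classifier sketch (one discriminator per class)."""
    def __init__(self, input_bits, n, n_classes, seed=0):
        rng = np.random.default_rng(seed)
        self.map = rng.permutation(input_bits)           # random tuple mapping
        self.n = n
        self.rams = [[set() for _ in range(input_bits // n)]
                     for _ in range(n_classes)]

    def _addresses(self, x):
        bits = x[self.map]
        return [tuple(bits[i:i + self.n])
                for i in range(0, len(bits) - self.n + 1, self.n)]

    def train(self, x, label):                           # one-shot learning
        for ram, addr in zip(self.rams[label], self._addresses(x)):
            ram.add(addr)

    def classify(self, x):
        scores = [sum(addr in ram for ram, addr in
                      zip(rams, self._addresses(x))) for rams in self.rams]
        return int(np.argmax(scores))

w = WiSARD(input_bits=64, n=4, n_classes=2)
rng = np.random.default_rng(1)
a, b = rng.integers(0, 2, 64), rng.integers(0, 2, 64)
w.train(a, 0)
w.train(b, 1)
noisy = a.copy()
noisy[:6] ^= 1                                           # flip a few bits
print(w.classify(noisy))                                 # -> 0
```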

DOI: http://dx.doi.org/10.1162/neco_a_01149

Supervised Dimensionality Reduction on Grassmannian for Image Set Recognition.

Neural Comput 2019 01 21;31(1):156-175. Epub 2018 Nov 21.

Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang 110016, China; Institutes for Robotics and Intelligent Manufacturing, Chinese Academy of Sciences, Shenyang 110169, China; Key Laboratory of Opto-Electronic Information Processing, Chinese Academy of Sciences, Shenyang 110016, China; and Key Lab of Image Understanding and Computer Vision, Liaoning Province, Shenyang 110016, China

Modeling videos and image sets by linear subspaces has achieved great success in various visual recognition tasks. However, subspaces constructed from visual data are notoriously embedded in a high-dimensional ambient space, which limits the applicability of existing techniques. This letter explores the possibility of a geometry-aware framework for constructing lower-dimensional subspaces with maximum discriminative power from high-dimensional subspaces in the supervised scenario.

DOI: http://dx.doi.org/10.1162/neco_a_01148

Ten Simple Rules for Organizing and Running a Successful Intensive Two-Week Course.

Neural Comput 2019 01 21;31(1):1-7. Epub 2018 Nov 21.

Departments of Bioengineering and Neuroscience, University of Pennsylvania, Philadelphia, PA 19104, U.S.A.

DOI: http://dx.doi.org/10.1162/neco_a_01146

Dual Neural Network Method for Solving Multiple Definite Integrals.

Neural Comput 2019 01 21;31(1):208-232. Epub 2018 Nov 21.

College of Sciences, Inner Mongolia University of Technology, Hohhot, Inner Mongolia 010051, China

This study, which examines a calculation method based on a dual neural network for solving multiple definite integrals, addresses the problems of inefficiency, inaccuracy, and difficulty in finding solutions. First, the method offers a dual neural network to construct a primitive function of the integral problem; it can approximate the primitive function of any given integrand with any precision. On this basis, a neural network calculation method that can solve multiple definite integrals whose upper and lower bounds are arbitrarily given is obtained by repeated application of the dual neural network to construction of the primitive function.

DOI: http://dx.doi.org/10.1162/neco_a_01145

Fixed Points of Competitive Threshold-Linear Networks.

Neural Comput 2019 01 21;31(1):94-155. Epub 2018 Nov 21.

School of Mathematical Sciences, University of Northern Colorado, Greeley, CO 80639, U.S.A.

Threshold-linear networks (TLNs) are models of neural networks that consist of simple, perceptron-like neurons and exhibit nonlinear dynamics determined by the network's connectivity. The fixed points of a TLN, including both stable and unstable equilibria, play a critical role in shaping its emergent dynamics. In this work, we provide two novel characterizations for the set of fixed points of a competitive TLN: the first is in terms of a simple sign condition, while the second relies on the concept of domination.
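
A small numerical companion to the abstract: for the TLN dynamics dx/dt = −x + [Wx + b]_+, a point with support S is a fixed point iff (I − W_SS)x_S = b_S with x_S > 0 and the inactive units satisfy (Wx + b)_i ≤ 0. The sketch enumerates supports for a random competitive (inhibitory) W; the paper's sign-condition and domination characterizations are the theory behind this brute-force check.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
n = 4
W = -np.abs(rng.normal(size=(n, n)))      # competitive (inhibitory) weights
np.fill_diagonal(W, 0.0)
b = np.ones(n)

fixed = []
for k in range(1, n + 1):
    for S in combinations(range(n), k):
        S = list(S)
        xS = np.linalg.solve(np.eye(k) - W[np.ix_(S, S)], b[S])
        x = np.zeros(n)
        x[S] = xS
        # valid iff active rates are positive and inactive units stay off
        off = np.setdiff1d(np.arange(n), S)
        if (xS > 0).all() and ((W @ x + b)[off] <= 0).all():
            fixed.append(x)
print(len(fixed), "fixed points of dx/dt = -x + [Wx + b]_+")
```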

DOI: http://dx.doi.org/10.1162/neco_a_01151

Decoding of Neural Data Using Cohomological Feature Extraction.

Neural Comput 2019 01 21;31(1):68-93. Epub 2018 Nov 21.

Kavli Institute for Systems Neuroscience, Norwegian University of Science and Technology, 7491 Trondheim, Norway

We introduce a novel data-driven approach to discover and decode features in the neural code coming from large population neural recordings with minimal assumptions, using cohomological feature extraction. We apply our approach to neural recordings of mice moving freely in a box, where we find a circular feature. We then observe that the decoded value corresponds well to the head direction of the mouse.

DOI: http://dx.doi.org/10.1162/neco_a_01150

Omitted Variable Bias in GLMs of Neural Spiking Activity.

Authors:
Ian H Stevenson

Neural Comput 2018 Oct 12:1-32. Epub 2018 Oct 12.

Department of Psychological Sciences, Department of Biomedical Engineering, and CT Institute for Brain and Cognitive Sciences, University of Connecticut, Storrs, CT 06269, U.S.A.

Generalized linear models (GLMs) have a wide range of applications in systems neuroscience describing the encoding of stimulus and behavioral variables, as well as the dynamics of single neurons. However, in any given experiment, many variables that have an impact on neural activity are not observed or not modeled. Here we demonstrate, in both theory and practice, how these omitted variables can result in biased parameter estimates for the effects that are included.
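
The core phenomenon is easy to reproduce: simulate a Poisson neuron driven by two correlated covariates, then fit GLMs with and without the second one. The coefficient on the included covariate absorbs part of the omitted one's effect. All numbers below are arbitrary simulation choices.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5000
x = rng.normal(size=n)                    # observed covariate
z = 0.8 * x + rng.normal(size=n)          # omitted covariate, correlated with x
rate = np.exp(0.2 + 0.5 * x + 0.7 * z)    # true model uses both
y = rng.poisson(rate)                     # spike counts

full = sm.GLM(y, sm.add_constant(np.column_stack([x, z])),
              family=sm.families.Poisson()).fit()
reduced = sm.GLM(y, sm.add_constant(x),
                 family=sm.families.Poisson()).fit()
print("full model coef on x:   ", full.params[1])      # ~0.5, as simulated
print("reduced model coef on x:", reduced.params[1])   # biased upward
```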

DOI: http://dx.doi.org/10.1162/neco_a_01138

Nonlinear Modeling of Neural Interaction for Spike Prediction Using the Staged Point-Process Model.

Neural Comput 2018 Oct 12:1-38. Epub 2018 Oct 12.

Department of Electronic and Computer Engineering and Department of Chemical and Biological Engineering, Hong Kong University of Science and Technology, Kowloon, Hong Kong SAR, 999077, China

Neurons communicate nonlinearly through spike activities. Generalized linear models (GLMs) describe spike activities with a cascade of a linear combination across inputs, a static nonlinear function, and an inhomogeneous Bernoulli or Poisson process, or Cox process if a self-history term is considered. This structure considers the output nonlinearity in spike generation but excludes the nonlinear interaction among input neurons.

DOI: http://dx.doi.org/10.1162/neco_a_01137

The Information Bottleneck and Geometric Clustering.

Neural Comput 2019 03 12;31(3):596-612. Epub 2018 Oct 12.

Initiative for the Theoretical Sciences, CUNY Graduate Center, New York, NY 10016, U.S.A.

The information bottleneck (IB) approach to clustering takes a joint distribution P(X,Y) and maps the data …
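
For reference, these are the standard self-consistent IB updates for a joint distribution P(X, Y) and |T| clusters; the paper's subject, their relation to geometric clustering, is analyzed at the level of these same equations. The random joint distribution below is a stand-in for data.

```python
import numpy as np

rng = np.random.default_rng(0)
nX, nY, nT, beta = 20, 5, 3, 10.0
pxy = rng.random((nX, nY))
pxy /= pxy.sum()                                   # joint P(X, Y)
px = pxy.sum(1)
py_x = pxy / px[:, None]

pt_x = rng.random((nX, nT))
pt_x /= pt_x.sum(1, keepdims=True)
for _ in range(200):                               # self-consistent IB updates
    pt = px @ pt_x                                 # cluster marginal P(t)
    py_t = (pt_x * px[:, None]).T @ py_x / pt[:, None]   # P(y|t)
    kl = np.sum(py_x[:, None, :] *
                np.log(py_x[:, None, :] / (py_t[None, :, :] + 1e-30)), axis=2)
    pt_x = pt[None, :] * np.exp(-beta * kl)        # P(t|x) ~ P(t) e^{-beta KL}
    pt_x /= pt_x.sum(1, keepdims=True)
print(np.round(pt_x[:5], 3))                       # soft cluster assignments
```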

DOI: http://dx.doi.org/10.1162/neco_a_01136

Dense Associative Memory Is Robust to Adversarial Inputs.

Neural Comput 2018 Oct 12:1-17. Epub 2018 Oct 12.

Princeton Neuroscience Institute, Princeton, NJ 08540, U.S.A.

Deep neural networks (DNNs) trained in a supervised way suffer from two known problems. First, the minima of the objective function used in learning correspond to data points (also known as rubbish examples or fooling images) that lack semantic similarity with the training data. Second, a clean input can be changed by a small, and often imperceptible for human vision, perturbation so that the resulting deformed input is misclassified by the network.

DOI: http://dx.doi.org/10.1162/neco_a_01143

Bayesian Modeling of Motion Perception Using Dynamical Stochastic Textures.

Neural Comput 2018 Oct 12:1-38. Epub 2018 Oct 12.

Département de Mathématique et Applications, École Normale Supérieure, Paris 75005, France, and CNRS, France

A common practice to account for psychophysical biases in vision is to frame them as consequences of a dynamic process relying on optimal inference with respect to a generative model. The study presented here details the complete formulation of such a generative model intended to probe visual motion perception with a dynamic texture model. It is derived in a set of axiomatic steps constrained by biological plausibility.

DOI: http://dx.doi.org/10.1162/neco_a_01142

Limitations of Proposed Signatures of Bayesian Confidence.

Neural Comput 2018 Oct 12:1-28. Epub 2018 Oct 12.

Center for Neural Science and Department of Psychology, New York University, New York, NY 10003, U.S.A.

The Bayesian model of confidence posits that confidence reflects the observer's posterior probability that the decision is correct. Hangya, Sanders, and Kepecs (2016) have proposed that researchers can test the Bayesian model by deriving qualitative signatures of Bayesian confidence (i.e., …).

DOI: http://dx.doi.org/10.1162/neco_a_01141