2,370 results match your criteria: Neural Computation [Journal]


Multiclass Alpha Integration of Scores from Multiple Classifiers.

Neural Comput 2019 Feb 14:1-20. Epub 2019 Feb 14.

Universitat Politècnica de València, Instituto de Telecomunicaciones y Aplicaciones Multimedia, 46022 Valencia, Spain

Alpha integration methods have been used for integrating stochastic models and for fusion in the context of detection (binary classification). Our work proposes separated score integration (SSI), a new method based on alpha integration to perform soft fusion of scores in multiclass classification, one of the most common problems in automatic classification. A theoretical derivation is presented to optimize the parameters of this method to achieve the least mean squared error (LMSE) or the minimum probability of error (MPE).
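
As a concrete reference point, alpha integration fuses positive scores through Amari's alpha-mean; a minimal sketch is below, with invented scores and weights. SSI's optimization of the weights and alpha under LMSE or MPE is what the letter contributes and is omitted here.

```python
import numpy as np

def alpha_integrate(scores, weights, alpha):
    """Amari alpha-mean of positive scores.

    scores:  (n_classifiers, n_classes) positive score matrix
    weights: (n_classifiers,) nonnegative weights summing to 1
    alpha:   -1 -> arithmetic mean, 1 -> geometric mean, 3 -> harmonic mean
    """
    scores = np.asarray(scores, dtype=float)
    w = np.asarray(weights, dtype=float)[:, None]
    if np.isclose(alpha, 1.0):                 # f(s) = log s
        return np.exp((w * np.log(scores)).sum(axis=0))
    p = (1.0 - alpha) / 2.0                    # f(s) = s^p
    return ((w * scores ** p).sum(axis=0)) ** (1.0 / p)

# Fuse per-class scores from three classifiers.
s = [[0.7, 0.1, 0.1, 0.1],
     [0.6, 0.2, 0.1, 0.1],
     [0.2, 0.5, 0.2, 0.1]]
w = [0.4, 0.4, 0.2]
for a in (-1.0, 1.0, 3.0):
    print(a, alpha_integrate(s, w, a).round(3))
```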

http://dx.doi.org/10.1162/neco_a_01169
February 2019

Decreasing the Size of the Restricted Boltzmann Machine.

Neural Comput 2019 Feb 14:1-22. Epub 2019 Feb 14.

Graduate School of Information Science and Technology, Department of Mathematical Informatics, University of Tokyo, Bunkyo-ku, Tokyo 113-8654, Japan

In this letter, we propose a method to decrease the number of hidden units of the restricted Boltzmann machine while avoiding a decrease in the performance quantified by the Kullback-Leibler divergence. Our algorithm is then demonstrated by numerical simulations.
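
For intuition, the quantity being controlled is computable exactly in tiny models: enumerate all visible configurations and compare the visible distributions of the original and a reduced machine. The sketch below (invented parameters) naively drops a hidden unit rather than applying the authors' algorithm.

```python
import numpy as np
from itertools import product

def log_pstar(v, W, b, c):
    """Unnormalized log p(v) of a Bernoulli RBM (hidden units summed out):
    log p*(v) = b.v + sum_j softplus(c_j + v.W[:, j])."""
    return v @ b + np.sum(np.logaddexp(0.0, c + v @ W))

def visible_dist(W, b, c):
    """Exact p(v) by enumerating all visible configurations (tiny nv only)."""
    vs = np.array(list(product([0.0, 1.0], repeat=len(b))))
    logp = np.array([log_pstar(v, W, b, c) for v in vs])
    p = np.exp(logp - logp.max())
    return p / p.sum()

rng = np.random.default_rng(0)
nv, nh = 6, 4
W = rng.normal(0, 0.5, (nv, nh))
b, c = rng.normal(0, 0.5, nv), rng.normal(0, 0.5, nh)

p_full = visible_dist(W, b, c)
p_red = visible_dist(W[:, :nh - 1], b, c[:nh - 1])   # naively drop one unit
print("KL(full || reduced) =", np.sum(p_full * np.log(p_full / p_red)))
```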

http://dx.doi.org/10.1162/neco_a_01176
February 2019

Deconstructing Odorant Identity via Primacy in Dual Networks.

Neural Comput 2019 Feb 14:1-28. Epub 2019 Feb 14.

Cold Spring Harbor Laboratory, Cold Spring Harbor, NY 11724, U.S.A.

In the olfactory system, odor percepts retain their identity despite substantial variations in concentration, timing, and background. We study a novel strategy for encoding intensity-invariant stimulus identity that is based on representing relative rather than absolute values of stimulus features. For example, in what is known as the primacy coding model, odorant identities are represented by the conditions that some odorant receptors are activated more strongly than others.
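
The primacy idea can be illustrated in a few lines: under a monotone (here saturating) dose-response, the set of the p most strongly activated receptors is invariant to concentration even though the absolute activations change. All constants below are invented.

```python
import numpy as np

def primacy_set(activations, p=3):
    """Identities of the p most strongly activated receptors."""
    return sorted(int(i) for i in np.argsort(activations)[-p:])

rng = np.random.default_rng(1)
affinity = rng.lognormal(size=20)          # receptor affinities to one odorant

for conc in (0.1, 1.0, 10.0):
    # Saturating, monotone dose-response: the receptor ranking is preserved.
    act = conc * affinity / (1.0 + conc * affinity)
    print(f"concentration {conc:5.1f} -> primacy set {primacy_set(act)}")
# The same top-p receptors appear at every concentration: relative, not
# absolute, activations carry the odorant's identity.
```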

http://dx.doi.org/10.1162/neco_a_01175
February 2019

Gated Orthogonal Recurrent Units: On Learning to Forget.

Neural Comput 2019 Feb 14:1-19. Epub 2019 Feb 14.

University of Montreal, Montreal H3T 1J4, Quebec, Canada

We present a novel recurrent neural network (RNN)-based model that combines the remembering ability of unitary evolution RNNs with the ability of gated RNNs to effectively forget redundant or irrelevant information in its memory. We achieve this by extending restricted orthogonal evolution RNNs with a gating mechanism similar to gated recurrent unit RNNs with a reset gate and an update gate. Our model is able to outperform long short-term memory, gated recurrent units, and vanilla unitary or orthogonal RNNs on several long-term-dependency benchmark tasks.
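
A rough numpy sketch of one gated-orthogonal step, assuming a GRU-style placement of reset and update gates around an exactly orthogonal transition with a modReLU nonlinearity; the shapes, gate placement, and constants are our assumptions rather than the paper's exact equations.

```python
import numpy as np

def modrelu(z, b):
    """modReLU nonlinearity often paired with orthogonal/unitary RNNs."""
    return np.sign(z) * np.maximum(np.abs(z) + b, 0.0)

def goru_step(h, x, U_o, Wz, Uz, Wr, Ur, Wx, bz, br, bh):
    """One gated-orthogonal update: GRU-style gates around an orthogonal
    transition U_o (schematic, not the paper's exact equations)."""
    sig = lambda a: 1.0 / (1.0 + np.exp(-a))
    z = sig(Wz @ x + Uz @ h + bz)            # update gate
    r = sig(Wr @ x + Ur @ h + br)            # reset gate
    h_cand = modrelu(U_o @ (r * h) + Wx @ x, bh)
    return z * h + (1.0 - z) * h_cand

rng = np.random.default_rng(0)
n, m = 8, 4
U_o, _ = np.linalg.qr(rng.normal(size=(n, n)))   # exactly orthogonal
Wz, Wr, Wx = (rng.normal(0, 0.1, (n, m)) for _ in range(3))
Uz, Ur = (rng.normal(0, 0.1, (n, n)) for _ in range(2))
bz, br, bh = (np.zeros(n) for _ in range(3))

h = np.zeros(n)
for t in range(5):
    h = goru_step(h, rng.normal(size=m), U_o, Wz, Uz, Wr, Ur, Wx, bz, br, bh)
print(np.round(h, 3))
```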

http://dx.doi.org/10.1162/neco_a_01174
February 2019

Biologically Realistic Mean-Field Models of Conductance-Based Networks of Spiking Neurons with Adaptation.

Neural Comput 2019 Feb 14:1-28. Epub 2019 Feb 14.

Unité de Neuroscience, Information et Complexité, CNRS FRE 3693, 91198 Gif sur Yvette, France, and European Institute for Theoretical Neuroscience, 75012 Paris, France

Accurate population models are needed to build very large-scale neural models, but their derivation is difficult for realistic networks of neurons, in particular when nonlinear properties are involved, such as conductance-based interactions and spike-frequency adaptation. Here, we consider such models based on networks of adaptive exponential integrate-and-fire excitatory and inhibitory neurons. Using a master equation formalism, we derive a mean-field model of such networks and compare it to the full network dynamics.

http://dx.doi.org/10.1162/neco_a_01173
February 2019

A Distributed Framework for the Construction of Transport Maps.

Neural Comput 2019 Feb 14:1-40. Epub 2019 Feb 14.

Department of Bioengineering, University of California, San Diego, La Jolla, CA 92093, U.S.A.

The need to reason about uncertainty in large, complex, and multimodal data sets has become increasingly common across modern scientific environments. The ability to transform samples from one distribution to another enables the solution to many problems in machine learning (e.g., …).

http://dx.doi.org/10.1162/neco_a_01172
February 2019

Estimating Scale-Invariant Future in Continuous Time.

Neural Comput 2019 Feb 14:1-29. Epub 2019 Feb 14.

Center for Memory and Brain, Department of Psychological and Brain Sciences, Boston, MA 02215, U.S.A.

Natural learners must compute an estimate of future outcomes that follow from a stimulus in continuous time. Widely used reinforcement learning algorithms discretize continuous time and estimate either transition functions from one step to the next (model-based algorithms) or a scalar value of exponentially discounted future reward using the Bellman equation (model-free algorithms). An important drawback of model-based algorithms is that computational cost grows linearly with the amount of time to be simulated.

http://dx.doi.org/10.1162/neco_a_01171
February 2019

Filtering Compensation for Delays and Prediction Errors during Sensorimotor Control.

Neural Comput 2019 Feb 14:1-27. Epub 2019 Feb 14.

Institute of Information and Communication Technologies, Electronics and Applied Mathematics, University of Louvain, Louvain-la-Neuve 1348, Belgium

Compensating for sensorimotor noise and for temporal delays has been identified as a major function of the nervous system. However, these two aspects have often been described separately, in the frameworks of optimal cue combination or of motor prediction during movement planning. Control-theoretic models suggest that these two operations are performed simultaneously, and mounting evidence supports the idea that motor commands are based on sensory predictions rather than on sensory states.

http://dx.doi.org/10.1162/neco_a_01170
February 2019

A Novel Optimization Framework to Improve the Computational Cost of Muscle Activation Prediction for a Neuromusculoskeletal System.

Neural Comput 2019 Mar 15;31(3):574-595. Epub 2019 Jan 15.

Department of Mechanical Engineering, Kyushu University, Nishi-ku, Fukuoka 819-0395, Japan

The high computational cost (CC) of neuromusculoskeletal modeling is usually considered a serious barrier in clinical applications. Different approaches have been developed to lessen CC and improve the accuracy of muscle activation prediction based on forward and inverse analyses by applying different optimization algorithms. This study proposes two novel approaches, inverse muscular dynamics with inequality constraints (IMDIC) and inverse-forward muscular dynamics with inequality constraints (IFMDIC), that not only reduce CC but also amend the computational errors of the well-known approach of extended inverse dynamics (EID).

http://dx.doi.org/10.1162/neco_a_01167
March 2019

Advancing System Performance with Redundancy: From Biological to Artificial Designs.

Neural Comput 2019 Mar 15;31(3):555-573. Epub 2019 Jan 15.

Biomedical Engineering, University of Minnesota, Minneapolis, MN 55455, U.S.A.

Redundancy is a fundamental characteristic of many biological processes, such as those in the genetic, visual, muscular, and nervous systems, yet its driving mechanism is not fully understood. Until recently, redundancy was understood only as a means to attain fault tolerance, which is reflected in the design of many man-made systems. In contrast, our previous work on redundant sensing (RS) demonstrated an example where redundancy can be engineered solely to enhance accuracy and precision.

http://dx.doi.org/10.1162/neco_a_01166
March 2019

State-Space Representations of Deep Neural Networks.

Neural Comput 2019 Mar 15;31(3):538-554. Epub 2019 Jan 15.

Department of Mechanical Engineering, Pennsylvania State University, University Park, PA 16802, U.S.A.

This letter deals with neural networks as dynamical systems governed by finite-difference equations. It shows that the introduction of k-many skip connections into network architectures, such as residual networks and additive dense networks, defines kth-order dynamical equations on the layer-wise transformations. Closed-form solutions for the state-space representations of general kth-order additive dense networks, where the concatenation operation is replaced by addition, as well as kth-order smooth networks, are found.
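
The first-order case is the familiar one: a residual block computes x_{t+1} = x_t + f(x_t), a forward-Euler step of dx/dt = f(x) with the layer index playing the role of time. A minimal sketch with invented layer parameters:

```python
import numpy as np

def residual_layer(x, W, b):
    """x_{t+1} = x_t + f(x_t): one forward-Euler step of dx/dt = f(x)."""
    return x + np.tanh(W @ x + b)

rng = np.random.default_rng(0)
d, depth = 4, 10
x = rng.normal(size=d)
for t in range(depth):                      # layer index = discrete time
    W = 0.1 * rng.normal(size=(d, d))       # invented layer parameters
    b = 0.1 * rng.normal(size=d)
    x = residual_layer(x, W, b)
print(np.round(x, 3))
```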

http://dx.doi.org/10.1162/neco_a_01165
March 2019

Gradient Descent with Identity Initialization Efficiently Learns Positive-Definite Linear Transformations by Deep Residual Networks.

Neural Comput 2019 Mar 15;31(3):477-502. Epub 2019 Jan 15.

Google, Mountain View, CA 94043, U.S.A.

We analyze algorithms for approximating a function f(x) = Φx mapping R^d to R^d using deep linear neural networks, that is, algorithms that learn a function h parameterized by matrices Θ_1, ..., Θ_L and defined by h(x) = Θ_L Θ_{L-1} ... Θ_1 x. We focus on algorithms that learn through gradient descent on the population quadratic loss in the case that the distribution over the inputs is isotropic. We provide polynomial bounds on the number of iterations for gradient descent to approximate the least-squares matrix Φ, in the case where the initial hypothesis has excess loss bounded by a small enough constant.
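
A small simulation of this setting (constants invented): with isotropic inputs, the population quadratic loss reduces to the Frobenius loss on the matrix product, so gradient descent from identity initialization can be run directly on ||Θ_L ... Θ_1 - Φ||_F^2.

```python
import numpy as np

rng = np.random.default_rng(0)
d, L, lr = 3, 4, 0.02

A = rng.normal(size=(d, d))
Phi = A @ A.T / d + np.eye(d)              # positive-definite target

def prod(mats):
    """Theta_L ... Theta_1 (empty product = identity)."""
    out = np.eye(d)
    for M in mats:
        out = M @ out
    return out

Thetas = [np.eye(d) for _ in range(L)]     # identity initialization
for step in range(3000):
    E = prod(Thetas) - Phi                 # residual of the population loss
    old = [T.copy() for T in Thetas]
    for i in range(L):                     # grad of (1/2)||E||_F^2 wrt Theta_i
        left = prod(old[i + 1:])           # Theta_L ... Theta_{i+1}
        right = prod(old[:i])              # Theta_{i-1} ... Theta_1
        Thetas[i] = old[i] - lr * left.T @ E @ right.T
print("final loss:", np.linalg.norm(prod(Thetas) - Phi))
```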

http://dx.doi.org/10.1162/neco_a_01164
March 2019

Scalable and Flexible Unsupervised Feature Selection.

Neural Comput 2019 Mar 15;31(3):517-537. Epub 2019 Jan 15.

Center for Optical Imagery Analysis and Learning, Northwestern Polytechnical University, Xi'an 710072, China

Recently, graph-based unsupervised feature selection algorithms (GUFS) have been shown to efficiently handle prevalent high-dimensional unlabeled data. One common drawback of existing graph-based approaches is that they tend to be time-consuming and to need large storage, especially when faced with the increasing size of data. Research has started using anchors to accelerate graph-based learning models for feature selection, but the hard linear constraint between the data matrix and the lower-dimensional representation is often too strict in many applications.

http://dx.doi.org/10.1162/neco_a_01163
March 2019

Forgetting Memories and Their Attractiveness.

Authors:
Enzo Marinari

Neural Comput 2019 Mar 15;31(3):503-516. Epub 2019 Jan 15.

Dipartimento di Fisica, Sapienza Università di Roma; INFN Sezione di Roma 1; and Nanotech-CNR, UOS di Roma, 00185 Roma, Italy

We study numerically the memory that forgets, introduced in 1986 by Parisi by bounding the synaptic strength, with a mechanism that avoids confusion, allows remembering the patterns learned most recently, and has a physiologically very well defined meaning. We analyze a number of features of this learning scheme for a finite number of neurons and a finite number of patterns, and we discuss how the system behaves in the large but finite limit.
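
A toy bounded-synapse palimpsest conveys the flavor (constants invented; this is not Marinari's exact protocol): Hebbian increments are hard-clipped, so newly stored patterns gradually overwrite old ones instead of producing a catastrophic blackout.

```python
import numpy as np

rng = np.random.default_rng(0)
N, P, eps, bound = 200, 60, 1.0 / 200, 0.03   # neurons, patterns, step, |J| cap

patterns = rng.choice([-1.0, 1.0], size=(P, N))
J = np.zeros((N, N))
for xi in patterns:                            # sequential, clipped Hebb rule
    J = np.clip(J + eps * np.outer(xi, xi), -bound, bound)
np.fill_diagonal(J, 0.0)

def recall_overlap(xi, sweeps=5):
    s = xi.copy()
    for _ in range(sweeps):                    # asynchronous zero-T dynamics
        for i in rng.permutation(N):
            s[i] = 1.0 if J[i] @ s >= 0 else -1.0
    return float(s @ xi) / N

# Recently learned patterns should be recalled; the oldest should have faded.
for k in (0, P // 2, P - 1):
    print(f"pattern {k:2d}: overlap {recall_overlap(patterns[k]):+.2f}")
```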

http://dx.doi.org/10.1162/neco_a_01162
March 2019

Dynamic Computational Model of the Human Spinal Cord Connectome.

Neural Comput 2019 02 21;31(2):388-416. Epub 2018 Dec 21.

Department of Neurosurgery, Beth Israel Deaconess Medical Center, Boston, MA 02215, U.S.A.

Connectomes abound, but few exist for the human spinal cord. Using anatomical data in the literature, we constructed a draft connectivity map of the human spinal cord connectome, providing a template on which the many calibrations of specialized behavior can be overlaid and the basis for an initial computational model. A thorough literature review gleaned cell types, connectivity, and indications of connection strength.

http://dx.doi.org/10.1162/neco_a_01159
February 2019

Functional Diversity in the Retina Improves the Population Code.

Neural Comput 2019 02 21;31(2):270-311. Epub 2018 Dec 21.

Princeton Neuroscience Institute, Princeton University, Princeton, NJ 08544, U.S.A.; Department of Physics, Ecole Normale Supérieure, 75005 Paris; Laboratoire de Physique Statistique, Ecole Normale Supérieure, PSL Research University, 75231 Paris; Université Paris Diderot Sorbonne Paris Cité, 75031 Paris; Sorbonne Universités UPMC Université Paris 6, 75005 Paris, France; CNRS

Within a given brain region, individual neurons exhibit a wide variety of different feature selectivities. Here, we investigated the impact of this extensive functional diversity on the population neural code. Our approach was to build optimal decoders to discriminate among stimuli using the spiking output of a real, measured neural population and to compare their performance against that of a matched, homogeneous neural population with the same number of cells and spikes.

http://dx.doi.org/10.1162/neco_a_01158
February 2019

First Passage Time Memory Lifetimes for Simple, Multistate Synapses: Beyond the Eigenvector Requirement.

Authors:
Terry Elliott

Neural Comput 2019 01 21;31(1):8-67. Epub 2018 Dec 21.

Department of Electronics and Computer Science, University of Southampton, Highfield, Southampton, SO17 1BJ, U.K.

Models of associative memory with discrete-strength synapses are palimpsests, learning new memories by forgetting old ones. Memory lifetimes can be defined by the mean first passage time (MFPT) for a perceptron's activation to fall below firing threshold. By imposing the condition that the vector of possible strengths available to a synapse is a left eigenvector of the stochastic matrix governing transitions in strength, we previously derived results for MFPTs and first passage time (FPT) distributions in models with simple, multistate synapses.

http://dx.doi.org/10.1162/neco_a_01147
January 2019

Accelerating Nonnegative Matrix Factorization Algorithms Using Extrapolation.

Neural Comput 2019 02 21;31(2):417-439. Epub 2018 Dec 21.

Department of Mathematics and Operational Research, Faculté Polytechnique, Université de Mons, 7000 Mons, Belgium

We propose a general framework to significantly accelerate algorithms for nonnegative matrix factorization (NMF). This framework is inspired by the extrapolation scheme used to accelerate gradient methods in convex optimization and by the method of parallel tangents. However, the use of extrapolation in the context of exact coordinate descent algorithms tackling the nonconvex NMF problem is novel.
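
The extrapolation step itself is tiny: after each update, jump beyond the new iterate in the direction of the last move, project back to nonnegativity, and run the next update from the extrapolated point. The sketch below attaches it to plain multiplicative updates rather than the exact coordinate-descent schemes studied in the letter, and omits the restart safeguard; beta and the sizes are illustrative.

```python
import numpy as np

def nmf_mu_extrapolated(X, r, iters=300, beta=0.3, seed=0):
    """Multiplicative-update NMF with a simple extrapolation step between
    updates (the general idea only; no restart safeguard)."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W, H = rng.random((m, r)), rng.random((r, n))
    Wp, Hp = W.copy(), H.copy()
    eps = 1e-12
    for _ in range(iters):
        We = np.maximum(W + beta * (W - Wp), 0.0)     # extrapolate, project
        He = np.maximum(H + beta * (H - Hp), 0.0)
        Wp, Hp = W, H
        H = He * (We.T @ X) / (We.T @ We @ He + eps)  # MU from extrapolated pt
        W = We * (X @ H.T) / (We @ (H @ H.T) + eps)
    return W, H

X = np.abs(np.random.default_rng(1).normal(size=(60, 40)))
W, H = nmf_mu_extrapolated(X, r=5)
print("relative error:", np.linalg.norm(X - W @ H) / np.linalg.norm(X))
```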

http://dx.doi.org/10.1162/neco_a_01157
February 2019

Learning Invariant Features in Modulatory Networks through Conflict and Ambiguity.

Neural Comput 2019 02 21;31(2):344-387. Epub 2018 Dec 21.

Department of Computer Science and Department of Psychology and Neuroscience Graduate Program, University of Southern California, Los Angeles, CA 90089, U.S.A.

This work lays the foundation for a framework of cortical learning based on the idea of a competitive column, which is inspired by the functional organization of neurons in the cortex. A column describes a prototypical organization for neurons that gives rise to an ability to learn scale, rotation, and translation-invariant features. This is empowered by a recently developed learning rule, conflict learning, which enables the network to learn over both driving and modulatory feedforward, feedback, and lateral inputs.

http://dx.doi.org/10.1162/neco_a_01156
February 2019

Calculating the Mutual Information between Two Spike Trains.

Authors:
Conor Houghton

Neural Comput 2019 02 21;31(2):330-343. Epub 2018 Dec 21.

Computational Neuroscience Unit, School of Computer Science, Electrical and Electronic Engineering, and Engineering Maths, University of Bristol, Bristol, Avon BS8 1UB, UK

It is difficult to estimate the mutual information between spike trains because established methods require more data than are usually available. Kozachenko-Leonenko estimators promise to solve this problem but include a smoothing parameter that must be set. We propose here that the smoothing parameter can be selected by maximizing the estimated unbiased mutual information.
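
For continuous data, the underlying Kozachenko-Leonenko estimator fits in a few lines; the letter's proposal concerns choosing the smoothing parameter k (and handling spike trains), which this generic gaussian sanity check does not attempt.

```python
import numpy as np
from math import lgamma, log, pi
from scipy.spatial import cKDTree
from scipy.special import digamma

def kl_entropy(X, k=3):
    """Kozachenko-Leonenko differential-entropy estimate in nats;
    k is the smoothing parameter."""
    n, d = X.shape
    eps = cKDTree(X).query(X, k + 1)[0][:, -1]       # kth-neighbor distances
    log_cd = (d / 2) * log(pi) - lgamma(d / 2 + 1)   # log volume, unit d-ball
    return digamma(n) - digamma(k) + log_cd + d * np.mean(np.log(eps))

def kl_mi(x, y, k=3):
    xy = np.column_stack([x, y])
    return (kl_entropy(x[:, None], k) + kl_entropy(y[:, None], k)
            - kl_entropy(xy, k))

rng = np.random.default_rng(0)
n, rho = 5000, 0.8
x = rng.normal(size=n)
y = rho * x + np.sqrt(1.0 - rho**2) * rng.normal(size=n)
print(f"estimate {kl_mi(x, y):.3f} nats vs true {-0.5 * np.log(1 - rho**2):.3f}")
```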

http://dx.doi.org/10.1162/neco_a_01155
February 2019

Modeling the Correlated Activity of Neural Populations: A Review.

Neural Comput 2019 02 21;31(2):233-269. Epub 2018 Dec 21.

Laboratoire de physique statistique, CNRS, Sorbonne Université, Université Paris-Diderot, and École normale supérieure, 75005 Paris, France

The principles of neural encoding and computations are inherently collective and usually involve large populations of interacting neurons with highly correlated activities. While theories of neural function have long recognized the importance of collective effects in populations of neurons, only in the past two decades has it become possible to record from many cells simultaneously using advanced experimental techniques with single-spike resolution and to relate these correlations to function and behavior. This review focuses on the modeling and inference approaches that have been recently developed to describe the correlated spiking activity of populations of neurons.

http://dx.doi.org/10.1162/neco_a_01154
February 2019

Systems of Bounded Rational Agents with Information-Theoretic Constraints.

Neural Comput 2019 02 21;31(2):440-476. Epub 2018 Dec 21.

Institute of Neural Information Processing, Faculty of Engineering, Computer Science and Psychology, University of Ulm, Ulm, Baden-Württemberg, 89081 Germany

Specialization and hierarchical organization are important features of efficient collaboration in economic, artificial, and biological systems. Here, we investigate the hypothesis that both features can be explained by the fact that each entity of such a system is limited in a certain way. We propose an information-theoretic approach based on a free energy principle in order to computationally analyze systems of bounded rational agents that deal with such limitations optimally.
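
For the single-agent case, the free-energy-optimal policy has the familiar form p(a|s) proportional to p(a) exp(beta U(s, a)) and can be found by a Blahut-Arimoto-style alternation; a minimal sketch with an invented utility table follows (the letter's subject, systems of many such agents, builds on this).

```python
import numpy as np

def bounded_rational_policy(U, p_s, beta, iters=200):
    """Blahut-Arimoto-style alternation for the free-energy trade-off
    max E[U] - (1/beta) I(S;A): p(a|s) prop. to p(a) exp(beta U(s,a))."""
    nS, nA = U.shape
    p_a = np.full(nA, 1.0 / nA)
    for _ in range(iters):
        logits = np.log(p_a)[None, :] + beta * U
        p_a_s = np.exp(logits - logits.max(axis=1, keepdims=True))
        p_a_s /= p_a_s.sum(axis=1, keepdims=True)
        p_a = p_s @ p_a_s                     # action marginal
    return p_a_s

rng = np.random.default_rng(0)
U = rng.normal(size=(4, 3))                   # utility U(s, a), invented
p_s = np.full(4, 0.25)
for beta in (0.1, 10.0):                      # small/large information budget
    print(f"beta = {beta}:\n{bounded_rational_policy(U, p_s, beta).round(2)}")
```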

http://dx.doi.org/10.1162/neco_a_01153
February 2019

Equivalence of Equilibrium Propagation and Recurrent Backpropagation.

Neural Comput 2019 02 21;31(2):312-329. Epub 2018 Dec 21.

University of Montreal, Montreal, Quebec, H3T 1N8, Canada, and CIFAR

Recurrent backpropagation and equilibrium propagation are supervised learning algorithms for fixed-point recurrent neural networks, which differ in their second phase. In the first phase, both algorithms converge to a fixed point that corresponds to the configuration where the prediction is made. In the second phase, equilibrium propagation relaxes to another nearby fixed point corresponding to smaller prediction error, whereas recurrent backpropagation uses a side network to compute error derivatives iteratively.

http://dx.doi.org/10.1162/neco_a_01160
February 2019

The Exact VC Dimension of the WiSARD n-Tuple Classifier.

Neural Comput 2019 01 21;31(1):176-207. Epub 2018 Nov 21.

Instituto Tércio Pacitti de Aplicações e Pesquisas Computacionais, Universidade Federal do Rio de Janeiro, Rio de Janeiro 21941-916, Brazil

The Wilkie, Stonham, and Aleksander recognition device (WiSARD) n-tuple classifier is a multiclass weightless neural network capable of learning a given pattern in a single step.
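
A minimal WiSARD sketch shows the single-step learning: each class owns a set of RAM nodes addressed by random n-tuples of input bits; training writes the observed addresses, and classification counts matching tuples per class. The set-based RAMs and sizes are illustrative simplifications.

```python
import numpy as np

class WiSARD:
    """Minimal WiSARD n-tuple classifier: one discriminator per class, each a
    list of RAM nodes addressed by random n-tuples of input bits."""

    def __init__(self, n_bits, n_tuple, classes, seed=0):
        rng = np.random.default_rng(seed)
        self.order = rng.permutation(n_bits)      # random input mapping
        self.n = n_tuple
        n_rams = n_bits // n_tuple
        self.mem = {c: [set() for _ in range(n_rams)] for c in classes}

    def _addresses(self, x):
        bits = np.asarray(x)[self.order]
        return [tuple(bits[i:i + self.n]) for i in range(0, len(bits), self.n)]

    def train(self, x, c):                        # single-step learning:
        for ram, addr in zip(self.mem[c], self._addresses(x)):
            ram.add(addr)                         # just write the address

    def score(self, x, c):
        return sum(a in ram for ram, a in zip(self.mem[c], self._addresses(x)))

    def predict(self, x):
        return max(self.mem, key=lambda c: self.score(x, c))

w = WiSARD(n_bits=16, n_tuple=4, classes=["A", "B"])
w.train([1] * 8 + [0] * 8, "A")
w.train([0] * 8 + [1] * 8, "B")
print(w.predict([1] * 7 + [0] * 9))               # "A": nearest stored pattern
```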

http://dx.doi.org/10.1162/neco_a_01149
January 2019

Supervised Dimensionality Reduction on Grassmannian for Image Set Recognition.

Neural Comput 2019 01 21;31(1):156-175. Epub 2018 Nov 21.

Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang 110016, China; Institutes for Robotics and Intelligent Manufacturing, Chinese Academy of Sciences, Shenyang 110169, China; Key Laboratory of Opto-Electronic Information Processing, Chinese Academy of Sciences, Shenyang 110016, China; and Key Lab of Image Understanding and Computer Vision, Liaoning Province, Shenyang 110016, China

Modeling videos and image sets by linear subspaces has achieved great success in various visual recognition tasks. However, subspaces constructed from visual data are notoriously embedded in a high-dimensional ambient space, which limits the applicability of existing techniques. This letter proposes a geometry-aware framework for constructing lower-dimensional subspaces with maximum discriminative power from high-dimensional subspaces in the supervised scenario.

http://dx.doi.org/10.1162/neco_a_01148
January 2019

Ten Simple Rules for Organizing and Running a Successful Intensive Two-Week Course.

Neural Comput 2019 01 21;31(1):1-7. Epub 2018 Nov 21.

Departments of Bioengineering and Neuroscience, University of Pennsylvania, Philadelphia, PA 19104, U.S.A.

http://dx.doi.org/10.1162/neco_a_01146
January 2019

Dual Neural Network Method for Solving Multiple Definite Integrals.

Neural Comput 2019 01 21;31(1):208-232. Epub 2018 Nov 21.

College of Sciences, Inner Mongolia University of Technology, Hohhot, Inner Mongolia 010051, China

This study examines a calculation method based on a dual neural network for solving multiple definite integrals, addressing the problems of inefficiency, inaccuracy, and difficulty in finding solutions. First, the method uses a dual neural network to construct a primitive function of the integral problem; it can approximate the primitive function of any given integrand with any precision. On this basis, a neural network calculation method that can solve multiple definite integrals whose upper and lower bounds are arbitrarily given is obtained by repeatedly applying the dual neural network to the construction of the primitive function.

http://dx.doi.org/10.1162/neco_a_01145
January 2019

Fixed Points of Competitive Threshold-Linear Networks.

Neural Comput 2019 01 21;31(1):94-155. Epub 2018 Nov 21.

School of Mathematical Sciences, University of Northern Colorado, Greeley, CO 80639, U.S.A.

Threshold-linear networks (TLNs) are models of neural networks that consist of simple, perceptron-like neurons and exhibit nonlinear dynamics determined by the network's connectivity. The fixed points of a TLN, including both stable and unstable equilibria, play a critical role in shaping its emergent dynamics. In this work, we provide two novel characterizations for the set of fixed points of a competitive TLN: the first is in terms of a simple sign condition, while the second relies on the concept of domination.
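
Concretely, a TLN evolves as dx/dt = -x + [Wx + b]_+. The sketch below integrates a small network with symmetric competitive inhibition and checks that the attractor it reaches is a fixed point (constants invented; the paper's sign and domination characterizations are not reimplemented here).

```python
import numpy as np

def simulate_tln(W, b, x0, dt=0.01, steps=5000):
    """Forward-Euler integration of dx/dt = -x + [Wx + b]_+."""
    x = x0.copy()
    for _ in range(steps):
        x += dt * (-x + np.maximum(W @ x + b, 0.0))
    return x

W = -1.5 * (np.ones((3, 3)) - np.eye(3))   # competitive (inhibitory) coupling
b = np.ones(3)

rng = np.random.default_rng(0)
for _ in range(3):
    x_inf = simulate_tln(W, b, rng.random(3))
    residual = np.linalg.norm(-x_inf + np.maximum(W @ x_inf + b, 0.0))
    print(x_inf.round(3), f"fixed-point residual {residual:.1e}")
# Different initial conditions land on different winner-take-all fixed
# points, e.g., (1, 0, 0).
```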

http://dx.doi.org/10.1162/neco_a_01151
January 2019

Decoding of Neural Data Using Cohomological Feature Extraction.

Neural Comput 2019 01 21;31(1):68-93. Epub 2018 Nov 21.

Kavli Institute for Systems Neuroscience, Norwegian University of Science and Technology, 7491 Trondheim, Norway

We introduce a novel data-driven approach to discover and decode features in the neural code coming from large population neural recordings with minimal assumptions, using cohomological feature extraction. We apply our approach to neural recordings of mice moving freely in a box, where we find a circular feature. We then observe that the decoded value corresponds well to the head direction of the mouse.

http://dx.doi.org/10.1162/neco_a_01150
January 2019

Omitted Variable Bias in GLMs of Neural Spiking Activity.

Authors:
Ian H Stevenson

Neural Comput 2018 Oct 12:1-32. Epub 2018 Oct 12.

Department of Psychological Sciences, Department of Biomedical Engineering, and CT Institute for Brain and Cognitive Sciences, University of Connecticut, Storrs, CT 06269, U.S.A.

Generalized linear models (GLMs) have a wide range of applications in systems neuroscience describing the encoding of stimulus and behavioral variables, as well as the dynamics of single neurons. However, in any given experiment, many variables that have an impact on neural activity are not observed or not modeled. Here we demonstrate, in both theory and practice, how these omitted variables can result in biased parameter estimates for the effects that are included.
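
The effect is easy to reproduce in simulation: generate spikes from a Poisson GLM with two correlated covariates and refit with one covariate omitted; the remaining coefficient absorbs part of the omitted effect. All constants are invented, and the fitter is a bare-bones gradient ascent rather than a library GLM.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000
x1 = rng.normal(size=n)                        # covariate kept in the model
x2 = 0.8 * x1 + 0.6 * rng.normal(size=n)       # correlated covariate, omitted
y = rng.poisson(np.exp(0.1 + 0.5 * x1 + 0.5 * x2))

def fit_poisson(X, y, lr=0.1, iters=2000):
    """Bare-bones gradient ascent on the Poisson log likelihood."""
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        mu = np.exp(X @ w)
        w += lr * X.T @ (y - mu) / len(y)
    return w

ones = np.ones(n)
print("full model :", fit_poisson(np.column_stack([ones, x1, x2]), y).round(2))
print("x2 omitted :", fit_poisson(np.column_stack([ones, x1]), y).round(2))
# The x1 coefficient in the reduced model absorbs part of x2's effect.
```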

http://dx.doi.org/10.1162/neco_a_01138
October 2018

Nonlinear Modeling of Neural Interaction for Spike Prediction Using the Staged Point-Process Model.

Neural Comput 2018 Oct 12:1-38. Epub 2018 Oct 12.

Department of Electronic and Computer Engineering and Department of Chemical and Biological Engineering, Hong Kong University of Science and Technology, Kowloon, Hong Kong SAR, 999077, China

Neurons communicate nonlinearly through spike activities. Generalized linear models (GLMs) describe spike activities with a cascade of a linear combination across inputs, a static nonlinear function, and an inhomogeneous Bernoulli or Poisson process, or Cox process if a self-history term is considered. This structure considers the output nonlinearity in spike generation but excludes the nonlinear interaction among input neurons.

http://dx.doi.org/10.1162/neco_a_01137
October 2018

The Information Bottleneck and Geometric Clustering.

Neural Comput 2019 Mar 12;31(3):596-612. Epub 2018 Oct 12.

Initiative for the Theoretical Sciences, CUNY Graduate Center, New York, NY 10016, U.S.A.

The information bottleneck (IB) approach to clustering takes a joint distribution p(X, Y) and maps the data X to cluster labels T, which retain maximal information about Y (Tishby, Pereira, & Bialek, 1999). This objective results in an algorithm that clusters data points based on the similarity of their conditional distributions p(Y | X). This is in contrast to classic geometric clustering algorithms such as k-means and gaussian mixture models (GMMs), which take a set of observed data points and cluster them based on their geometric (typically Euclidean) distance from one another.

http://dx.doi.org/10.1162/neco_a_01136
March 2019

Dense Associative Memory Is Robust to Adversarial Inputs.

Neural Comput 2018 Oct 12:1-17. Epub 2018 Oct 12.

Princeton Neuroscience Institute, Princeton, NJ 08540, U.S.A.

Deep neural networks (DNNs) trained in a supervised way suffer from two known problems. First, the minima of the objective function used in learning correspond to data points (also known as rubbish examples or fooling images) that lack semantic similarity with the training data. Second, a clean input can be changed by a small perturbation, often imperceptible to human vision, so that the resulting deformed input is misclassified by the network.

http://dx.doi.org/10.1162/neco_a_01143
October 2018

Bayesian Modeling of Motion Perception Using Dynamical Stochastic Textures.

Neural Comput 2018 Oct 12:1-38. Epub 2018 Oct 12.

Département de Mathématique et Applications, École Normale Supérieure, Paris 75005, France, and CNRS, France

A common practice to account for psychophysical biases in vision is to frame them as consequences of a dynamic process relying on optimal inference with respect to a generative model. The study presented here details the complete formulation of such a generative model intended to probe visual motion perception with a dynamic texture model. It is derived in a set of axiomatic steps constrained by biological plausibility.

http://dx.doi.org/10.1162/neco_a_01142
October 2018

Limitations of Proposed Signatures of Bayesian Confidence.

Neural Comput 2018 Oct 12:1-28. Epub 2018 Oct 12.

Center for Neural Science and Department of Psychology, New York University, New York, NY 10003, U.S.A.

The Bayesian model of confidence posits that confidence reflects the observer's posterior probability that the decision is correct. Hangya, Sanders, and Kepecs (2016) have proposed that researchers can test the Bayesian model by deriving qualitative signatures of Bayesian confidence (i.e., …).

http://dx.doi.org/10.1162/neco_a_01141
October 2018

Multi-Instance Dimensionality Reduction via Sparsity and Orthogonality.

Neural Comput 2018 Oct 12:1-28. Epub 2018 Oct 12.

Department of Mathematics, Hong Kong Baptist University, Hong Kong, China

We study a multi-instance (MI) learning dimensionality-reduction algorithm through sparsity and orthogonality, which is especially useful for high-dimensional MI data sets. We develop a novel algorithm to handle both sparsity and orthogonality constraints that existing methods do not handle well simultaneously. Our main idea is to formulate an optimization problem where the sparse term appears in the objective function and the orthogonality term is formed as a constraint.

http://dx.doi.org/10.1162/neco_a_01140
October 2018

Use of a Deep Belief Network for Small High-Level Abstraction Data Sets Using Artificial Intelligence with Rule Extraction.

Authors:
Yoichi Hayashi

Neural Comput 2018 Oct 12:1-18. Epub 2018 Oct 12.

Department of Computer Science, Meiji University, Kawasaki 214-8571, Japan

We describe a simple method to transfer the weights of deep neural networks (NNs) trained by a deep belief network (DBN) to the weights of a backpropagation NN (BPNN) in the recursive-rule eXtraction (Re-RX) algorithm with J48graft (Re-RX with J48graft), and we propose a new method to extract accurate and interpretable classification rules for rating category data sets. We apply this method to the Wisconsin Breast Cancer Data Set (WBCD), the Mammographic Mass Data Set, and the Dermatology Data Set, which are small, high-abstraction data sets with prior knowledge. After training on these three data sets, our proposed rule extraction method was able to extract accurate and concise rules for deep NNs trained by a DBN.

http://dx.doi.org/10.1162/neco_a_01139
October 2018

Adaptive Gaussian Process Approximation for Bayesian Inference with Expensive Likelihood Functions.

Neural Comput 2018 Sep 14:1-23. Epub 2018 Sep 14.

Institute of Natural Sciences, School of Mathematical Sciences, and MOE Key Laboratory of Scientific and Engineering Computing, Shanghai Jiao Tong University, Shanghai 200240, China

We consider Bayesian inference problems with computationally intensive likelihood functions. We propose a Gaussian process (GP)-based method to approximate the joint distribution of the unknown parameters and the data, built on recent work (Kandasamy, Schneider, & Póczos, 2015). In particular, we write the joint density approximately as a product of an approximate posterior density and an exponentiated GP surrogate.
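
The core move, evaluating the expensive log-likelihood at a handful of points, fitting a GP surrogate, and doing all further posterior computation on the surrogate, can be sketched with scikit-learn (toy 1-d problem with a flat prior; the paper's adaptive choice of evaluation points is omitted).

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def expensive_loglik(theta):
    """Stand-in for a costly likelihood evaluation."""
    return -0.5 * ((theta - 1.3) / 0.4) ** 2

# Fit a GP surrogate to a handful of expensive evaluations ...
theta_train = np.linspace(-2.0, 4.0, 9)
logp_train = expensive_loglik(theta_train)
gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), normalize_y=True)
gp.fit(theta_train[:, None], logp_train)

# ... then do all further posterior computation on the cheap surrogate.
grid = np.linspace(-2.0, 4.0, 400)
mean, _ = gp.predict(grid[:, None], return_std=True)
post = np.exp(mean - mean.max())
post /= post.sum()                     # discrete normalization on the grid
print("surrogate posterior mean:", (grid * post).sum())   # ~ 1.3
```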

http://dx.doi.org/10.1162/neco_a_01127
September 2018

Applications of Recurrent Neural Networks in Environmental Factor Forecasting: A Review.

Neural Comput 2018 Sep 14:1-27. Epub 2018 Sep 14.

College of Information and Electrical Engineering, China Agricultural University, Beijing 10083, China; Key Laboratory of Agricultural Information Acquisition Technology, Ministry of Agriculture Beijing 100125, China; and Beijing Engineering and Technology Research Centre for Internet of Things in Agriculture, Beijing 100083, China

Analysis and forecasting of sequential data, key problems in various domains of engineering and science, have attracted the attention of many researchers from different communities. When predicting the future probability of events using time series, recurrent neural networks (RNNs) are an effective tool that have the learning ability of feedforward neural networks and expand their expression ability using dynamic equations. Moreover, RNNs are able to model several computational structures.

http://dx.doi.org/10.1162/neco_a_01134
September 2018

Tensor Representation of Topographically Organized Semantic Spaces.

Neural Comput 2018 Sep 14:1-22. Epub 2018 Sep 14.

Group of Cognitive Systems Modeling, Biophysics Section, Facultad de Ciencias, Universidad de la República, Montevideo 11400, Uruguay, and Physics Department, Washington College, Chestertown, MD 21620, U.S.A.

Human brains seem to represent categories of objects and actions as locations in a continuous semantic space across the cortical surface that reflects the similarity among categories. This vision of the semantic organization of information in the brain, suggested by recent experimental findings, is in harmony with the well-known topographically organized somatotopic, retinotopic, and tonotopic maps in the cerebral cortex. Here we show that these topographies can be operationally represented with context-dependent associative memories.

http://dx.doi.org/10.1162/neco_a_01132
September 2018

Cross-Entropy Pruning for Compressing Convolutional Neural Networks.

Neural Comput 2018 Sep 14:1-22. Epub 2018 Sep 14.

School of Software, Dalian University of Technology, Dalian, Liaoning, China

The success of CNNs is accompanied by deep models and heavy storage costs. For compressing CNNs, we propose an efficient and robust pruning approach, cross-entropy pruning (CEP). Given a trained CNN model, connections are divided into groups according to their corresponding output neurons.

http://dx.doi.org/10.1162/neco_a_01131
September 2018

Unconscious Biases in Neural Populations Coding Multiple Stimuli.

Neural Comput 2018 Sep 14:1-21. Epub 2018 Sep 14.

School of Psychology and School of Mathematical Sciences, University of Nottingham, Nottingham NH7 2RD, U.K.

Throughout the nervous system, information is commonly coded in activity distributed over populations of neurons. In idealized situations where a single, continuous stimulus is encoded in a homogeneous population code, the value of the encoded stimulus can be read out without bias. However, in many situations, multiple stimuli are simultaneously present; for example, multiple motion patterns might overlap.

http://dx.doi.org/10.1162/neco_a_01130
September 2018

Robust Closed-Loop Control of a Cursor in a Person with Tetraplegia Using Gaussian Process Regression.

Neural Comput 2018 Sep 14:1-23. Epub 2018 Sep 14.

Center for Neurorestoration and Neurotechnology, Rehabilitation R&D Service, Department of Veterans Affairs Medical Center, Providence, RI 02908; Carney Institute for Brain Science and School of Engineering, Brown University, Providence, RI 02912; Center for Neurotechnology and Neurorecovery, Neurology, Massachusetts General Hospital, Boston, MA 02114; and Neurology, Harvard Medical School, Boston, MA 02115, U.S.A.

Intracortical brain-computer interfaces can enable individuals with paralysis to control external devices through voluntarily modulated brain activity. Decoding quality has previously been shown to degrade with signal nonstationarities, specifically changes in the statistics of the data between training and testing data sets. These include changes to the neural tuning profiles and baseline shifts in the firing rates of recorded neurons, as well as nonphysiological noise.

http://dx.doi.org/10.1162/neco_a_01129
September 2018

Circuit Polarity Effect of Cortical Connectivity, Activity, and Memory.

Authors:
Yoram Baram

Neural Comput 2018 Sep 14:1-35. Epub 2018 Sep 14.

Computer Science Department, Technion-Israel Institute of Technology, Haifa 32000, Israel

Experimental constraints have traditionally implied separate studies of different cortical functions, such as memory and sensory-motor control. Yet certain cortical modalities, while repeatedly observed and reported, have not been clearly identified with one cortical function or another. Specifically, while neuronal membrane and synapse polarities with respect to a certain potential value have been attracting considerable interest in recent years, the purposes of such polarities have largely remained a subject for speculation and debate.

http://dx.doi.org/10.1162/neco_a_01128
September 2018

Diplomats' Mystery Illness and Pulsed Radiofrequency/Microwave Radiation.

Neural Comput 2018 Sep 5:1-104. Epub 2018 Sep 5.

UC San Diego School of Medicine, La Jolla, CA 92093, U.S.A.

Importance: A mystery illness striking U.S. and Canadian diplomats to Cuba (and now China) "has confounded the FBI, the State Department and US intelligence agencies" (Lederman, Weissenstein, & Lee, 2017).

http://dx.doi.org/10.1162/neco_a_01133
September 2018

A Simple Model for Low Variability in Neural Spike Trains.

Neural Comput 2018 Aug 27:1-28. Epub 2018 Aug 27.

Laboratoire de physique statistique, CNRS, Sorbonne Université, Université Paris-Diderot, and École normale supérieure, 75005 Paris, France

Neural noise sets a limit to information transmission in sensory systems. In several areas, the spiking response (to a repeated stimulus) has shown a higher degree of regularity than predicted by a Poisson process. However, a simple model to explain this low variability is still lacking.
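
For orientation, the simplest route to sub-Poisson variability, though not necessarily the model proposed here, is a renewal process with an absolute refractory period; simulating repeated trials shows the spike-count Fano factor dropping below the Poisson value of 1.

```python
import numpy as np

def count_fano(rate, refrac, T=1.0, trials=2000, seed=0):
    """Fano factor of trial spike counts for a renewal process whose ISI is
    an absolute refractory period plus an exponential interval."""
    rng = np.random.default_rng(seed)
    counts = np.empty(trials)
    for k in range(trials):
        t, n = 0.0, 0
        while True:
            t += refrac + rng.exponential(1.0 / rate)
            if t > T:
                break
            n += 1
        counts[k] = n
    return counts.var() / counts.mean()

print("no refractory period :", round(count_fano(30.0, 0.000), 2))  # ~ 1
print("5 ms refractory      :", round(count_fano(30.0, 0.005), 2))  # < 1
```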

http://dx.doi.org/10.1162/neco_a_01125
August 2018

Improving Stock Closing Price Prediction Using Recurrent Neural Network and Technical Indicators.

Neural Comput 2018 10 27;30(10):2833-2854. Epub 2018 Aug 27.

Department of Automation, Tsinghua University, Beijing 100084, China

This study focuses on predicting stock closing prices by using recurrent neural networks (RNNs). A long short-term memory (LSTM) model, a type of RNN, coupled with basic stock trading data and technical indicators, is introduced as a novel method to predict the closing price of the stock market. We reduce the dimension of the technical indicators by principal component analysis (PCA).

http://dx.doi.org/10.1162/neco_a_01124
October 2018

Convex Coupled Matrix and Tensor Completion.

Neural Comput 2018 Aug 27:1-33. Epub 2018 Aug 27.

Bioinformatics Center, Institute for Chemical Research, Kyoto University, Gokasho, Uji 611-0011, Japan, and Department of Computer Science, Aalto University, Espoo 02150 Finland

We propose a set of convex low-rank-inducing norms for coupled matrices and tensors (hereafter referred to as coupled tensors), in which information is shared between the matrices and tensors through common modes. More specifically, we first propose a mixture of the overlapped trace norm and the latent norms with the matrix trace norm, and then propose a completion model regularized with these norms to impute coupled tensors. A key advantage of the proposed norms is that they are convex and can be used to find a globally optimal solution, whereas existing methods for coupled learning are nonconvex.

http://dx.doi.org/10.1162/neco_a_01123
August 2018

Hexagonal Grid Fields Optimally Encode Transitions in Spatiotemporal Sequences.

Authors:
Nicolai Waniek

Neural Comput 2018 10 27;30(10):2691-2725. Epub 2018 Aug 27.

Neuroscientific System Theory, Technical University of Munich, 80333 Munich, Germany

Grid cells of the rodent entorhinal cortex are essential for spatial navigation. Although their function is commonly believed to be either path integration or localization, the origin or purpose of their hexagonal firing fields remains disputed. Here they are proposed to arise as an optimal encoding of transitions in sequences.

http://dx.doi.org/10.1162/neco_a_01122
October 2018

Autoregressive Point Processes as Latent State-Space Models: A Moment-Closure Approach to Fluctuations and Autocorrelations.

Neural Comput 2018 10 27;30(10):2757-2780. Epub 2018 Aug 27.

Institute for Adaptive and Neural Computation, School of Informatics, University of Edinburgh, Edinburgh EH8 9AB, U.K.

Modeling and interpreting spike train data is a task of central importance in computational neuroscience, with significant translational implications. Two popular classes of data-driven models for this task are autoregressive point-process generalized linear models (PPGLM) and latent state-space models (SSM) with point-process observations. In this letter, we derive a mathematical connection between these two classes of models.

http://dx.doi.org/10.1162/neco_a_01121
October 2018