# 29 Publications


Neural Comput 2021 Mar 29;33(3):590-673. Epub 2021 Jan 29.

Nonlinear Systems Laboratory, MIT, Cambridge, MA 02139, U.S.A.

Stable concurrent learning and control of dynamical systems is the subject of adaptive control. Despite being an established field with many practical applications and a rich theory, much of the development in adaptive control for nonlinear systems revolves around a few key algorithms. By exploiting strong connections between classical adaptive nonlinear control techniques and recent progress in optimization and machine learning, we show that there exists considerable untapped potential in algorithm development for both adaptive nonlinear control and adaptive dynamics prediction. We begin by introducing first-order adaptation laws inspired by natural gradient descent and mirror descent. We prove that when there are multiple dynamics consistent with the data, these non-Euclidean adaptation laws implicitly regularize the learned model. Local geometry imposed during learning thus may be used to select parameter vectors, out of the many that will achieve perfect tracking or prediction, for desired properties such as sparsity. We apply this result to regularized dynamics predictor and observer design, and as concrete examples, we consider Hamiltonian systems, Lagrangian systems, and recurrent neural networks. We subsequently develop a variational formalism based on the Bregman Lagrangian. We show that its Euler-Lagrange equations lead to natural gradient and mirror descent-like adaptation laws with momentum, and we recover their first-order analogues in the infinite-friction limit. We illustrate our analyses with simulations demonstrating our theoretical results.
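As a companion sketch (not from the paper), the implicit-regularization claim can be reproduced on a toy underdetermined regression: plain gradient descent started from zero selects the minimum-l2-norm interpolant, while mirror descent under a hyperbolic-entropy-style potential (an illustrative stand-in for the paper's non-Euclidean adaptation laws; `beta`, `eta`, and the tiny system are assumptions) selects a sparser one.

```python
import numpy as np

# Underdetermined system: any x on the line (a, 1-a, a) satisfies A @ x = y.
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])
y = np.array([1.0, 1.0])

# Plain gradient descent from zero: iterates stay in the row space of A,
# so the limit is the minimum-l2-norm solution (1/3, 2/3, 1/3).
x_gd = np.zeros(3)
for _ in range(5000):
    x_gd -= 0.1 * A.T @ (A @ x_gd - y)

# Mirror descent with grad psi(x) = asinh(x / beta): for small beta the
# potential is l1-like, so the selected interpolant concentrates on (0, 1, 0).
beta, eta = 1e-4, 0.05
z = np.zeros(3)                 # dual variable z = grad psi(x)
for _ in range(50000):
    x_md = beta * np.sinh(z)    # primal iterate x = (grad psi)^{-1}(z)
    z -= eta * A.T @ (A @ x_md - y)
x_md = beta * np.sinh(z)
```

Both iterates fit the data exactly; only the geometry of the update decides which interpolant is selected.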

- DOI: http://dx.doi.org/10.1162/neco_a_01360

March 2021

PLoS Comput Biol 2020 08 7;16(8):e1007659. Epub 2020 Aug 7.

The Picower Institute for Learning & Memory, Massachusetts Institute of Technology (MIT), Cambridge, Massachusetts, United States of America.

The brain consists of many interconnected networks with time-varying, partially autonomous activity. There are multiple sources of noise and variation, yet activity has to eventually converge to a stable, reproducible state (or sequence of states) for its computations to make sense. We approached this problem from a control-theory perspective by applying contraction analysis to recurrent neural networks. This allowed us to find mechanisms for achieving stability in multiple connected networks with biologically realistic dynamics, including synaptic plasticity and time-varying inputs. These mechanisms included inhibitory Hebbian plasticity, excitatory anti-Hebbian plasticity, synaptic sparsity, and excitatory-inhibitory balance. Our findings shed light on how stable computations might be achieved despite biological complexity. Crucially, our analysis is not limited to the stability of fixed geometric objects in state space (e.g., points, lines, planes), but extends to the stability of state trajectories, which may be complex and time-varying.

- DOI: http://dx.doi.org/10.1371/journal.pcbi.1007659
- PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7446801

August 2020

PLoS One 2020 4;15(8):e0236661. Epub 2020 Aug 4.

Department of Mechanical Engineering, Department of Brain and Cognitive Sciences, and Nonlinear Systems Laboratory, Massachusetts Institute of Technology, Cambridge, MA, United States of America.

This paper considers the analysis of continuous time gradient-based optimization algorithms through the lens of nonlinear contraction theory. It demonstrates that in the case of a time-invariant objective, most elementary results on gradient descent based on convexity can be replaced by much more general results based on contraction. In particular, gradient descent converges to a unique equilibrium if its dynamics are contracting in any metric, with convexity of the cost corresponding to the special case of contraction in the identity metric. More broadly, contraction analysis provides new insights for the case of geodesically-convex optimization, wherein non-convex problems in Euclidean space can be transformed to convex ones posed over a Riemannian manifold. In this case, natural gradient descent converges to a unique equilibrium if it is contracting in any metric, with geodesic convexity of the cost corresponding to contraction in the natural metric. New results using semi-contraction provide additional insights into the topology of the set of optimizers in the case when multiple optima exist. Furthermore, they show how semi-contraction may be combined with specific additional information to reach broad conclusions about a dynamical system. The contraction perspective also easily extends to time-varying optimization settings and allows one to recursively build large optimization structures out of simpler elements. Extensions to natural primal-dual optimization and game-theoretic contexts further illustrate the potential reach of these new perspectives.
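A minimal numeric illustration of the convexity-to-contraction correspondence (a sketch on an assumed quadratic objective, not an example from the paper): for gradient flow on a strongly convex quadratic, the distance between any two trajectories shrinks at least as fast as exp(-lambda_min * t), the contraction rate in the identity metric.

```python
import numpy as np

# Gradient flow x' = -P x for f(x) = 0.5 x^T P x; the Jacobian of -grad f
# is -P, so the contraction rate (identity metric) is lambda_min(P).
P = np.array([[2.0, 0.5],
              [0.5, 1.0]])
lam = np.linalg.eigvalsh(P).min()

dt, T = 0.01, 5.0
xa = np.array([3.0, -1.0])      # two arbitrary initial conditions
xb = np.array([-2.0, 2.0])
d0 = np.linalg.norm(xa - xb)
for _ in range(int(T / dt)):    # explicit Euler integration of both flows
    xa -= dt * P @ xa
    xb -= dt * P @ xb
dT = np.linalg.norm(xa - xb)
```

The forward-Euler contraction factor per step is at most (1 - dt * lam), which is below exp(-lam * dt), so the continuous-time bound holds for the discretization as well.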

- PLOS: http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0236661
- PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7402485

October 2020

Neural Comput 2020 01 8;32(1):36-96. Epub 2019 Nov 8.

Nonlinear Systems Laboratory, MIT, Cambridge, MA 02139, U.S.A.

We analyze the effect of synchronization on distributed stochastic gradient algorithms. By exploiting an analogy with dynamical models of biological quorum sensing, where synchronization between agents is induced through communication with a common signal, we quantify how synchronization can significantly reduce the magnitude of the noise felt by the individual distributed agents and their spatial mean. This noise reduction is in turn associated with a reduction in the smoothing of the loss function imposed by the stochastic gradient approximation. Through simulations on model nonconvex objectives, we demonstrate that coupling can stabilize higher noise levels and improve convergence. We provide a convergence analysis for strongly convex functions by deriving a bound on the expected deviation of the spatial mean of the agents from the global minimizer for an algorithm based on quorum sensing, the same algorithm with momentum, and the elastic averaging SGD (EASGD) algorithm. We discuss extensions to new algorithms that allow each agent to broadcast its current measure of success and shape the collective computation accordingly. We supplement our theoretical analysis with numerical experiments on convolutional neural networks trained on the CIFAR-10 data set, where we note a surprising regularizing property of EASGD even when applied to the non-distributed case. This observation suggests alternative second-order in time algorithms for nondistributed optimization that are competitive with momentum methods.
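The EASGD update analyzed here is compact enough to sketch on a toy quadratic (the constants, scalar loss, and noise level are assumptions; the paper's experiments use CIFAR-10):

```python
import numpy as np

# Toy loss f(x) = 0.5 * (x - 3)^2 per agent; gradients observed with noise.
rng = np.random.default_rng(0)
p, eta, rho, sigma = 8, 0.05, 1.0, 0.5
x = rng.standard_normal(p)      # local variables, one per agent
center = 0.0                    # shared center ("quorum") variable

for _ in range(2000):
    grad = (x - 3.0) + sigma * rng.standard_normal(p)  # noisy local gradients
    x = x - eta * grad - eta * rho * (x - center)      # pull agents to center
    center = center + eta * rho * np.sum(x - center)   # center tracks agents
```

The elastic coupling terms are exactly the attraction/averaging pair of the EASGD update; the center variable ends up near the optimum with far less variance than any single agent's noisy gradient would suggest.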

- DOI: http://dx.doi.org/10.1162/neco_a_01248

January 2020

Neural Comput 2018 05 22;30(5):1359-1393. Epub 2018 Mar 22.

Institute of Neuroinformatics, University and ETH Zurich, Zurich 8057, Switzerland

Finding actions that satisfy the constraints imposed by both external inputs and internal representations is central to decision making. We demonstrate that some important classes of constraint satisfaction problems (CSPs) can be solved by networks composed of homogeneous cooperative-competitive modules that have connectivity similar to motifs observed in the superficial layers of neocortex. The winner-take-all modules are sparsely coupled by programming neurons that embed the constraints onto the otherwise homogeneous modular computational substrate. We show rules that embed any instance of the CSPs planar four-color graph coloring, maximum independent set, and Sudoku on this substrate, and we provide mathematical proofs that guarantee the graph coloring problems converge to a solution. The network is composed of nonsaturating linear threshold neurons. Their lack of right saturation allows the overall network to explore the problem space, driven through the unstable dynamics generated by recurrent excitation. The direction of exploration is steered by the constraint neurons. While many problems can be solved using only linear inhibitory constraints, network performance on hard problems benefits significantly when these negative constraints are implemented by nonlinear multiplicative inhibition. Overall, our results demonstrate the importance of instability rather than stability in network computation and offer insight into the computational role of dual inhibitory mechanisms in neural circuits.

- DOI: http://dx.doi.org/10.1162/NECO_a_01074
- PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC5930080

May 2018

J R Soc Interface 2017 10;14(135)

Department of Mechanical Engineering, Massachusetts Institute of Technology, Cambridge, MA 02139, USA.

Albatrosses can travel a thousand kilometres daily over the oceans. They extract their propulsive energy from horizontal wind shears with a flight strategy called dynamic soaring. While thermal soaring, exploited by birds of prey and sports gliders, consists of simply remaining in updrafts, extracting energy from horizontal winds necessitates redistributing momentum across the wind shear layer by means of an intricate and dynamic flight manoeuvre. Dynamic soaring has been described as a sequence of half-turns connecting upwind climbs and downwind dives through the surface shear layer. Here, we investigate the optimal (minimum-wind) flight trajectory with a combined numerical and analytic methodology. We show that, contrary to current thinking but consistent with GPS recordings of albatrosses, when the shear layer is thin the optimal trajectory is composed of small-angle, large-radius arcs. Essentially, the albatross is a flying sailboat, sequentially acting as sail and keel, and is most efficient when remaining crosswind at all times. Our analysis constitutes a general framework for dynamic soaring and, more broadly, energy extraction in complex winds. It is geared to improve the characterization of pelagic birds' flight dynamics and habitat, and could enable the development of a robotic albatross with virtually infinite range.

- DOI: http://dx.doi.org/10.1098/rsif.2017.0496
- PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC5665832

October 2017

Proc Biol Sci 2016 11;283(1843)

Centre for Theoretical Neuroscience, University of Waterloo, Waterloo, Ontario, Canada N2L3G1.

We present a spiking neuron model of the motor cortices and cerebellum of the motor control system. The model consists of anatomically organized spiking neurons encompassing premotor, primary motor, and cerebellar cortices. The model proposes novel neural computations within these areas to control a nonlinear three-link arm model that can adapt to unknown changes in arm dynamics and kinematic structure. We demonstrate the mathematical stability of both forms of adaptation, suggesting that this is a robust approach for common biological problems of changing body size (e.g. during growth), and unexpected dynamic perturbations (e.g. when moving through different media, such as water or mud). To demonstrate the plausibility of the proposed neural mechanisms, we show that the model accounts for data across 19 studies of the motor control system. These data include a mix of behavioural and neural spiking activity, across subjects performing adaptive and static tasks. Given this proposed characterization of the biological processes involved in motor control of the arm, we provide several experimentally testable predictions that distinguish our model from previous work.

- DOI: http://dx.doi.org/10.1098/rspb.2016.2134
- PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC5136600

November 2016

Sci Rep 2015 Feb 12;5:8422. Epub 2015 Feb 12.

1] Nonlinear Systems Laboratory, Massachusetts Institute of Technology, Cambridge, Massachusetts, 02139, USA [2] Department of Mechanical Engineering and Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, Massachusetts, 02139, USA.

Controlling complex networked systems to desired states is a key research goal in contemporary science. Despite recent advances in studying the impact of network topology on controllability, a comprehensive understanding of the synergistic effect of network topology and individual dynamics on controllability is still lacking. Here we offer a theoretical study with particular interest in the diversity of dynamic units characterized by different types of individual dynamics. Interestingly, we find a global symmetry accounting for the invariance of controllability with respect to exchanging the densities of any two different types of dynamic units, irrespective of the network topology. The highest controllability arises at the global symmetry point, at which different types of dynamic units are of the same density. The lowest controllability occurs when all self-loops are either completely absent or present with identical weights. These findings further improve our understanding of network controllability and have implications for devising the optimal control of complex networked systems in a wide range of fields.

- DOI: http://dx.doi.org/10.1038/srep08422
- PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4325315

February 2015

PLoS Comput Biol 2015 Jan 24;11(1):e1004039. Epub 2015 Jan 24.

Institute of Neuroinformatics, University and ETH Zurich, Zurich, Switzerland.

Previous explanations of computations performed by recurrent networks have focused on symmetrically connected saturating neurons and their convergence toward attractors. Here we analyze the behavior of asymmetrically connected networks of linear threshold neurons, whose positive response is unbounded. We show that, for a wide range of parameters, this asymmetry brings interesting and computationally useful dynamical properties. When driven by input, the network explores potential solutions through highly unstable 'expansion' dynamics. This expansion is steered and constrained by negative divergence of the dynamics, which ensures that the dimensionality of the solution space continues to reduce until an acceptable solution manifold is reached. Then the system contracts stably on this manifold towards its final solution trajectory. The unstable positive feedback and cross inhibition that underlie expansion and divergence are common motifs in molecular and neuronal networks. Therefore we propose that very simple organizational constraints that combine these motifs can lead to spontaneous computation and so to the spontaneous modification of entropy that is characteristic of living systems.

- DOI: http://dx.doi.org/10.1371/journal.pcbi.1004039
- PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4305289

January 2015

Nat Commun 2013;4:2002

Department of Physics, Center for Complex Network Research, Northeastern University, Boston, Massachusetts 02115, USA.

Our ability to control complex systems is a fundamental challenge of contemporary science. Recently introduced tools to identify the driver nodes, nodes through which we can achieve full control, predict the existence of multiple control configurations, prompting us to classify each node in a network based on its role in control. Accordingly, a node is critical, intermittent, or redundant if it acts as a driver node in all, some, or none of the control configurations. Here we develop an analytical framework to identify the category of each node, leading to the discovery of two distinct control modes in complex systems: centralized versus distributed control. We predict the control mode for an arbitrary network and show that one can alter it through small structural perturbations. The uncovered bimodality has implications from network security to organizational research and offers new insights into the dynamics and control of complex systems.

- DOI: http://dx.doi.org/10.1038/ncomms3002

December 2013

C R Biol 2013 Jan 1;336(1):13-6. Epub 2013 Mar 1.

Collège de France, Chair of molecular immunology, 11, place Marcelin-Berthelot, 75005 Paris, France.

Quorum sensing is a decision-making process used by decentralized groups, such as colonies of bacteria, to trigger a coordinated behavior. The existence of decentralized coordinated behavior has also been suggested in the immune system. In this paper, we explore the possibility of quorum sensing mechanisms in the immune response. Cytokines are good candidates as inducers of quorum sensing effects on the migration, proliferation, and differentiation of immune cells. The existence of a quorum sensing mechanism should be explored experimentally. It may provide new perspectives on immune responses and could lead to new therapeutic strategies.

- DOI: http://dx.doi.org/10.1016/j.crvi.2013.01.006

January 2013

Proc Natl Acad Sci U S A 2013 Feb 28;110(7):2460-5. Epub 2013 Jan 28.

Center for Complex Network Research and Department of Physics, Northeastern University, Boston, MA 02115, USA.

A quantitative description of a complex system is inherently limited by our ability to estimate the system's internal state from experimentally accessible outputs. Although the simultaneous measurement of all internal variables, like all metabolite concentrations in a cell, offers a complete description of a system's state, in practice experimental access is limited to only a subset of variables, or sensors. A system is called observable if we can reconstruct the system's complete internal state from its outputs. Here, we adopt a graphical approach derived from the dynamical laws that govern a system to determine the sensors that are necessary to reconstruct the full internal state of a complex system. We apply this approach to biochemical reaction systems, finding that the identified sensors are not only necessary but also sufficient for observability. The developed approach can also identify the optimal sensors for target or partial observability, helping us reconstruct selected state variables from appropriately chosen outputs, a prerequisite for optimal biomarker design. Given the fundamental role observability plays in complex systems, these results offer avenues to systematically explore the dynamics of a wide range of natural, technological and socioeconomic systems.
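The graphical criterion can be sketched as follows, assuming (as in the paper's inference-diagram construction) that each root strongly connected component, i.e. one with no incoming edges, must contain at least one sensor. The example graph, the edge-direction convention, and the choice of the lowest-numbered node per component are all illustrative:

```python
from collections import defaultdict

def sccs(n, edges):
    """Kosaraju's algorithm: returns (node -> component id, component count)."""
    g, gr = defaultdict(list), defaultdict(list)
    for u, v in edges:
        g[u].append(v)
        gr[v].append(u)
    order, seen = [], set()
    def dfs1(u):
        seen.add(u)
        for w in g[u]:
            if w not in seen:
                dfs1(w)
        order.append(u)
    for u in range(n):
        if u not in seen:
            dfs1(u)
    comp = {}
    def dfs2(u, c):
        comp[u] = c
        for w in gr[u]:
            if w not in comp:
                dfs2(w, c)
    c = 0
    for u in reversed(order):
        if u not in comp:
            dfs2(u, c)
            c += 1
    return comp, c

def sensor_nodes(n, edges):
    """One sensor per root SCC (an SCC with no incoming edges), here taken
    as the lowest-numbered node of each such component."""
    comp, c = sccs(n, edges)
    has_in = {comp[v] for u, v in edges if comp[u] != comp[v]}
    return {min(u for u in range(n) if comp[u] == k)
            for k in range(c) if k not in has_in}

# Toy inference diagram: cycle {0,1,2} and node 5 have no incoming edges,
# so each must host a sensor; the cycle {3,4} is inferable from them.
edges = [(0, 1), (1, 2), (2, 0), (2, 3), (3, 4), (4, 3), (5, 4)]
```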

- DOI: http://dx.doi.org/10.1073/pnas.1215508110
- PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3574950

February 2013

Sci Rep 2013 15;3:1067. Epub 2013 Jan 15.

Center for Complex Network Research and Department of Physics, Northeastern University, Boston, MA, USA.

A dynamical system is controllable if by imposing appropriate external signals on a subset of its nodes, it can be driven from any initial state to any desired state in finite time. Here we study the impact of various network characteristics on the minimal number of driver nodes required to control a network. We find that clustering and modularity have no discernible impact, but the symmetries of the underlying matching problem can produce linear, quadratic or no dependence on degree correlation coefficients, depending on the nature of the underlying correlations. The results are supported by numerical simulations and help narrow the observed gap between the predicted and the observed number of driver nodes in real networks.

- DOI: http://dx.doi.org/10.1038/srep01067
- PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3545232

June 2013

Phys Rev E Stat Nonlin Soft Matter Phys 2012 Oct 23;86(4 Pt 1):041914. Epub 2012 Oct 23.

Physics Department, ETH Zurich, CH-8093 Zurich, Switzerland.

Understanding synchronous and traveling-wave oscillations, particularly as they relate to transitions between different types of behavior, is a central problem in modeling biological systems. Here, we address this problem in the context of central pattern generators (CPGs). We use contraction theory to establish the global stability of a traveling-wave or synchronous oscillation, determined by the type of coupling. This opens the door to better design of coupling architectures to create the desired type of stable oscillations. We then use coupling that is both amplitude and phase dependent to create either globally stable synchronous or traveling-wave solutions. Using the CPG motor neuron network of a leech as an example, we show that while both traveling and synchronous oscillations can be achieved by several types of coupling, the transition between different types of behavior is dictated by a specific coupling architecture. In particular, it is only the "repulsive" but not the commonly used phase or rotational coupling that can explain the transition to high-frequency synchronous oscillations that have been observed in the heartbeat pattern generator of a leech. This shows that the overall dynamics of a CPG can be highly sensitive to the type of coupling used, even for coupling architectures that are widely believed to produce the same qualitative behavior.

- DOI: http://dx.doi.org/10.1103/PhysRevE.86.041914

October 2012

PLoS One 2012 27;7(9):e44459. Epub 2012 Sep 27.

Center for Complex Network Research and Department of Physics, Northeastern University, Boston, Massachusetts, United States of America.

We introduce the concept of control centrality to quantify the ability of a single node to control a directed weighted network. We calculate the distribution of control centrality for several real networks and find that it is mainly determined by the network's degree distribution. We show that in a directed network without loops the control centrality of a node is uniquely determined by its layer index or topological position in the underlying hierarchical structure of the network. Inspired by the deep relation between control centrality and hierarchical structure in a general directed network, we design an efficient attack strategy against the controllability of malicious networks.

- PLOS: http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0044459
- PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3459977

March 2013

Neural Comput 2012 Aug 17;24(8):2033-52. Epub 2012 Apr 17.

Department of Neural Systems, Max Planck Institute for Brain Research, Frankfurt am Main, Hessen 60528, Germany.

Models of cortical neuronal circuits commonly depend on inhibitory feedback to control gain, provide signal normalization, and selectively amplify signals using winner-take-all (WTA) dynamics. Such models generally assume that excitatory and inhibitory neurons are able to interact easily because their axons and dendrites are colocalized in the same small volume. However, quantitative neuroanatomical studies of the dimensions of axonal and dendritic trees of neurons in the neocortex show that this colocalization assumption is not valid. In this letter, we describe a simple modification to the WTA circuit design that permits the effects of distributed inhibitory neurons to be coupled through synchronization, and so allows a single WTA to be distributed widely in cortical space, well beyond the arborization of any single inhibitory neuron and even across different cortical areas. We prove by nonlinear contraction analysis and demonstrate by simulation that distributed WTA subsystems combined by such inhibitory synchrony are inherently stable. We show analytically that synchronization is substantially faster than winner selection. This circuit mechanism allows networks of independent WTAs to fully or partially compete with each other.
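A toy linear-threshold WTA (illustrative parameters, not the paper's distributed circuit) shows the basic selection mechanism the letter builds on: self-excitation amplifies the difference between units while shared inhibition suppresses their sum, eventually silencing the loser:

```python
import numpy as np

def wta(I, a=0.6, b=0.5, steps=200):
    """Linear-threshold units iterated as x <- max(0, I + a*x - b*sum(x)):
    self-excitation a amplifies the difference mode (gain 1/(1-a)) while
    shared inhibition b damps the sum mode, so the weaker unit is driven
    below threshold and clamps to zero."""
    x = np.zeros_like(I, dtype=float)
    for _ in range(steps):
        x = np.maximum(0.0, I + a * x - b * x.sum())
    return x

# With inputs (1.0, 0.5) the loser is silenced and the winner settles at
# I_max / (1 - a + b) = 1 / 0.9.
x = wta(np.array([1.0, 0.5]))
```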

- DOI: http://dx.doi.org/10.1162/NECO_a_00304

August 2012

Phys Rev E Stat Nonlin Soft Matter Phys 2011 Oct 24;84(4 Pt 1):041929. Epub 2011 Oct 24.

Department of Systems and Computer Engineering, University of Naples Federico II, Italy.

This paper discusses the interplay of symmetries and stability in the analysis and control of nonlinear dynamical systems and networks. Specifically, it combines standard results on symmetries and equivariance with recent convergence analysis tools based on nonlinear contraction theory and virtual dynamical systems. This synergy between structural properties (symmetries) and convergence properties (contraction) is illustrated in the contexts of network motifs arising, for example, in genetic networks, from invariance to environmental symmetries, and from imposing different patterns of synchrony in a network.

- DOI: http://dx.doi.org/10.1103/PhysRevE.84.041929

October 2011

Neural Comput 2011 Nov 6;23(11):2915-41. Epub 2011 Jul 6.

Department of Mathematics, Duke University, Durham, NC 27708, USA.

Learning and decision making in the brain are key processes critical to survival, yet they are implemented by nonideal biological building blocks that can impose significant error. We explore quantitatively how the brain might cope with this inherent source of error by taking advantage of two ubiquitous mechanisms, redundancy and synchronization. In particular, we consider a neural process whose goal is to learn a decision function by implementing a nonlinear gradient dynamics. The dynamics, however, are assumed to be corrupted by perturbations modeling the error, which might be incurred due to limitations of the biology, intrinsic neuronal noise, and imperfect measurements. We show that error, and the associated uncertainty surrounding a learned solution, can be controlled in large part by trading off synchronization strength among multiple redundant neural systems against the noise amplitude. The impact of the coupling between such redundant systems is quantified by the spectrum of the network Laplacian, and we discuss the role of network topology in synchronization and in reducing the effect of noise. We discuss a range of situations in which the mechanisms we model arise in brain science and draw attention to experimental evidence suggesting that cortical circuits capable of implementing the computations of interest here can be found on several scales. Finally, simulations comparing theoretical bounds to the relevant empirical quantities show that the theoretical estimates we derive can be tight.

- DOI: http://dx.doi.org/10.1162/NECO_a_00183

November 2011

Nature 2011 May;473(7346):167-73

Center for Complex Network Research, Department of Physics, Northeastern University, Boston, Massachusetts 02115, USA.

The ultimate proof of our understanding of natural or technological systems is reflected in our ability to control them. Although control theory offers mathematical tools for steering engineered and natural systems towards a desired state, a framework to control complex self-organized systems is lacking. Here we develop analytical tools to study the controllability of an arbitrary complex directed network, identifying the set of driver nodes with time-dependent control that can guide the system's entire dynamics. We apply these tools to several real networks, finding that the number of driver nodes is determined mainly by the network's degree distribution. We show that sparse inhomogeneous networks, which emerge in many real complex systems, are the most difficult to control, but that dense and homogeneous networks can be controlled using a few driver nodes. Counterintuitively, we find that in both model and real systems the driver nodes tend to avoid the high-degree nodes.
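The paper's minimum-inputs result reduces driver-node counting to maximum matching: N_D = max(N - |M*|, 1), where M* is a maximum matching in the bipartite graph of out-copies versus in-copies of the nodes. A minimal sketch using Kuhn's augmenting-path algorithm (the example graphs are illustrative):

```python
def min_driver_nodes(n, edges):
    """Minimum-inputs theorem of structural controllability:
    N_D = max(n - |maximum matching|, 1)."""
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
    match = [-1] * n                      # match[v] = node matched into v

    def try_kuhn(u, seen):
        """Search for an augmenting path from out-copy u."""
        for v in adj[u]:
            if v in seen:
                continue
            seen.add(v)
            if match[v] == -1 or try_kuhn(match[v], seen):
                match[v] = u
                return True
        return False

    m = sum(try_kuhn(u, set()) for u in range(n))
    return max(n - m, 1)
```

On a directed path every node but the first is matched, so one driver suffices; on a star only one leaf can be matched, so hubs do not help, consistent with the paper's finding that driver nodes tend to avoid high-degree nodes.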

- DOI: http://dx.doi.org/10.1038/nature10011

May 2011

Phys Rev E Stat Nonlin Soft Matter Phys 2010 Oct 25;82(4 Pt 1):041919. Epub 2010 Oct 25.

Department of Systems and Computer Engineering, University of Naples Federico II, Napoli, Italy.

In many natural synchronization phenomena, communication between individual elements occurs not directly but rather through the environment. One of these instances is bacterial quorum sensing, where bacteria release signaling molecules in the environment which in turn are sensed and used for population coordination. Extending this motivation to a general nonlinear dynamical system context, this paper analyzes synchronization phenomena in networks where communication and coupling between nodes are mediated by shared dynamical quantities, typically provided by the nodes' environment. Our model includes the case when the dynamics of the shared variables themselves cannot be neglected or indeed play a central part. Applications to examples from systems biology illustrate the approach.

- DOI: http://dx.doi.org/10.1103/PhysRevE.82.041919

October 2010

Neural Comput 2011 Mar 16;23(3):735-73. Epub 2010 Dec 16.

Department of Neural Systems and Coding, Max Planck Institute for Brain Research, Frankfurt am Main, Hessen 60528, Germany

The neocortex has a remarkably uniform neuronal organization, suggesting that common principles of processing are employed throughout its extent. In particular, the patterns of connectivity observed in the superficial layers of the visual cortex are consistent with the recurrent excitation and inhibitory feedback required for cooperative-competitive circuits such as the soft winner-take-all (WTA). WTA circuits offer interesting computational properties such as selective amplification, signal restoration, and decision making. But these properties depend on the signal gain derived from positive feedback, and so there is a critical trade-off between providing feedback strong enough to support the sophisticated computations while maintaining overall circuit stability. The issue of stability is all the more intriguing when one considers that the WTAs are expected to be densely distributed through the superficial layers and that they are at least partially interconnected. We consider how to reason about stability in very large distributed networks of such circuits. We approach this problem by approximating the regular cortical architecture as many interconnected cooperative-competitive modules. We demonstrate that by properly understanding the behavior of this small computational module, one can reason over the stability and convergence of very large networks composed of these modules. We obtain parameter ranges in which the WTA circuit operates in a high-gain regime, is stable, and can be aggregated arbitrarily to form large, stable networks. We use nonlinear contraction theory to establish conditions for stability in the fully nonlinear case and verify these solutions using numerical simulations. The derived bounds allow modes of operation in which the WTA network is multistable and exhibits state-dependent persistent activities. 
Our approach is sufficiently general to reason systematically about the stability of any network, biological or technological, composed of networks of small modules that express competition through shared inhibition.

- DOI: http://dx.doi.org/10.1162/NECO_a_00091

March 2011

PLoS Comput Biol 2010 Jan 15;6(1):e1000637. Epub 2010 Jan 15.

LPPA, Collège de France, Paris, France.

The functional role of synchronization has attracted much interest and debate: in particular, synchronization may allow distant sites in the brain to communicate and cooperate with each other, and therefore may play a role in temporal binding, in attention, or in sensory-motor integration mechanisms. In this article, we study another role for synchronization: the so-called "collective enhancement of precision". We argue, in a full nonlinear dynamical context, that synchronization may help protect interconnected neurons from the influence of random perturbations (intrinsic neuronal noise) which affect all neurons in the nervous system. More precisely, our main contribution is a mathematical proof that, under specific, quantified conditions, the impact of noise on individual interconnected systems and on their spatial mean can essentially be cancelled through synchronization. This property then allows reliable computations to be carried out even in the presence of significant noise (as experimentally found, e.g., in retinal ganglion cells in primates). This in turn is key to obtaining meaningful downstream signals, whether in terms of precisely-timed interaction (temporal coding), population coding, or frequency coding. Similar concepts may be applicable to questions of noise and variability in systems biology.
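The "collective enhancement of precision" can be illustrated numerically (a sketch with assumed leaky-integrator dynamics, not the paper's neuron model): all-to-all diffusive coupling of strength k shrinks each element's deviation from the spatial mean, roughly by a factor of sqrt(1 + k):

```python
import numpy as np

def spread(k, n=10, steps=2000, dt=0.01, sigma=1.0):
    """Euler-Maruyama for dx_i = (-x_i + k*(mean(x) - x_i)) dt + sigma dW_i:
    n leaky integrators with all-to-all diffusive coupling of strength k.
    Returns the time-averaged std of the agents around their spatial mean."""
    rng = np.random.default_rng(1)
    x = np.zeros(n)
    devs = []
    for _ in range(steps):
        x += dt * (-x + k * (x.mean() - x)) \
             + sigma * np.sqrt(dt) * rng.standard_normal(n)
        devs.append(x.std())
    return float(np.mean(devs[steps // 2:]))   # discard the transient

spread_uncoupled = spread(0.0)
spread_coupled = spread(50.0)
```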

| Download full-text PDF | Source |
|---|---|
| http://dx.doi.org/10.1371/journal.pcbi.1000637 | DOI Listing |
| http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2797083 | PMC |

January 2010

IEEE Trans Neural Netw 2009 Dec 13;20(12):1871-84. Epub 2009 Nov 13.

Electrical and Computer Engineering Department, University of Minnesota, Twin Cities, MN 55455 USA.

Distributed synchronization is known to occur at several scales in the brain, and has been suggested as playing a key functional role in perceptual grouping. State-of-the-art visual grouping algorithms, however, seem to give comparatively little attention to neural synchronization analogies. Based on the framework of concurrent synchronization of dynamical systems, simple networks of neural oscillators coupled with diffusive connections are proposed to solve visual grouping problems. The key idea is to embed the desired grouping properties in the choice of the diffusive couplings, so that synchronization of oscillators within each group indicates perceptual grouping of the underlying stimulative atoms, while desynchronization between groups corresponds to group segregation. Compared with state-of-the-art approaches, the same algorithm is shown to achieve promising results on several classical visual grouping problems, including point clustering, contour integration, and image segmentation.
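The grouping principle can be sketched with generic phase oscillators (an illustrative stand-in for the oscillator model used in the paper): diffusive couplings are placed only within the desired groups, so oscillators synchronize within each group while groups with different natural frequencies drift apart.

```python
import numpy as np

def settle(adjacency, omega, k=5.0, dt=1e-2, steps=4000, seed=1):
    """Diffusively coupled phase oscillators on a graph:
       dtheta_i/dt = omega_i + k * sum_j A_ij sin(theta_j - theta_i)
    Oscillators connected by an edge pull toward a common phase."""
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0.0, 2 * np.pi, size=len(omega))
    for _ in range(steps):
        diff = theta[None, :] - theta[:, None]
        theta = theta + dt * (omega + k * (adjacency * np.sin(diff)).sum(axis=1))
    return theta

# Two groups of three, coupled only within groups; group frequencies differ,
# so between-group phases never lock (group segregation).
A = np.zeros((6, 6))
A[:3, :3] = 1.0
A[3:, 3:] = 1.0
np.fill_diagonal(A, 0.0)
omega = np.array([1.0, 1.0, 1.0, 2.0, 2.0, 2.0])
theta = settle(A, omega)
```

Here the coupling graph plays the role of the "desired grouping properties embedded in the choice of the diffusive couplings": synchronization within a block signals group membership.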

| Download full-text PDF | Source |
|---|---|
| http://dx.doi.org/10.1109/TNN.2009.2031678 | DOI Listing |

December 2009

Brain Res Bull 2008 Apr 14;75(6):717-22. Epub 2008 Feb 14.

Laboratoire de Physiologie de la Perception et de l'Action, CNRS, Collège de France, 11 Place Marcelin Berthelot, 75005 Paris, France.

In many ways, roboticists and the human brain are faced with the same problem: how does one control movement from a distance? In both cases, delays in the transmission of information play an important role, either because the distances to be covered are long (imagine controlling a robot arm on the moon from a command center on Earth), or because the underlying hardware is slow (nerves transmit information much more slowly than wires, radio waves or light). Delays have a debilitating effect on feedback control systems; causes and effects can bounce back and forth between distant sites, resulting in oscillatory behavior that can grow without bound. Control engineers have developed the concept of wave variables to combat this problem: by mimicking a flexible rod, wave variables constrain movement of the master and slave during the delay, ensuring stable overall behavior [G. Niemeyer, J.J.E. Slotine, Stable adaptive teleoperation, IEEE J. Ocean Eng. 16 (1991) 152-162; G. Niemeyer, J.J.E. Slotine, Toward bilateral internet teleoperation, in: Beyond Webcams, an Introduction to Online Robots, MIT Press, 2002]. Mother Nature may, however, deserve the patent on this solution. As we show here, the properties of nerves, muscles and sensory organs combine to form a natural wave variable control system that is immune to the problems of feedback delays.
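In the wave-variable formulation cited above, velocity and force signals are re-encoded before transmission so that the communication channel itself cannot generate energy. A minimal sketch of the standard transformation (b is the wave impedance, a design parameter):

```python
import numpy as np

def to_waves(velocity, force, b=1.0):
    """Standard wave-variable encoding (Niemeyer & Slotine):
       u = (b*v + F) / sqrt(2b)   travels forward,
       w = (b*v - F) / sqrt(2b)   travels back.
    The instantaneous power F*v equals (u^2 - w^2)/2, so a channel that
    merely delays u and w can never inject energy, whatever the delay."""
    u = (b * velocity + force) / np.sqrt(2 * b)
    w = (b * velocity - force) / np.sqrt(2 * b)
    return u, w
```

The power identity is the whole point: delaying the waves only delays the power exchange, which is why the delayed teleoperation loop stays passive and stable.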

| Download full-text PDF | Source |
|---|---|
| http://dx.doi.org/10.1016/j.brainresbull.2008.01.019 | DOI Listing |

April 2008

Biol Cybern 2007 Oct 10;97(4):279-92. Epub 2007 Aug 10.

UMR 7152, Laboratoire de Physiologie de la Perception et de l'Action, CNRS-Collège de France, Paris, France.

Numerous brain regions encode variables using the spatial distribution of activity in neuronal maps. Their specific geometry is usually explained by sensory considerations only. Here we provide, for the first time, a theory involving the motor function of the superior colliculus to explain the geometry of its maps. We use six hypotheses in accordance with neurobiology to show that linear and logarithmic mappings are the only ones compatible with the generation of the saccadic motor command. This mathematical proof gives a global coherence to the neurobiological studies on which it is based. Moreover, a new solution to the problem of saccades involving both colliculi is proposed. Comparative simulations show that it is more precise than the classical one.
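For reference, the logarithmic mapping singled out by such analyses is commonly written as a complex logarithm of retinal coordinates. The sketch below uses the classical Ottes-Van Gisbergen-Eggermont parameterization from the collicular-map literature; the constants are typical values from that literature, not values from this paper.

```python
import numpy as np

def collicular_map(R, phi, Bx=1.4, By=1.8, A=3.0):
    """Map a saccade target (eccentricity R in degrees, direction phi in
    radians) to collicular surface coordinates (mm) via the complex-log
    mapping w = log((z + A)/A) with z = R*exp(i*phi), scaled by Bx, By.
    Near the fovea (R << A) the map is approximately linear; far from it,
    approximately logarithmic in R (foveal magnification)."""
    z = R * np.exp(1j * phi)
    w = np.log((z + A) / A)
    return Bx * w.real, By * w.imag
```

Along the horizontal meridian the mapping is monotone but compressive: equal steps in eccentricity map to ever-smaller steps on the collicular surface.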

| Download full-text PDF | Source |
|---|---|
| http://dx.doi.org/10.1007/s00422-007-0172-2 | DOI Listing |

October 2007

Neural Netw 2007 Jan 9;20(1):62-77. Epub 2006 Oct 9.

Département d'Informatique, Ecole Normale Supérieure, 45 rue d'Ulm, 75005 Paris, France.

In a network of dynamical systems, concurrent synchronization is a regime where multiple groups of fully synchronized elements coexist. In the brain, concurrent synchronization may occur at several scales, with multiple "rhythms" interacting and functional assemblies combining neural oscillators of many different types. Mathematically, stable concurrent synchronization corresponds to convergence to a flow-invariant linear subspace of the global state space. We derive a general condition for such convergence to occur globally and exponentially. We also show that, under mild conditions, global convergence to a concurrently synchronized regime is preserved under basic system combinations such as negative feedback or hierarchies, so that stable concurrently synchronized aggregates of arbitrary size can be constructed. Robustness of stable concurrent synchronization to variations in individual dynamics is also quantified. Simple applications of these results to classical questions in systems neuroscience and robotics are discussed.

| Download full-text PDF | Source |
|---|---|
| http://dx.doi.org/10.1016/j.neunet.2006.07.008 | DOI Listing |

January 2007

J Neurosci 2005 Mar;25(12):3181-91

Division of Health Sciences and Technology, Harvard Medical School/Massachusetts Institute of Technology (MIT), Boston, Massachusetts 02215, USA.

The mechanical stability properties of hindlimb-hindlimb wiping movements of the spinalized frog were examined. One hindlimb, the wiping limb, was implanted with 12 electromyographic (EMG) electrodes and attached to a robot that both recorded its trajectory and applied brief force perturbations. Cutaneous electrical stimulation was applied to the other hindlimb, the target limb, to evoke the hindlimb-hindlimb wiping reflex. Kinematic and EMG data from both unperturbed trials and trials in which a phasic perturbation was applied were collected from each spinalized frog. In the perturbed behaviors, we found that the initially large displacement attributable to the perturbation was compensated such that the final position was statistically indistinguishable from the unperturbed final position in all of the frogs, thus indicating the dynamic stability of these movements. This stability was robust to the range of perturbation amplitudes and nominal kinematic variation observed in this study. In addition, we investigated the extent to which intrinsic viscoelastic properties of the limb and proprioceptive feedback play a role in stabilizing the movements. No significant changes were seen in the EMGs after the perturbation. Furthermore, deafferentation of the wiping limb did not significantly affect the stability of the wiping reflex. Thus, we found that the intrinsic viscoelastic properties of the hindlimb conferred robust stability properties to the hindlimb-hindlimb wiping behavior. This stability mechanism may simplify the control required by the frog spinal motor systems to produce successful movements in an unpredictable and varying environment.

| Download full-text PDF | Source |
|---|---|
| http://dx.doi.org/10.1523/JNEUROSCI.4945-04.2005 | DOI Listing |
| http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6725085 | PMC |

March 2005

Biol Cybern 2005 Jan 10;92(1):38-53. Epub 2004 Dec 10.

Nonlinear Systems Laboratory, Massachusetts Institute of Technology, Cambridge, MA, 02139, USA,

We describe a simple yet general method to analyze networks of coupled identical nonlinear oscillators and study applications to fast synchronization, locomotion, and schooling. Specifically, we use nonlinear contraction theory to derive exact and global (rather than linearized) results on synchronization, antisynchronization, and oscillator death. The method can be applied to coupled networks of various structures and arbitrary size. For oscillators with positive definite diffusion coupling, it can be shown that synchronization always occurs globally for strong enough coupling strengths, and an explicit upper bound on the corresponding threshold can be computed through eigenvalue analysis. The discussion also extends to the case when network structure varies abruptly and asynchronously, as in "flocks" of oscillators or dynamic elements.
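The threshold behavior can be checked on a small example. The sketch below is a generic illustration (van der Pol oscillators with full-state diffusive coupling, not a system from the paper): above a sufficient coupling strength the disagreement between the two oscillators contracts to zero from distinct initial conditions, while uncoupled oscillators retain a persistent phase offset.

```python
def coupled_vdp(k, mu=1.0, dt=1e-3, steps=200000):
    """Two diffusively coupled van der Pol oscillators
       x' = y,  y' = mu*(1 - x^2)*y - x
    with coupling k*(other - self) on both states; returns the final
    state disagreement after Euler integration."""
    x1, y1 = 1.0, 0.0
    x2, y2 = -1.0, 0.5
    for _ in range(steps):
        dx1 = y1 + k * (x2 - x1)
        dy1 = mu * (1 - x1 * x1) * y1 - x1 + k * (y2 - y1)
        dx2 = y2 + k * (x1 - x2)
        dy2 = mu * (1 - x2 * x2) * y2 - x2 + k * (y1 - y2)
        x1, y1 = x1 + dt * dx1, y1 + dt * dy1
        x2, y2 = x2 + dt * dx2, y2 + dt * dy2
    return abs(x1 - x2) + abs(y1 - y2)
```

In the spirit of the eigenvalue analysis in the abstract, the difference dynamics are damped by twice the coupling strength, so once the coupling dominates the oscillators' local expansion rate, synchronization is global and exponential.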

| Download full-text PDF | Source |
|---|---|
| http://dx.doi.org/10.1007/s00422-004-0527-x | DOI Listing |

January 2005

Neural Comput 2003 Mar;15(3):621-38

Howard Hughes Medical Institute, Department of Brain and Cognitive Sciences, MIT E25-210, Cambridge, MA 02139, U.S.A.

The richness and complexity of recurrent cortical circuits is an inexhaustible source of inspiration for thinking about high-level biological computation. In past theoretical studies, constraints on the synaptic connection patterns of threshold-linear networks were found that guaranteed bounded network dynamics, convergence to attractive fixed points, and multistability, all fundamental aspects of cortical information processing. However, these conditions were only sufficient, and it remained unclear which were the minimal (necessary) conditions for convergence and multistability. We show that symmetric threshold-linear networks converge to a set of attractive fixed points if and only if the network matrix is copositive. Furthermore, the set of attractive fixed points is nonconnected (the network is multiattractive) if and only if the network matrix is not positive semidefinite. There are permitted sets of neurons that can be coactive at a stable steady state and forbidden sets that cannot. Permitted sets are clustered in the sense that subsets of permitted sets are permitted and supersets of forbidden sets are forbidden. By viewing permitted sets as memories stored in the synaptic connections, we provide a formulation of long-term memory that is more general than the traditional perspective of fixed-point attractor networks. There is a close correspondence between threshold-linear networks and networks defined by the generalized Lotka-Volterra equations.
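The permitted/forbidden-set picture can be illustrated with the smallest interesting case: two mutually inhibiting threshold-linear units. The sketch below uses illustrative weights; with the convention dx/dt = -x + [Wx + b]_+, the relevant matrix is I - W, which here is copositive (so trajectories converge) but not positive semidefinite (so the network is multiattractive). The sets {1} and {2} are permitted while {1, 2} is forbidden.

```python
import numpy as np

def settle(W, b, x0, dt=1e-3, steps=20000):
    """Euler-integrate the symmetric threshold-linear network
       dx/dt = -x + [W x + b]_+   ([.]_+ = elementwise rectification)."""
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        x = x + dt * (-x + np.maximum(W @ x + b, 0.0))
    return x

W = np.array([[0.0, -2.0],
              [-2.0, 0.0]])   # mutual inhibition
b = np.array([1.0, 1.0])
# Network matrix I - W = [[1, 2], [2, 1]]: copositive (convergence)
# but with eigenvalues 3 and -1, not positive semidefinite
# (multiple attractors: either unit alone can be the stable winner).
```

Which attractor is reached depends on the initial condition, matching the view of permitted sets as memories stored in the synaptic connections.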

| Download full-text PDF | Source |
|---|---|
| http://dx.doi.org/10.1162/089976603321192103 | DOI Listing |

March 2003