Publications by authors named "Jochen Triesch"

58 Publications

Active efficient coding explains the development of binocular vision and its failure in amblyopia.

Proc Natl Acad Sci U S A 2020 03 2;117(11):6156-6162. Epub 2020 Mar 2.

Frankfurt Institute for Advanced Studies, 60438 Frankfurt am Main, Germany.

The development of vision during the first months of life is an active process that comprises the learning of appropriate neural representations and the learning of accurate eye movements. While it has long been suspected that the two learning processes are coupled, there is still no widely accepted theoretical framework describing this joint development. Here, we propose a computational model of the development of active binocular vision to fill this gap. The model is based on a formulation of the active efficient coding theory, which proposes that eye movements as well as stimulus encoding are jointly adapted to maximize the overall coding efficiency. Under healthy conditions, the model self-calibrates to perform accurate vergence and accommodation eye movements. It exploits disparity cues to deduce the direction of defocus, which leads to coordinated vergence and accommodation responses. In a simulated anisometropic case, where the refraction power of the two eyes differs, an amblyopia-like state develops in which the foveal region of one eye is suppressed due to inputs from the other eye. After correcting for refractive errors, the model can only reach healthy performance levels if receptive fields are still plastic, in line with findings on a critical period for binocular vision development. Overall, our model offers a unifying conceptual framework for understanding the development of binocular vision.
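The core loop of active efficient coding, scoring actions by the efficiency of the sensory code itself, can be sketched in a few lines. The following toy example (our own illustration with a hand-picked basis and a random 1-D scene, not the authors' implementation) shows that the reconstruction error of a binocular code is minimized exactly when vergence aligns the two eyes' views, so the negative coding cost can serve as an intrinsic reward for vergence learning:

```python
import numpy as np

rng = np.random.default_rng(0)
scene = rng.standard_normal(200)   # toy 1-D luminance signal

def binocular_patch(vergence_error, width=20):
    """Left/right eye patches; the right eye's view is shifted by the
    residual vergence error (in pixels)."""
    left = scene[50:50 + width]
    right = scene[50 + vergence_error:50 + vergence_error + width]
    return np.concatenate([left, right])

# Stand-in for a learned efficient code: a basis that perfectly models
# zero-disparity input; the reconstruction error is the coding cost.
basis = np.concatenate([np.eye(20), np.eye(20)]) / np.sqrt(2)

def coding_cost(vergence_error):
    x = binocular_patch(vergence_error)
    coeffs, *_ = np.linalg.lstsq(basis, x, rcond=None)
    return float(np.sum((x - basis @ coeffs) ** 2))

# The intrinsic reward (negative coding cost) peaks at zero vergence
# error, so a reward-maximizing policy self-calibrates fixation.
costs = {e: coding_cost(e) for e in range(6)}
best = min(costs, key=costs.get)
print(best)  # → 0
```

In the full model the basis itself is learned by sparse coding while the policy is learned by reinforcement; this sketch only fixes the basis to expose the reward structure.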
DOI: http://dx.doi.org/10.1073/pnas.1908100117
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7084066
March 2020

A Model of Brain Folding Based on Strong Local and Weak Long-Range Connectivity Requirements.

Cereb Cortex 2020 04;30(4):2434-2451

Ernst Strüngmann Institute (ESI) for Neuroscience in Cooperation with Max Planck Society, Frankfurt am Main D-60528, Germany.

Throughout the animal kingdom, the structure of the central nervous system varies widely, from distributed ganglia in worms to compact brains with varying degrees of folding in mammals. These differences in structure may indicate fundamentally different circuit organizations. However, the folded brain is most likely a direct result of mechanical forces, given that a larger cortical surface area must pack into the restricted volume provided by the skull. Here, we introduce a computational model that, instead of modeling mechanical forces, relies on dimension reduction methods to place neurons according to specific connectivity requirements. For a simplified connectivity with strong local and weak long-range connections, our model predicts a transition from separate ganglia through smooth brain structures to heavily folded brains as the number of cortical columns increases. The model reproduces experimentally determined relationships between metrics of cortical folding and its pathological phenotypes in lissencephaly, polymicrogyria, microcephaly, autism, and schizophrenia. This suggests that the mechanical forces known to lead to cortical folding may synergistically contribute to arrangements that reduce wiring. Our model provides a unified conceptual understanding of gyrification, linking cellular connectivity and macroscopic structures in large-scale neural network models of the brain.
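The placement-by-dimension-reduction idea can be illustrated with a small spectral embedding (our own sketch with arbitrary sizes and weights, not the paper's method): given strong local and weak long-range connectivity requirements, minimizing connection-weighted squared distances places strongly connected columns close together.

```python
import numpy as np

n = 60
W = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        if 0 < abs(i - j) <= 2:
            W[i, j] = 1.0                      # strong local connections
W += 0.01 * (np.ones((n, n)) - np.eye(n))      # weak long-range connections

# Spectral embedding: choose coordinates x minimizing
# sum_ij W_ij * ||x_i - x_j||^2, i.e. the low Laplacian eigenmodes.
L = np.diag(W.sum(axis=1)) - W
vals, vecs = np.linalg.eigh(L)
coords = vecs[:, 1:3]                          # two smallest non-trivial modes

# Strongly connected neighbours land closer together than weakly
# connected distant columns.
d_local = np.linalg.norm(coords[10] - coords[11])
d_far = np.linalg.norm(coords[10] - coords[40])
print(d_local < d_far)  # → True
```

With many more columns and a fixed embedding volume, such wiring-driven placement is what produces the folded arrangements the abstract describes.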
DOI: http://dx.doi.org/10.1093/cercor/bhz249
April 2020

Editorial: Linking experimental and computational connectomics.

Netw Neurosci 2019 1;3(4):902-904. Epub 2019 Sep 1.

Frankfurt Institute for Advanced Studies, Johann Wolfgang Goethe University, Frankfurt am Main, Germany.

Large-scale in silico experimentation depends on the generation of connectomes beyond available anatomical structure. We suggest that linking research across the fields of experimental connectomics, theoretical neuroscience, and high-performance computing can enable a new generation of models bridging the gap between biophysical detail and global function. This Focus Feature on "Linking Experimental and Computational Connectomics" aims to bring together some examples from these domains as a step toward the development of more comprehensive generative models of multiscale connectomes.
DOI: http://dx.doi.org/10.1162/netn_e_00108
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6777942
September 2019

Autonomous Development of Active Binocular and Motion Vision Through Active Efficient Coding.

Front Neurorobot 2019 16;13:49. Epub 2019 Jul 16.

Frankfurt Institute for Advanced Studies, Frankfurt, Germany.

We present a model for the autonomous and simultaneous learning of active binocular and motion vision. The model is based on the Active Efficient Coding (AEC) framework, a recent generalization of classic efficient coding theories to active perception. The model learns how to efficiently encode the incoming visual signals generated by an object moving in 3-D through sparse coding. Simultaneously, it learns how to produce eye movements that further improve the efficiency of the sensory coding. This learning is driven by an intrinsic motivation to maximize the system's coding efficiency. We test our approach on the humanoid robot iCub using simulations. The model demonstrates self-calibration of accurate object fixation and tracking of moving objects. Our results show that the model keeps improving until it hits physical constraints such as camera or motor resolution, or limits on its internal coding capacity. Furthermore, we show that the emerging sensory tuning properties are in line with results on disparity, motion, and motion-in-depth tuning in the visual cortex of mammals. The model suggests that vergence and tracking eye movements can be viewed as fundamentally having the same objective of maximizing the coding efficiency of the visual system and that they can be learned and calibrated jointly through AEC.
DOI: http://dx.doi.org/10.3389/fnbot.2019.00049
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6646586
July 2019

Robot End Effector Tracking Using Predictive Multisensory Integration.

Front Neurorobot 2018 16;12:66. Epub 2018 Oct 16.

Department of Electronic and Computer Engineering, Hong Kong University of Science and Technology, Kowloon, Hong Kong.

We propose a biologically inspired model that enables a humanoid robot to learn how to track its end effector by integrating visual and proprioceptive cues as it interacts with the environment. A key novel feature of this model is the incorporation of sensorimotor prediction, where the robot predicts the sensory consequences of its current body motion as measured by proprioceptive feedback. The robot develops the ability to perform smooth pursuit-like eye movements to track its hand, both in the presence and absence of visual input, and to track exteroceptive visual motions. Our framework makes a number of advances over past work. First, our model does not require a fiducial marker to indicate the robot hand explicitly. Second, it does not require the forward kinematics of the robot arm to be known. Third, it does not depend upon pre-defined visual feature descriptors. These are learned during interaction with the environment. We demonstrate that the use of prediction in multisensory integration enables the agent to incorporate the information from proprioceptive and visual cues better. The proposed model has properties that are qualitatively similar to the characteristics of human eye-hand coordination.
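The reliability-weighted combination of a proprioceptive prediction with a visual measurement, the predict-then-correct computation that the learned network approximates, can be written as a one-step Kalman update (illustrative numbers, not the paper's learned model):

```python
def fuse(pred, pred_var, vis, vis_var):
    """Combine a proprioceptive prediction with a visual measurement,
    weighting each by its reliability (inverse variance)."""
    k = pred_var / (pred_var + vis_var)   # Kalman gain
    est = pred + k * (vis - pred)
    return est, (1.0 - k) * pred_var

# An unreliable prediction (variance 4) is pulled strongly toward a
# reliable visual measurement (variance 1); uncertainty shrinks.
est, var = fuse(pred=1.0, pred_var=4.0, vis=2.0, vis_var=1.0)
print(est, var)
```

When visual input is absent (vis_var very large), the gain goes to zero and the estimate falls back on the proprioceptive prediction, mirroring the model's ability to keep tracking its hand without vision.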
DOI: http://dx.doi.org/10.3389/fnbot.2018.00066
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6198278
October 2018

EEG-triggered TMS reveals stronger brain state-dependent modulation of motor evoked potentials at weaker stimulation intensities.

Brain Stimul 2019 Jan - Feb;12(1):110-118. Epub 2018 Sep 21.

Department of Neurology & Stroke, and Hertie Institute for Clinical Brain Research, University of Tübingen, Germany.

Background: Corticospinal excitability depends on the current brain state. The recent development of real-time EEG-triggered transcranial magnetic stimulation (EEG-TMS) allows studying this relationship in a causal fashion. Specifically, it has been shown that corticospinal excitability is higher during the scalp surface negative EEG peak compared to the positive peak of μ-oscillations in sensorimotor cortex, as indexed by larger motor evoked potentials (MEPs) for fixed stimulation intensity.

Objective: We further characterize the effect of μ-rhythm phase on the MEP input-output (IO) curve by measuring the degree of excitability modulation across a range of stimulation intensities. We furthermore seek to optimize stimulation parameters to enable discrimination of functionally relevant EEG-defined brain states.

Methods: A real-time EEG-TMS system was used to trigger MEPs during instantaneous brain-states corresponding to μ-rhythm surface positive and negative peaks with five different stimulation intensities covering an individually calibrated MEP IO curve in 15 healthy participants.

Results: MEP amplitude is modulated by μ-phase across a wide range of stimulation intensities, with larger MEPs at the surface negative peak. The largest relative MEP-modulation was observed for weak intensities, the largest absolute MEP-modulation for intermediate intensities. These results indicate a leftward shift of the MEP IO curve during the μ-rhythm negative peak.

Conclusion: The choice of stimulation intensity influences the observed degree of corticospinal excitability modulation by μ-phase. Lower stimulation intensities enable more efficient differentiation of EEG μ-phase-defined brain states.
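A sigmoidal input-output toy model (illustrative parameters, not fitted data) reproduces the logic of these results: a leftward shift of the IO curve at the negative peak yields the largest relative modulation at weak intensities and the largest absolute modulation at intermediate ones.

```python
import numpy as np

def io_curve(intensity, threshold):
    """Sigmoidal MEP input-output curve (arbitrary units)."""
    return 1.0 / (1.0 + np.exp(-(intensity - threshold) / 5.0))

intensities = np.linspace(30, 90, 121)
mep_pos = io_curve(intensities, threshold=60.0)  # surface positive peak
mep_neg = io_curve(intensities, threshold=55.0)  # negative peak: shifted left

abs_mod = mep_neg - mep_pos                      # absolute MEP modulation
rel_mod = mep_neg / mep_pos                      # relative MEP modulation

print(intensities[np.argmax(rel_mod)])  # → 30.0 (weakest intensity)
print(intensities[np.argmax(abs_mod)])  # → 57.5 (between the thresholds)
```

The shape of the curves is invented; only the qualitative consequence of a leftward shift is being demonstrated.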
DOI: http://dx.doi.org/10.1016/j.brs.2018.09.009
May 2019

Stage-Wise Learning of Reaching Using Little Prior Knowledge.

Front Robot AI 2018 1;5:110. Epub 2018 Oct 1.

CNRS, SIGMA Clermont, Institut Pascal, Université Clermont Auvergne, Clermont-Ferrand, France.

In some manipulation robotics environments, hand-programming a robot behavior is often intractable because the dynamics are difficult to model precisely and because it is hard to compute features that describe the variety of scene appearances well. Deep reinforcement learning methods partially alleviate this problem in that they can dispense with hand-crafted features for the state representation and do not need pre-computed dynamics. However, they often use prior information in the task definition in the form of shaping rewards, which guide the robot toward goal state areas but require engineering or human supervision and can lead to sub-optimal behavior. In this work we consider a complex robot reaching task with a large range of initial object positions and initial arm positions and propose a new learning approach with minimal supervision. Inspired by developmental robotics, our method consists of a weakly supervised stage-wise procedure of three tasks. First, the robot learns to fixate the object with a two-camera system. Second, it learns hand-eye coordination by learning to fixate its end effector. Third, using the knowledge acquired in the previous steps, it learns to reach the object at different positions and from a large set of initial robot joint angles. Experiments in a simulated environment show that our stage-wise framework yields reaching performance similar to that of a supervised setting, without using kinematic models, hand-crafted features, calibration parameters, or supervised visual modules.
DOI: http://dx.doi.org/10.3389/frobt.2018.00110
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7806066
October 2018

Competition for synaptic building blocks shapes synaptic plasticity.

Elife 2018 09 17;7. Epub 2018 Sep 17.

Max-Planck Institute for Brain Research, Frankfurt am Main, Germany.

Changes in the efficacies of synapses are thought to be the neurobiological basis of learning and memory. The efficacy of a synapse depends on its current number of neurotransmitter receptors. Recent experiments have shown that these receptors are highly dynamic, moving back and forth between synapses on time scales of seconds and minutes. This suggests spontaneous fluctuations in synaptic efficacies and a competition of nearby synapses for available receptors. Here we propose a mathematical model of this competition of synapses for neurotransmitter receptors from a local dendritic pool. Using minimal assumptions, the model produces a fast multiplicative scaling behavior of synapses. Furthermore, the model explains a transient form of heterosynaptic plasticity and predicts that its amount is inversely related to the size of the local receptor pool. Overall, our model reveals logistical tradeoffs during the induction of synaptic plasticity due to the rapid exchange of neurotransmitter receptors between synapses.
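The competition mechanism can be sketched as a toy rate model (our simplification with made-up rate constants, not the paper's parameters): receptors bind to empty slots at a rate proportional to the shared dendritic pool and unbind at a constant rate, so at steady state every synapse reaches the same filling fraction and efficacies scale multiplicatively with pool size.

```python
import numpy as np

slots = np.array([10.0, 20.0, 40.0])   # receptor slots per synapse
k_on, k_off, total = 0.1, 1.0, 35.0    # binding/unbinding rates, total receptors

bound = np.zeros(3)
dt = 0.01
for _ in range(20000):                 # integrate to steady state
    pool = total - bound.sum()         # free receptors in the shared pool
    bound += dt * (k_on * pool * (slots - bound) - k_off * bound)

# All synapses end up with the same filling fraction, so a change of the
# pool rescales all efficacies by a common factor (multiplicative scaling).
print(np.round(bound / slots, 3))
```

Adding receptors to one synapse's slots in this model transiently drains the pool and weakens its neighbours, which is the heterosynaptic effect the abstract describes.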
DOI: http://dx.doi.org/10.7554/eLife.37836
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6181566
September 2018

Simulation of electromyographic recordings following transcranial magnetic stimulation.

J Neurophysiol 2018 11 5;120(5):2532-2541. Epub 2018 Jul 5.

Frankfurt Institute for Advanced Studies, Frankfurt, Germany.

Transcranial magnetic stimulation (TMS) is a technique that enables noninvasive manipulation of neural activity and holds promise in both clinical and basic research settings. The effect of TMS on the motor cortex is often measured by electromyography (EMG) recordings from a small hand muscle. However, the details of how TMS generates responses measured with EMG are not completely understood. We aim to develop a biophysically detailed computational model to study the potential mechanisms underlying the generation of EMG signals following TMS. Our model comprises a feed-forward network of cortical layer 2/3 cells, which drive morphologically detailed layer 5 corticomotoneuronal cells, which in turn project to a pool of motoneurons. EMG signals are modeled as the sum of motor unit action potentials. EMG recordings from the first dorsal interosseous muscle were performed in four subjects and compared with simulated EMG signals. Our model successfully reproduces several characteristics of the experimental data. The simulated EMG signals match experimental EMG recordings in shape and size, and change with stimulus intensity and contraction level as in experimental recordings. They exhibit cortical silent periods that are close to the biological values and reveal an interesting dependence on inhibitory synaptic transmission properties. Our model predicts several characteristics of the firing patterns of neurons along the entire pathway from cortical layer 2/3 cells down to spinal motoneurons and should be considered as a viable tool for explaining and analyzing EMG signals following TMS. NEW & NOTEWORTHY A biophysically detailed model of EMG signal generation following transcranial magnetic stimulation (TMS) is proposed. Simulated EMG signals match experimental EMG recordings in shape and amplitude. Motor-evoked potential and cortical silent period properties match experimental data. The model is a viable tool to analyze, explain, and predict EMG signals following TMS.
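The modeled EMG signal is, at its core, a superposition of motor unit action potentials. A toy version of that final summation step (not the biophysical network model itself, and with an invented MUAP waveform) looks like this:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000                                   # samples of simulated EMG

# A simple biphasic motor-unit action potential (MUAP) template.
tau = np.arange(40)
muap = np.sin(2 * np.pi * tau / 40) * np.exp(-tau / 15.0)

# Surface EMG modeled as the sum of the MUAP trains of recruited units,
# each firing at random times with unit-specific amplitudes.
emg = np.zeros(n + len(muap) - 1)
for _ in range(8):                         # eight active motor units
    spikes = np.zeros(n)
    spikes[rng.integers(0, n, size=5)] = rng.uniform(0.5, 1.5, size=5)
    emg += np.convolve(spikes, muap)
emg = emg[:n]
print(emg.shape)  # → (1000,)
```

In the full model the spike times come from simulated motoneurons driven by the cortical network, and the MUAP shapes are physiologically constrained; only the summation principle is shown here.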
DOI: http://dx.doi.org/10.1152/jn.00626.2017
November 2018

Bridging structure and function: A model of sequence learning and prediction in primary visual cortex.

PLoS Comput Biol 2018 06 5;14(6):e1006187. Epub 2018 Jun 5.

Frankfurt Institute for Advanced Studies, Frankfurt am Main, Germany.

Recent experiments have demonstrated that visual cortex engages in spatio-temporal sequence learning and prediction. The cellular basis of this learning remains unclear, however. Here we present a spiking neural network model that explains a recent study on sequence learning in the primary visual cortex of rats. The model posits that the sequence learning and prediction abilities of cortical circuits result from the interaction of spike-timing dependent plasticity (STDP) and homeostatic plasticity mechanisms. It also reproduces changes in stimulus-evoked multi-unit activity during learning. Furthermore, it makes precise predictions regarding how training shapes network connectivity to establish its prediction ability. Finally, it predicts that the adapted connectivity gives rise to systematic changes in spontaneous network activity. Taken together, our model establishes a new conceptual bridge between the structure and function of cortical circuits in the context of sequence learning and prediction.
DOI: http://dx.doi.org/10.1371/journal.pcbi.1006187
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6003695
June 2018

Ongoing brain rhythms shape I-wave properties in a computational model.

Brain Stimul 2018 Jul - Aug;11(4):828-838. Epub 2018 Mar 20.

Frankfurt Institute for Advanced Studies, Johann Wolfgang Goethe University, Frankfurt am Main, Germany.

Background: Responses to transcranial magnetic stimulation (TMS) are notoriously variable. Previous studies have observed a dependence of TMS-induced responses on ongoing brain activity, for instance sensorimotor rhythms. This suggests an opportunity for the development of more effective stimulation protocols through closed-loop TMS-EEG. However, it is not yet clear how features of ongoing activity affect the responses of cortical circuits to TMS.

Objective/hypothesis: Here we investigate the dependence of TMS-responses on power and phase of ongoing oscillatory activity in a computational model of TMS-induced I-waves.

Methods: The model comprises populations of cortical layer 2/3 (L2/3) neurons and a population of cortical layer 5 (L5) neurons and generates I-waves in response to TMS. Oscillatory input to the L2/3 neurons induces rhythmic fluctuations in activity of L5 neurons. TMS pulses are simulated at different phases and amplitudes of the ongoing rhythm.

Results: The model shows a robust dependence of I-wave properties on phase and power of ongoing rhythms, with the strongest response occurring for TMS at maximal L5 depolarization. The amount of phase-modulation depends on stimulation intensity, with stronger modulation for lower intensity.

Conclusion: The model predicts that responses to TMS are highly variable for low stimulation intensities if ongoing brain rhythms are not taken into account. Closed-loop TMS-EEG holds promise for obtaining more reliable TMS effects.
DOI: http://dx.doi.org/10.1016/j.brs.2018.03.010
February 2019

Joint Learning of Binocularly Driven Saccades and Vergence by Active Efficient Coding.

Front Neurorobot 2017 3;11:58. Epub 2017 Nov 3.

Department of Electronic and Computer Engineering, Hong Kong University of Science and Technology, Hong Kong, Hong Kong.

This paper investigates two types of eye movements: vergence and saccades. Vergence eye movements are responsible for bringing the images of the two eyes into correspondence, whereas saccades drive gaze to interesting regions in the scene. Control of both vergence and saccades develops during early infancy. To date, these two types of eye movements have been studied separately. Here, we propose a computational model of an active vision system that integrates these two types of eye movements. We hypothesize that incorporating a saccade strategy driven by bottom-up attention will benefit the development of vergence control. The integrated system is based on the active efficient coding framework, which describes the joint development of sensory-processing and eye movement control to jointly optimize the coding efficiency of the sensory system. In the integrated system, we propose a binocular saliency model to drive saccades based on learned binocular feature extractors, which simultaneously encode both depth and texture information. Saliency in our model also depends on the current fixation point. This extends prior work, which focused on monocular images and saliency measures that are independent of the current fixation. Our results show that the proposed saliency-driven saccades lead to better vergence performance and faster learning in the overall system than random saccades. Faster learning is significant because it indicates that the system actively selects inputs for the most effective learning. This work suggests that saliency-driven saccades provide a scaffold for the development of vergence control during infancy.
DOI: http://dx.doi.org/10.3389/fnbot.2017.00058
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC5675843
November 2017

Personalized translational epilepsy research - Novel approaches and future perspectives: Part I: Clinical and network analysis approaches.

Epilepsy Behav 2017 11 13;76:13-18. Epub 2017 Sep 13.

Epilepsy Center Frankfurt Rhine-Main, Department of Neurology, Center of Neurology and Neurosurgery, Goethe University Frankfurt, 60528 Frankfurt, Germany; Epilepsy Center Marburg, Department of Neurology, Philipps-University Marburg, 35043 Marburg, Germany; Center for Personalized Translational Epilepsy Research (CePTER), 60528 Frankfurt, Germany.

Despite the availability of more than 15 new "antiepileptic drugs", the proportion of patients with pharmacoresistant epilepsy has remained constant at about 20-30%. Furthermore, no disease-modifying treatments shown to prevent the development of epilepsy following an initial precipitating brain injury or to reverse established epilepsy have been identified to date. This is likely in part due to the polyetiologic nature of epilepsy, which in turn requires personalized medicine approaches. Recent advances in imaging, pathology, genetics and epigenetics have led to new pathophysiological concepts and the identification of monogenic causes of epilepsy. In the context of these advances, the First International Symposium on Personalized Translational Epilepsy Research (1st ISymPTER) was held in Frankfurt on September 8, 2016, to discuss novel approaches and future perspectives for personalized translational research. These included new developments and ideas in a range of experimental and clinical areas such as deep phenotyping, quantitative brain imaging, EEG/MEG-based analysis of network dysfunction, tissue-based translational studies, innate immunity mechanisms, microRNA as treatment targets, functional characterization of genetic variants in human cell models and rodent organotypic slice cultures, personalized treatment approaches for monogenic epilepsies, blood-brain barrier dysfunction, therapeutic focal tissue modification, computational modeling for target and biomarker identification, and cost analysis in (monogenic) disease and its treatment. This report on the meeting proceedings is aimed at stimulating much needed investments of time and resources in personalized translational epilepsy research. Part I includes the clinical phenotyping and diagnostic methods, EEG network-analysis, biomarkers, and personalized treatment approaches. In Part II, experimental and translational approaches will be discussed (Bauer et al., 2017) [1].
DOI: http://dx.doi.org/10.1016/j.yebeh.2017.06.041
November 2017

Personalized translational epilepsy research - Novel approaches and future perspectives: Part II: Experimental and translational approaches.

Epilepsy Behav 2017 11 14;76:7-12. Epub 2017 Sep 14.

Epilepsy Center Frankfurt Rhine-Main, Department of Neurology, Center of Neurology and Neurosurgery, Goethe University Frankfurt, 60528 Frankfurt, Germany; Epilepsy Center Marburg, Department of Neurology, Philipps-University Marburg, 35043 Marburg, Germany; Center for Personalized Translational Epilepsy Research (CePTER), 60528 Frankfurt, Germany.

Despite the availability of more than 15 new "antiepileptic drugs", the proportion of patients with pharmacoresistant epilepsy has remained constant at about 20-30%. Furthermore, no disease-modifying treatments shown to prevent the development of epilepsy following an initial precipitating brain injury or to reverse established epilepsy have been identified to date. This is likely in part due to the polyetiologic nature of epilepsy, which in turn requires personalized medicine approaches. Recent advances in imaging, pathology, genetics, and epigenetics have led to new pathophysiological concepts and the identification of monogenic causes of epilepsy. In the context of these advances, the First International Symposium on Personalized Translational Epilepsy Research (1st ISymPTER) was held in Frankfurt on September 8, 2016, to discuss novel approaches and future perspectives for personalized translational research. These included new developments and ideas in a range of experimental and clinical areas such as deep phenotyping, quantitative brain imaging, EEG/MEG-based analysis of network dysfunction, tissue-based translational studies, innate immunity mechanisms, microRNA as treatment targets, functional characterization of genetic variants in human cell models and rodent organotypic slice cultures, personalized treatment approaches for monogenic epilepsies, blood-brain barrier dysfunction, therapeutic focal tissue modification, computational modeling for target and biomarker identification, and cost analysis in (monogenic) disease and its treatment. This report on the meeting proceedings is aimed at stimulating much needed investments of time and resources in personalized translational epilepsy research. This Part II includes the experimental and translational approaches and a discussion of the future perspectives, while the diagnostic methods, EEG network analysis, biomarkers, and personalized treatment approaches were addressed in Part I [1].
DOI: http://dx.doi.org/10.1016/j.yebeh.2017.06.040
November 2017

A model of human motor sequence learning explains facilitation and interference effects based on spike-timing dependent plasticity.

PLoS Comput Biol 2017 Aug 2;13(8):e1005632. Epub 2017 Aug 2.

Frankfurt Institute for Advanced Studies, Ruth-Moufang Str. 1, 60438 Frankfurt, Germany.

The ability to learn sequential behaviors is a fundamental property of our brains. Yet a long stream of studies, including recent experiments investigating motor sequence learning in adult human subjects, has produced a number of puzzling and seemingly contradictory results. In particular, when subjects have to learn multiple action sequences, learning is sometimes impaired by proactive and retroactive interference effects. In other situations, however, learning is accelerated, as reflected in facilitation and transfer effects. At present it is unclear what the underlying neural mechanisms are that give rise to these diverse findings. Here we show that a recently developed recurrent neural network model readily reproduces this diverse set of findings. The self-organizing recurrent neural network (SORN) model is a network of recurrently connected threshold units that combines a simplified form of spike-timing dependent plasticity (STDP) with homeostatic plasticity mechanisms ensuring network stability, namely intrinsic plasticity (IP) and synaptic normalization (SN). When trained on sequence learning tasks modeled after recent experiments, we find that it reproduces the full range of interference, facilitation, and transfer effects. We show how these effects are rooted in the network's changing internal representation of the different sequences across learning and how they depend on an interaction of training schedule and task similarity. Furthermore, since learning in the model is based on fundamental neuronal plasticity mechanisms, the model reveals how these plasticity mechanisms are ultimately responsible for the network's sequence learning abilities. In particular, we find that all three plasticity mechanisms are essential for the network to learn effective internal models of the different training sequences. This ability to form effective internal models is also the basis for the observed interference and facilitation effects. This suggests that STDP, IP, and SN may be the driving forces behind our ability to learn complex action sequences.
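A minimal caricature of the SORN's update loop (made-up sizes and learning rates; the published model additionally includes inhibitory units and external input drive) shows how the three rules interact: STDP reshapes weights, synaptic normalization keeps each neuron's total input fixed, and intrinsic plasticity moves thresholds toward a target firing rate.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
W = rng.random((n, n)) * (rng.random((n, n)) < 0.1)  # sparse weights, W[i, j]: j -> i
np.fill_diagonal(W, 0.0)
T = rng.uniform(0.0, 0.5, n)                         # unit thresholds
x = (rng.random(n) < 0.3).astype(float)              # binary activity
eta_stdp, eta_ip, h_ip = 0.001, 0.001, 0.1           # rates and target activity

for _ in range(500):
    x_new = (W @ x - T > 0).astype(float)
    # STDP: strengthen j -> i if j fired before i, weaken the reverse.
    W += eta_stdp * (np.outer(x_new, x) - np.outer(x, x_new)) * (W > 0)
    W = np.clip(W, 0.0, None)
    # Synaptic normalization: rescale each neuron's incoming weights.
    s = W.sum(axis=1, keepdims=True)
    s[s == 0] = 1.0
    W /= s
    # Intrinsic plasticity: nudge thresholds toward the target rate.
    T += eta_ip * (x_new - h_ip)
    x = x_new

rows = W.sum(axis=1)
print(np.allclose(rows[rows > 0], 1.0))  # → True
```

Because normalization conserves each neuron's total input, STDP can only redistribute synaptic strength among a neuron's inputs, which is one source of the interference effects discussed above.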
DOI: http://dx.doi.org/10.1371/journal.pcbi.1005632
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC5555713
August 2017

A multi-scale computational model of the effects of TMS on motor cortex.

F1000Res 2016 10;5:1945. Epub 2016 Aug 10.

Frankfurt Institute for Advanced Studies, Frankfurt am Main, Germany.

The detailed biophysical mechanisms through which transcranial magnetic stimulation (TMS) activates cortical circuits are still not fully understood. Here we present a multi-scale computational model to describe and explain the activation of different pyramidal cell types in motor cortex due to TMS. Our model determines precise electric fields based on an individual head model derived from magnetic resonance imaging and calculates how these electric fields activate morphologically detailed models of different neuron types. We predict neural activation patterns for different coil orientations consistent with experimental findings. Beyond this, our model allows us to calculate activation thresholds for individual neurons and precise initiation sites of individual action potentials on the neurons' complex morphologies. Specifically, our model predicts that cortical layer 3 pyramidal neurons are generally easier to stimulate than layer 5 pyramidal neurons, thereby explaining the lower stimulation thresholds observed for I-waves compared to D-waves. It also shows differences in the regions of activated cortical layer 5 and layer 3 pyramidal cells depending on coil orientation. Finally, it predicts that under standard stimulation conditions, action potentials are mostly generated at the axon initial segment of cortical pyramidal cells, with a much less important activation site being the part of a layer 5 pyramidal cell axon where it crosses the boundary between grey matter and white matter. In conclusion, our computational model offers a detailed account of the mechanisms through which TMS activates different cortical pyramidal cell types, paving the way for more targeted application of TMS based on individual brain morphology in clinical and basic research settings.
DOI: http://dx.doi.org/10.12688/f1000research.9277.3
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC5373428
August 2016

Criticality meets learning: Criticality signatures in a self-organizing recurrent neural network.

PLoS One 2017 26;12(5):e0178683. Epub 2017 May 26.

Frankfurt Institute for Advanced Studies, Johann Wolfgang Goethe University, Frankfurt am Main, Germany.

Many experiments have suggested that the brain operates close to a critical state, based on signatures of criticality such as power-law distributed neuronal avalanches. In neural network models, criticality is a dynamical state that maximizes information processing capacities, e.g. sensitivity to input, dynamical range and storage capacity, which makes it a favorable candidate state for brain function. Although models that self-organize towards a critical state have been proposed, the relation between criticality signatures and learning is still unclear. Here, we investigate signatures of criticality in a self-organizing recurrent neural network (SORN). Investigating criticality in the SORN is of particular interest because it has not been developed to show criticality. Instead, the SORN has been shown to exhibit spatio-temporal pattern learning through a combination of neural plasticity mechanisms and it reproduces a number of biological findings on neural variability and the statistics and fluctuations of synaptic efficacies. We show that, after a transient, the SORN spontaneously self-organizes into a dynamical state that shows criticality signatures comparable to those found in experiments. The plasticity mechanisms are necessary to attain that dynamical state, but not to maintain it. Furthermore, onset of external input transiently changes the slope of the avalanche distributions - matching recent experimental findings. Interestingly, the membrane noise level necessary for the occurrence of the criticality signatures reduces the model's performance in simple learning tasks. Overall, our work shows that the biologically inspired plasticity and homeostasis mechanisms responsible for the SORN's spatio-temporal learning abilities can give rise to criticality signatures in its activity when driven by random input, but these break down under the structured input of short repeating sequences.
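Avalanche analyses like the one above typically segment population activity into maximal runs of above-threshold time bins, with avalanche size defined as the summed activity within a run. A minimal version of that standard segmentation step (our sketch, not the paper's analysis code):

```python
def avalanche_sizes(activity, theta=0.0):
    """Split a spike-count time series into avalanches: maximal runs of
    bins with activity above theta; size = summed activity in the run."""
    sizes, current = [], 0
    for a in activity:
        if a > theta:
            current += a
        elif current > 0:
            sizes.append(current)
            current = 0
    if current > 0:
        sizes.append(current)
    return sizes

print(avalanche_sizes([0, 2, 3, 0, 1, 0, 0, 5]))  # → [5, 1, 5]
```

Criticality signatures are then assessed by testing whether the resulting size distribution follows a power law.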
View Article and Find Full Text PDF

Download full-text PDF

Source
http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0178683PLOS
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC5446191PMC
September 2017

Nonrandom network connectivity comes in pairs.

Netw Neurosci 2017 1;1(1):31-41. Epub 2017 Feb 1.

Frankfurt Institute for Advanced Studies (FIAS), Johann Wolfgang Goethe University, Frankfurt am Main, Germany.

Overrepresentation of bidirectional connections in local cortical networks has been repeatedly reported and is a focus of the ongoing discussion of nonrandom connectivity. Here we show in a brief mathematical analysis that in a network in which connection probabilities are symmetric in pairs, p_ij = p_ji, the occurrences of bidirectional connections and nonrandom structures are inherently linked; an overabundance of reciprocally connected pairs emerges necessarily when some pairs of neurons are more likely to be connected than others. Our numerical results imply that such overrepresentation can also be sustained when connection probabilities are only approximately symmetric.
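The core argument admits a short derivation: if each unordered pair shares one connection probability p, a reciprocal pair occurs with probability E[p^2], which exceeds the Erdős-Rényi expectation E[p]^2 by the factor 1 + Var(p)/E[p]^2 whenever p varies across pairs. A minimal simulation sketch (the two-point mixture of pair probabilities is an illustrative assumption, not taken from the paper):

```python
import random

random.seed(1)

# Each unordered pair gets one symmetric probability p_ij = p_ji, drawn here
# from a two-point mixture: 20% "likely" pairs (p = 0.3), 80% "unlikely" (p = 0.05).
n_pairs = 200_000
pair_p = [0.3 if random.random() < 0.2 else 0.05 for _ in range(n_pairs)]

bidir = 0
for p in pair_p:
    forward = random.random() < p    # i -> j
    backward = random.random() < p   # j -> i, same p: symmetric
    bidir += forward and backward

mean_p = sum(pair_p) / n_pairs
observed = (bidir / n_pairs) / mean_p ** 2                       # relative to Erdos-Renyi
predicted = (sum(p * p for p in pair_p) / n_pairs) / mean_p ** 2  # E[p^2] / E[p]^2

print(f"reciprocity overrepresentation: predicted {predicted:.2f}, observed {observed:.2f}")
```

With these numbers, E[p] = 0.1 and E[p^2] = 0.02, so reciprocal pairs are overrepresented twofold despite every individual connection being drawn independently.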
View Article and Find Full Text PDF

Download full-text PDF

Source
http://dx.doi.org/10.1162/NETN_a_00004DOI Listing
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC5869014PMC
February 2017

OpenEyeSim: A biomechanical model for simulation of closed-loop visual perception.

J Vis 2016 12;16(15):25

Frankfurt Institute for Advanced Studies, Frankfurt am Main, Germany.

We introduce OpenEyeSim, a detailed three-dimensional biomechanical model of the human extraocular eye muscles, including a visualization of a virtual environment. The main purpose of OpenEyeSim is to serve as a platform for developing models of the joint learning of visual representations and eye-movement control in the perception-action cycle. The architecture and dynamic muscle properties are based on measurements of the human oculomotor system. We show that our model can reproduce different types of eye movements. Additionally, our model can calculate the metabolic costs of eye movements and simulate eye disorders, such as different forms of strabismus. We propose OpenEyeSim as a platform for studying many of the complexities of oculomotor control and learning during normal and abnormal visual development.
View Article and Find Full Text PDF

Download full-text PDF

Source
http://dx.doi.org/10.1167/16.15.25DOI Listing
December 2016

An active-efficient-coding model of optokinetic nystagmus.

J Vis 2016 11;16(14):10

Department of Electrical and Computer Engineering, Hong Kong University of Science and Technology, Hong Kong.

Optokinetic nystagmus (OKN) is an involuntary eye movement responsible for stabilizing retinal images in the presence of relative motion between an observer and the environment. Fully understanding the development of OKN requires a neurally plausible computational model that accounts for the neural development and the behavior. To date, work in this area has been limited. We propose a neurally plausible framework for the joint development of disparity and motion tuning in the visual cortex and of optokinetic and vergence eye-movement behavior. To our knowledge, this framework is the first developmental model to describe the emergence of OKN in a behaving organism. Unlike past models, which were based on scalar models of overall activity in different neural areas, our framework models the development of the detailed connectivity both from the retinal input to the visual cortex and from the visual cortex to the motor neurons. This framework accounts for the importance of the development of normal vergence control and binocular vision in achieving normal monocular OKN behaviors. Because the model includes behavior, we can simulate the same perturbations as past experiments, such as artificially induced strabismus. The proposed model agrees both qualitatively and quantitatively with a number of findings from the literature on both binocular vision and the optokinetic reflex. Finally, our model makes quantitative predictions about OKN behavior using the same methods used to characterize OKN in the experimental literature.
View Article and Find Full Text PDF

Download full-text PDF

Source
http://dx.doi.org/10.1167/16.14.10DOI Listing
November 2016

Plasticity-Driven Self-Organization under Topological Constraints Accounts for Non-random Features of Cortical Synaptic Wiring.

PLoS Comput Biol 2016 Feb 11;12(2):e1004759. Epub 2016 Feb 11.

Frankfurt Institute for Advanced Studies, Frankfurt am Main, Germany.

Understanding the structure and dynamics of cortical connectivity is vital to understanding cortical function. Experimental data strongly suggest that local recurrent connectivity in the cortex is significantly non-random, exhibiting, for example, above-chance bidirectionality and an overrepresentation of certain triangular motifs. Additional evidence suggests a significant distance dependency to connectivity over a local scale of a few hundred microns, and particular patterns of synaptic turnover dynamics, including a heavy-tailed distribution of synaptic efficacies, a power law distribution of synaptic lifetimes, and a tendency for stronger synapses to be more stable over time. Understanding how many of these non-random features simultaneously arise would provide valuable insights into the development and function of the cortex. While previous work has modeled some of the individual features of local cortical wiring, there is no model that begins to comprehensively account for all of them. We present a spiking network model of a rodent Layer 5 cortical slice which, via the interactions of a few simple biologically motivated intrinsic, synaptic, and structural plasticity mechanisms, qualitatively reproduces these non-random effects when combined with simple topological constraints. Our model suggests that mechanisms of self-organization arising from a small number of plasticity rules provide a parsimonious explanation for numerous experimentally observed non-random features of recurrent cortical wiring. Interestingly, similar mechanisms have been shown to endow recurrent networks with powerful learning abilities, suggesting that these mechanisms are central to understanding both structure and function of cortical synaptic wiring.
View Article and Find Full Text PDF

Download full-text PDF

Source
http://dx.doi.org/10.1371/journal.pcbi.1004759DOI Listing
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4750861PMC
February 2016

Precise Synaptic Efficacy Alignment Suggests Potentiation Dominated Learning.

Front Neural Circuits 2015 13;9:90. Epub 2016 Jan 13.

Department of Neuroscience, Frankfurt Institute for Advanced Studies Frankfurt am Main, Germany.

Recent evidence suggests that parallel synapses from the same axonal branch onto the same dendritic branch have almost identical strength. It has been proposed that this alignment is only possible through learning rules that integrate activity over long time spans. However, learning mechanisms such as spike-timing-dependent plasticity (STDP) are commonly assumed to be temporally local. Here, we propose that the combination of temporally local STDP and a multiplicative synaptic normalization mechanism is sufficient to explain the alignment of parallel synapses. To address this issue, we introduce three increasingly complex models: First, we model the idealized interaction of STDP and synaptic normalization in a single neuron as a simple stochastic process and derive analytically that the alignment effect can be described by a so-called Kesten process. From this we can derive that synaptic efficacy alignment requires potentiation-dominated learning regimes. We verify these conditions in a single-neuron model with independent spiking activities but more realistic synapses. As expected, we only observe synaptic efficacy alignment for long-term potentiation-biased STDP. Finally, we explore how well the findings transfer to recurrent neural networks where the learning mechanisms interact with the correlated activity of the network. We find that due to the self-reinforcing correlations in recurrent circuits under STDP, alignment occurs for both long-term potentiation- and depression-biased STDP, because the learning will be potentiation dominated in both cases due to the potentiating events induced by correlated activity. This is in line with recent results demonstrating a dominance of potentiation over depression during waking and normalization during sleep. This leads us to predict that individual spine pairs will be more similar after sleep compared to after sleep deprivation. 
In conclusion, we show that synaptic normalization in conjunction with coordinated potentiation--in this case, from STDP in the presence of correlated pre- and post-synaptic activity--naturally leads to an alignment of parallel synapses.
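The proposed mechanism can be caricatured in a few lines: additive potentiation events shared by parallel synapses, followed by multiplicative normalization, form a Kesten-type process whose multiplicative step shrinks any difference between the pair. All constants below (step size 0.05, normalization factor 0.99, 80% shared events) are arbitrary illustrative choices, not parameters from the paper:

```python
import random

random.seed(3)

def simulate_pair(shared_frac, steps=3000):
    """Two synapses under additive potentiation plus multiplicative
    normalization, i.e. a Kesten-type process w <- a * (w + b).
    A fraction `shared_frac` of potentiation events hits both synapses
    identically (as for parallel synapses sharing pre- and postsynaptic
    spikes); the remaining events are independent."""
    w1 = w2 = 1.0
    for _ in range(steps):
        if random.random() < shared_frac:
            d = 0.05 * random.random()    # shared potentiation event
            w1 += d
            w2 += d
        else:
            w1 += 0.05 * random.random()  # independent potentiation events
            w2 += 0.05 * random.random()
        w1 *= 0.99                        # multiplicative normalization
        w2 *= 0.99
    return w1, w2

def mean_rel_diff(pairs):
    return sum(abs(a - b) / (a + b) for a, b in pairs) / len(pairs)

parallel = [simulate_pair(shared_frac=0.8) for _ in range(200)]
independent = [simulate_pair(shared_frac=0.0) for _ in range(200)]

print(mean_rel_diff(parallel), mean_rel_diff(independent))
```

Pairs with mostly shared potentiation end up markedly better aligned than pairs with fully independent potentiation, even though the normalization step is identical in both cases.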
View Article and Find Full Text PDF

Download full-text PDF

Source
http://dx.doi.org/10.3389/fncir.2015.00090DOI Listing
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4711154PMC
June 2016

Where's the Noise? Key Features of Spontaneous Activity and Neural Variability Arise through Learning in a Deterministic Network.

PLoS Comput Biol 2015 Dec 29;11(12):e1004640. Epub 2015 Dec 29.

Frankfurt Institute for Advanced Studies, Johann Wolfgang Goethe University, Frankfurt am Main, Germany.

Even in the absence of sensory stimulation the brain is spontaneously active. This background "noise" seems to be the dominant cause of the notoriously high trial-to-trial variability of neural recordings. Recent experimental observations have extended our knowledge of trial-to-trial variability and spontaneous activity in several directions: 1. Trial-to-trial variability systematically decreases following the onset of a sensory stimulus or the start of a motor act. 2. Spontaneous activity states in sensory cortex outline the region of evoked sensory responses. 3. Across development, spontaneous activity aligns itself with typical evoked activity patterns. 4. The spontaneous brain activity prior to the presentation of an ambiguous stimulus predicts how the stimulus will be interpreted. At present it is unclear how these observations relate to each other and how they arise in cortical circuits. Here we demonstrate that all of these phenomena can be accounted for by a deterministic self-organizing recurrent neural network model (SORN), which learns a predictive model of its sensory environment. The SORN comprises recurrently coupled populations of excitatory and inhibitory threshold units and learns via a combination of spike-timing dependent plasticity (STDP) and homeostatic plasticity mechanisms. Similar to balanced network architectures, units in the network show irregular activity and variable responses to inputs. Additionally, however, the SORN exhibits sequence learning abilities matching recent findings from visual cortex and the network's spontaneous activity reproduces the experimental findings mentioned above. Intriguingly, the network's behaviour is reminiscent of sampling-based probabilistic inference, suggesting that correlates of sampling-based inference can develop from the interaction of STDP and homeostasis in deterministic networks. 
We conclude that key observations on spontaneous brain activity and the variability of neural responses can be accounted for by a simple deterministic recurrent neural network which learns a predictive model of its sensory environment via a combination of generic neural plasticity mechanisms.
View Article and Find Full Text PDF

Download full-text PDF

Source
http://dx.doi.org/10.1371/journal.pcbi.1004640DOI Listing
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4694925PMC
December 2015

Modeling TMS-induced I-waves in human motor cortex.

Prog Brain Res 2015 ;222:105-24

Department of Neurology & Stroke, Hertie Institute for Clinical Brain Research, Eberhard-Karls University Tübingen, Germany.

Despite many years of research, it is still unknown how exactly transcranial magnetic stimulation activates cortical circuits. A recent computational model by Rusu et al. (2014) has attempted to shed light on potential underlying mechanisms and has successfully explained key experimental findings on I-wave physiology. Here, we critically discuss this model, point out some of its shortcomings, and suggest a number of extensions that may be necessary for it to capture additional existing and emerging data on the physiology of I-waves.
View Article and Find Full Text PDF

Download full-text PDF

Source
http://dx.doi.org/10.1016/bs.pbr.2015.07.001DOI Listing
November 2016

Intrinsic motivations drive learning of eye movements: an experiment with human adults.

PLoS One 2015 16;10(3):e0118705. Epub 2015 Mar 16.

Laboratory of Computational Embodied Neuroscience, Istituto di Scienze e Tecnologie della Cognizione, Consiglio Nazionale delle Ricerche (LOCEN-ISTC-CNR), Rome, Italy.

Intrinsic motivations drive the acquisition of knowledge and skills on the basis of novel or surprising stimuli or the pleasure of learning new skills. In so doing, they are different from extrinsic motivations that are mainly linked to drives that promote survival and reproduction. Intrinsic motivations have been implicitly exploited in several psychological experiments but, due to the lack of proper paradigms, they are rarely a direct subject of investigation. This article investigates how different intrinsic motivation mechanisms can support the learning of visual skills, such as "foveate a particular object in space", using a gaze contingency paradigm. In the experiment, participants could freely foveate objects shown on a computer screen. Foveating each of two "button" pictures caused different effects: one caused the appearance of a simple image (blue rectangle) in unexpected positions, while the other evoked the appearance of an always-novel picture (objects or animals). The experiment studied how two possible intrinsic motivation mechanisms might guide learning to foveate one or the other button picture. One mechanism is based on the sudden, surprising appearance of a familiar image at unpredicted locations, and a second one is based on the content novelty of the images. The results show the comparative effectiveness of the mechanism based on image novelty, whereas they do not support the operation of the mechanism based on the surprising location of the image appearance. Interestingly, these results were also obtained with participants who, according to a post-experiment questionnaire, had not understood the functions of the different buttons, suggesting that novelty-based intrinsic motivation mechanisms might operate even at an unconscious level.
View Article and Find Full Text PDF

Download full-text PDF

Source
http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0118705PLOS
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4361314PMC
February 2016

Top-down influences on ambiguous perception: the role of stable and transient states of the observer.

Front Hum Neurosci 2014 8;8:979. Epub 2014 Dec 8.

Frankfurt Institute for Advanced Studies, Johann Wolfgang Goethe University Frankfurt am Main, Germany.

The world as it appears to the viewer is the result of a complex process of inference performed by the brain. The validity of this apparently counter-intuitive assertion becomes evident whenever we face noisy, feeble or ambiguous visual stimulation: in these conditions, the state of the observer may play a decisive role in determining what is currently perceived. On this background, ambiguous perception and its amenability to top-down influences can be employed as an empirical paradigm to explore the principles of perception. Here we offer an overview of both classical and recent contributions on how stable and transient states of the observer can impact ambiguous perception. As to the influence of the stable states of the observer, we show that what is currently perceived can be influenced (1) by cognitive and affective aspects, such as meaning, prior knowledge, motivation, and emotional content and (2) by individual differences, such as gender, handedness, genetic inheritance, clinical conditions, and personality traits and by (3) learning and conditioning. As to the impact of transient states of the observer, we outline the effects of (4) attention and (5) voluntary control, which have attracted much empirical work along the history of ambiguous perception. In the huge literature on the topic we trace a difference between the observer's ability to control dominance (i.e., the maintenance of a specific percept in visual awareness) and reversal rate (i.e., the switching between two alternative percepts). Other transient states of the observer that have more recently drawn researchers' attention regard (6) the effects of imagery and visual working memory. (7) Furthermore, we describe the transient effects of prior history of perceptual dominance. (8) Finally, we address the currently available computational models of ambiguous perception and how they can take into account the crucial share played by the state of the observer in perceiving ambiguous displays.
View Article and Find Full Text PDF

Download full-text PDF

Source
http://dx.doi.org/10.3389/fnhum.2014.00979DOI Listing
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4259127PMC
December 2014

Slicing, sampling, and distance-dependent effects affect network measures in simulated cortical circuit structures.

Front Neuroanat 2014 5;8:125. Epub 2014 Nov 5.

Department of Neuroscience, Frankfurt Institute for Advanced Studies Frankfurt am Main, Germany.

The neuroanatomical connectivity of cortical circuits is believed to follow certain rules, the exact origins of which are still poorly understood. In particular, numerous nonrandom features, such as common neighbor clustering, overrepresentation of reciprocal connectivity, and overrepresentation of certain triadic graph motifs, have been experimentally observed in cortical slice data. Some of these data, particularly regarding bidirectional connectivity, are seemingly contradictory, and the reasons for this are unclear. Here we present a simple static geometric network model with distance-dependent connectivity on a realistic scale that naturally gives rise to certain elements of these observed behaviors, and may provide plausible explanations for some of the conflicting findings. Specifically, investigation of the model shows that experimentally measured nonrandom effects, especially bidirectional connectivity, may depend sensitively on experimental parameters such as slice thickness and sampling area, suggesting potential explanations for the seemingly conflicting experimental results.
View Article and Find Full Text PDF

Download full-text PDF

Source
http://dx.doi.org/10.3389/fnana.2014.00125DOI Listing
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4220704PMC
November 2014

Robust development of synfire chains from multiple plasticity mechanisms.

Front Comput Neurosci 2014 30;8:66. Epub 2014 Jun 30.

Frankfurt Institute for Advanced Studies Frankfurt am Main, Germany.

Biological neural networks are shaped by a large number of plasticity mechanisms operating at different time scales. How these mechanisms work together to sculpt such networks into effective information processing circuits is still poorly understood. Here we study the spontaneous development of synfire chains in a self-organizing recurrent neural network (SORN) model that combines a number of different plasticity mechanisms including spike-timing-dependent plasticity, structural plasticity, as well as homeostatic forms of plasticity. We find that the network develops an abundance of feed-forward motifs giving rise to synfire chains. The chains develop into ring-like structures, which we refer to as "synfire rings." These rings emerge spontaneously in the SORN network and allow for stable propagation of activity on a fast time scale. A single network can contain multiple non-overlapping rings suppressing each other. On a slower time scale activity switches from one synfire ring to another maintaining firing rate homeostasis. Overall, our results show how the interaction of multiple plasticity mechanisms might give rise to the robust formation of synfire chains in biological neural networks.
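The "synfire ring" idea can be illustrated with a deterministic toy network of threshold units. All numbers here (4 groups of 10 units, weight 0.15, threshold 1.0) are arbitrary choices for this sketch, not SORN parameters:

```python
n_groups, group_size = 4, 10
n = n_groups * group_size

def group(g):
    """Indices of the units in group g."""
    return range(g * group_size, (g + 1) * group_size)

# Feed-forward ring: every unit in group g projects to every unit in group (g+1) mod n_groups
W = [[0.0] * n for _ in range(n)]
for g in range(n_groups):
    for i in group(g):
        for j in group((g + 1) % n_groups):
            W[j][i] = 0.15

theta = 1.0                                          # firing threshold
x = [1 if i < group_size else 0 for i in range(n)]   # kick-start group 0

winners = []
for t in range(8):
    drive = [sum(W[j][i] * x[i] for i in range(n)) for j in range(n)]
    x = [1 if drive[j] >= theta else 0 for j in range(n)]
    winners.append([g for g in range(n_groups) if all(x[i] for i in group(g))])

print(winners)  # activity circulates around the ring: [[1], [2], [3], [0], [1], [2], [3], [0]]
```

Each unit in the next group receives 10 * 0.15 = 1.5 > 1.0, so one fully active group reliably ignites the next, and the activity packet cycles around the ring indefinitely, the stable fast-time-scale propagation described above.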
View Article and Find Full Text PDF

Download full-text PDF

Source
http://dx.doi.org/10.3389/fncom.2014.00066DOI Listing
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4074894PMC
July 2014

Spike avalanches in vivo suggest a driven, slightly subcritical brain state.

Front Syst Neurosci 2014 24;8:108. Epub 2014 Jun 24.

Physiology of Cognitive Processes, Max Planck Institute for Biological Cybernetics Tübingen, Germany.

In self-organized critical (SOC) systems, avalanche size distributions follow power laws. Power laws have also been observed for neural activity, and so it has been proposed that SOC underlies brain organization as well. Surprisingly, for spiking activity in vivo, evidence for SOC is still lacking. Therefore, we analyzed highly parallel spike recordings from awake rats and monkeys, anesthetized cats, and also local field potentials from humans. We compared these to spiking activity from two established critical models: the Bak-Tang-Wiesenfeld model, and a stochastic branching model. We found fundamental differences between the neural and the model activity. These differences could be overcome for both models through a combination of three modifications: (1) subsampling, (2) increasing the input to the model (this way eliminating the separation of time scales, which is fundamental to SOC and its avalanche definition), and (3) making the model slightly sub-critical. The match between the neural activity and the modified models held not only for the classical avalanche size distributions and estimated branching parameters, but also for two novel measures (mean avalanche size, and frequency of single spikes), and for the dependence of all these measures on the temporal bin size. Our results suggest that neural activity in vivo shows a mélange of avalanches, and not temporally separated ones, and that their global activity propagation can be approximated by the principle that one spike on average triggers a little less than one spike in the next step. This implies that neural activity does not reflect a SOC state but a slightly sub-critical regime without a separation of time scales. Potential advantages of this regime may be faster information processing, and a safety margin from super-criticality, which has been linked to epilepsy.
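The summary principle, one spike triggers on average a little less than one spike while external drive keeps activity from dying out, can be sketched as a driven branching process. The parameter values below are illustrative choices, not fits to the recordings:

```python
import random

random.seed(7)

def driven_branching(sigma, steps, drive=1):
    """Driven branching process: each of the a_t active units triggers
    Binomial offspring with mean `sigma` at t+1, and `drive` new externally
    evoked events arrive each step, so there is no separation of time scales
    and "avalanches" merge into continuous activity."""
    a, trace = drive, []
    for _ in range(steps):
        offspring = sum(1 for _ in range(2 * a) if random.random() < sigma / 2)
        a = offspring + drive
        trace.append(a)
    return trace

# Slightly subcritical: one spike triggers on average a little less than one spike.
trace = driven_branching(sigma=0.9, steps=20_000)
mean_activity = sum(trace) / len(trace)
print(f"mean activity: {mean_activity:.1f}  (theory: drive / (1 - sigma) = 10.0)")
```

Because sigma < 1, activity neither explodes nor vanishes: it fluctuates around the stationary mean drive/(1 - sigma), the driven, slightly subcritical regime suggested by the data.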
View Article and Find Full Text PDF

Download full-text PDF

Source
http://dx.doi.org/10.3389/fnsys.2014.00108DOI Listing
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4068003PMC
July 2014

Emergence of task-dependent representations in working memory circuits.

Front Comput Neurosci 2014 28;8:57. Epub 2014 May 28.

Frankfurt Institute for Advanced Studies Frankfurt am Main, Germany ; Physics Department, Goethe University Frankfurt am Main, Germany.

A wealth of experimental evidence suggests that working memory circuits preferentially represent information that is behaviorally relevant. Still, we are missing a mechanistic account of how these representations come about. Here we provide a simple explanation for a range of experimental findings, in light of prefrontal circuits adapting to task constraints by reward-dependent learning. In particular, we model a neural network shaped by reward-modulated spike-timing dependent plasticity (r-STDP) and homeostatic plasticity (intrinsic excitability and synaptic scaling). We show that the experimentally-observed neural representations naturally emerge in an initially unstructured circuit as it learns to solve several working memory tasks. These results point to a critical, and previously unappreciated, role for reward-dependent learning in shaping prefrontal cortex activity.
View Article and Find Full Text PDF

Download full-text PDF

Source
http://dx.doi.org/10.3389/fncom.2014.00057DOI Listing
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4035833PMC
June 2014