Publications by authors named "Matthew K Leonard"

31 Publications

Cortical Encoding of Manual Articulatory and Linguistic Features in American Sign Language.

Curr Biol 2020 Nov 3;30(22):4342-4351.e3. Epub 2020 Sep 3.

Department of Neurological Surgery, University of California, San Francisco, San Francisco, CA, USA; Center for Integrative Neuroscience, University of California, San Francisco, San Francisco, CA, USA; Weill Institute for Neurosciences, University of California, San Francisco, San Francisco, CA, USA. Electronic address:

The fluent production of a signed language requires exquisite coordination of sensory, motor, and cognitive processes. Similar to speech production, language produced with the hands by fluent signers appears effortless but reflects the precise coordination of both large-scale and local cortical networks. The organization and representational structure of the sensorimotor features underlying sign language phonology in these networks remain unknown. Here, we present a unique case study of high-density electrocorticography (ECoG) recordings from the cortical surface of a profoundly deaf signer during an awake craniotomy. While neural activity was recorded from sensorimotor cortex, the participant produced a large variety of movements in linguistic and transitional movement contexts. We found that at both the single-electrode and neural population levels, high-gamma activity reflected tuning for particular hand, arm, and face movements, which were organized along dimensions that are relevant for phonology in sign language. Decoding of manual articulatory features revealed a clear functional organization and population dynamics for these highly practiced movements. Furthermore, neural activity clearly differentiated linguistic and transitional movements, demonstrating encoding of language-relevant articulatory features. These results provide a novel and unique view of the fine-scale dynamics of complex and meaningful sensorimotor actions.
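
To make the population decoding analysis concrete, the following is a minimal sketch of classifying movement categories from trial-by-trial high-gamma activity. The array shapes, the four-class labeling, and the choice of a shrinkage LDA classifier are illustrative assumptions on simulated data, not the authors' pipeline.

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_electrodes, n_timepoints = 200, 64, 50

# Simulated z-scored high-gamma amplitude (trials x electrodes x time),
# with one movement label (e.g., a handshape class) per trial.
hg = rng.standard_normal((n_trials, n_electrodes, n_timepoints))
labels = rng.integers(0, 4, size=n_trials)

# Flatten spatiotemporal features and cross-validate a linear decoder.
X = hg.reshape(n_trials, -1)
clf = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")
acc = cross_val_score(clf, X, labels, cv=5)
print(f"decoding accuracy: {acc.mean():.2f} (chance ~0.25)")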

DOI: http://dx.doi.org/10.1016/j.cub.2020.08.048
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7674262
November 2020

Non-invasive peripheral nerve stimulation selectively enhances speech category learning in adults.

NPJ Sci Learn 2020 6;5:12. Epub 2020 Aug 6.

Department of Communication Science and Disorders, University of Pittsburgh, Pittsburgh, PA 15260 USA.

Adults struggle to learn non-native speech contrasts even after years of exposure. While laboratory-based training approaches yield learning, the optimal training conditions for maximizing speech learning in adulthood are currently unknown. Vagus nerve stimulation has been shown to prime adult sensory-perceptual systems towards plasticity in animal models. Precise temporal pairing with auditory stimuli can enhance auditory cortical representations with a high degree of specificity. Here, we examined whether sub-perceptual threshold transcutaneous vagus nerve stimulation (tVNS), paired with non-native speech sounds, enhances speech category learning in adults. Twenty-four native English speakers were trained to identify non-native Mandarin tone categories. Across two groups, tVNS was paired with the tone categories that were easier or harder to learn. A control group received no stimulation but followed an identical thresholding procedure as the intervention groups. We found that tVNS robustly enhanced speech category learning and retention of correct stimulus-response associations, but only when stimulation was paired with the easier-to-learn categories. This effect emerged rapidly, generalized to new exemplars, and was qualitatively different from the normal individual variability observed in hundreds of learners who have performed the same task without stimulation. Electroencephalography recorded before and after training indicated no evidence of tVNS-induced changes in the sensory representation of auditory stimuli. These results suggest that paired tVNS induces a temporally precise neuromodulatory signal that selectively enhances the perception and memory consolidation of perceptually salient categories.

DOI: http://dx.doi.org/10.1038/s41539-020-0070-0
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7410845
August 2020

Interictal Epileptiform Discharges and the Quality of Human Intracranial Neurophysiology Data.

Front Hum Neurosci 2020 3;14:44. Epub 2020 Mar 3.

Department of Neurological Surgery, University of California, San Francisco, San Francisco, CA, United States.

Intracranial electroencephalography (iEEG) involves recording from electrodes placed directly onto the cortical surface or into deep brain locations. It is performed on patients with medically refractory epilepsy undergoing pre-surgical seizure localization. iEEG recordings, combined with advancements in computational capacity and analysis tools, have accelerated cognitive neuroscience. This Perspective describes a potential pitfall latent in many of these recordings by virtue of the subject population: interictal epileptiform discharges (IEDs), which can cause spurious results due to the contamination of normal neurophysiological signals by pathological waveforms related to epilepsy. We first discuss the nature of IED hazards and why they deserve the attention of neurophysiology researchers. We then describe four general strategies for handling IEDs (manual identification, automated identification, manual-automated hybrids, and ignoring them by leaving them in the data), and discuss their pros, cons, and contextual factors. Finally, we describe current practices of human neurophysiology researchers worldwide based on a cross-sectional literature review and a voluntary survey. We put these results in the context of the listed strategies and make suggestions on improving awareness and clarity of reporting to enrich both data quality and communication in the field.
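
As a concrete illustration of the second strategy (automated identification), here is a minimal sketch that flags putative IEDs as brief high-amplitude excursions and drops trials that overlap them. The z-score threshold, data layout, and trial representation are assumptions for illustration, not a validated detector.

import numpy as np

def flag_ied_samples(data, z_thresh=6.0):
    # data: (channels x samples) voltage traces from one recording block.
    z = (data - data.mean(axis=1, keepdims=True)) / data.std(axis=1, keepdims=True)
    return (np.abs(z) > z_thresh).any(axis=0)  # True where any channel spikes

def keep_clean_trials(trial_onsets, trial_len, ied_mask):
    # Retain trials whose sample window contains no flagged samples.
    return [t for t in trial_onsets if not ied_mask[t:t + trial_len].any()]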

DOI: http://dx.doi.org/10.3389/fnhum.2020.00044
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7062638
March 2020

Real-time decoding of question-and-answer speech dialogue using human cortical activity.

Nat Commun 2019 07 30;10(1):3096. Epub 2019 Jul 30.

Department of Neurological Surgery and the Center for Integrative Neuroscience at UC San Francisco, 675 Nelson Rising Lane, San Francisco, CA, 94158, USA.

Natural communication often occurs in dialogue, differentially engaging auditory and sensorimotor brain regions during listening and speaking. However, previous attempts to decode speech directly from the human brain have typically considered listening or speaking tasks in isolation. Here, human participants listened to questions and responded aloud with answers while we used high-density electrocorticography (ECoG) recordings to detect when they heard or said an utterance and then to decode the utterance's identity. Because certain answers were plausible responses only to certain questions, we could dynamically update the prior probabilities of each answer using the decoded question likelihoods as context. We decoded produced and perceived utterances with accuracy rates as high as 61% and 76%, respectively (chance is 7% and 20%). Contextual integration of decoded question likelihoods significantly improved answer decoding. These results demonstrate real-time decoding of speech in an interactive, conversational setting, which has important implications for patients who are unable to communicate.
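
The context-integration step lends itself to a simple Bayesian reading: the decoded question likelihoods induce a prior over answers, which multiplies the answer likelihoods from the neural decoder. The toy numbers below are invented to illustrate the computation and are not taken from the study.

import numpy as np

# Plausibility of each answer given each question (rows sum to 1).
p_answer_given_question = np.array([
    [0.5, 0.5, 0.0, 0.0],   # question 0 -> answers 0 and 1 are plausible
    [0.0, 0.0, 0.5, 0.5],   # question 1 -> answers 2 and 3 are plausible
])
question_likelihood = np.array([0.8, 0.2])          # from the question decoder
answer_likelihood = np.array([0.3, 0.4, 0.2, 0.1])  # from the answer decoder

prior = question_likelihood @ p_answer_given_question  # context-dependent prior
posterior = prior * answer_likelihood
posterior /= posterior.sum()
print(posterior.argmax(), posterior.round(3))  # answer 1 wins after context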

DOI: http://dx.doi.org/10.1038/s41467-019-10994-4
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6667454
July 2019

The Encoding of Speech Sounds in the Superior Temporal Gyrus.

Neuron 2019 06;102(6):1096-1110

Department of Neurological Surgery, University of California, San Francisco, 675 Nelson Rising Lane, San Francisco, CA 94158, USA. Electronic address:

The human superior temporal gyrus (STG) is critical for extracting meaningful linguistic features from speech input. Local neural populations are tuned to acoustic-phonetic features of all consonants and vowels and to dynamic cues for intonational pitch. These populations are embedded throughout broader functional zones that are sensitive to amplitude-based temporal cues. Beyond speech features, STG representations are strongly modulated by learned knowledge and perceptual goals. Currently, a major challenge is to understand how these features are integrated across space and time in the brain during natural speech comprehension. We present a theory that temporally recurrent connections within STG generate context-dependent phonological representations, spanning longer temporal sequences relevant for coherent percepts of syllables, words, and phrases.

DOI: http://dx.doi.org/10.1016/j.neuron.2019.04.023
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6602075
June 2019

The Control of Vocal Pitch in Human Laryngeal Motor Cortex.

Cell 2018 06;174(1):21-31.e9

Weill Institute for Neurosciences, University of California, San Francisco, San Francisco, CA 94158, USA; Department of Neurological Surgery, University of California, San Francisco, San Francisco, CA 94143, USA; UC Berkeley and UCSF Joint Program in Bioengineering, Berkeley, CA 94720, USA. Electronic address:

In speech, the highly flexible modulation of vocal pitch creates intonation patterns that speakers use to convey linguistic meaning. This human ability is unique among primates. Here, we used high-density cortical recordings directly from the human brain to determine the encoding of vocal pitch during natural speech. We found neural populations in bilateral dorsal laryngeal motor cortex (dLMC) that selectively encoded produced pitch but not non-laryngeal articulatory movements. This neural population controlled short pitch accents to express prosodic emphasis on a word in a sentence. Other larynx cortical representations controlling voicing and longer pitch phrase contours were found at separate sites. dLMC sites also encoded vocal pitch during a non-speech singing task. Finally, direct focal stimulation of dLMC evoked laryngeal movements and involuntary vocalization, confirming its causal role in feedforward control. Together, these results reveal the neural basis for the voluntary control of vocal pitch in human speech. VIDEO ABSTRACT.

DOI: http://dx.doi.org/10.1016/j.cell.2018.05.016
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6084806
June 2018

Direct cortical stimulation of inferior frontal cortex disrupts both speech and music production in highly trained musicians.

Cogn Neuropsychol 2019 May - Jun;36(3-4):158-166. Epub 2018 May 22.

Department of Neurological Surgery, University of California, San Francisco , San Francisco , CA , USA.

Music and speech are human-specific behaviours that share numerous properties, including the fine motor skills required to produce them. Given these similarities, previous work has suggested that music and speech may at least partially share neural substrates. To date, much of this work has focused on perception, and has not investigated the neural basis of production, particularly in trained musicians. Here, we report two rare cases of musicians undergoing neurosurgical procedures, where it was possible to directly stimulate the left hemisphere cortex during speech and piano/guitar music production tasks. We found that stimulation to left inferior frontal cortex, including pars opercularis and ventral pre-central gyrus, caused slowing and arrest for both speech and music, and note sequence errors for music. Stimulation to posterior superior temporal cortex only caused production errors during speech. These results demonstrate partially dissociable networks underlying speech and music production, with a shared substrate in frontal regions.

DOI: http://dx.doi.org/10.1080/02643294.2018.1472559
February 2020

Human Sensorimotor Cortex Control of Directly Measured Vocal Tract Movements during Vowel Production.

J Neurosci 2018 03 8;38(12):2955-2966. Epub 2018 Feb 8.

Department of Neurological Surgery,

During speech production, we make vocal tract movements with remarkable precision and speed. Our understanding of how the human brain achieves such proficient control is limited, in part due to the challenge of simultaneously acquiring high-resolution neural recordings and detailed vocal tract measurements. To overcome this challenge, we combined ultrasound and video monitoring of the supralaryngeal articulators (lips, jaw, and tongue) with electrocorticographic recordings from the cortical surface of 4 subjects (3 female, 1 male) to investigate how neural activity in the ventral sensory-motor cortex (vSMC) relates to measured articulator movement kinematics (position, speed, velocity, acceleration) during the production of English vowels. We found that high-gamma activity at many individual vSMC electrodes strongly encoded the kinematics of one or more articulators, but less so vowel formants and vowel identity. Neural population decoding methods further revealed the structure of kinematic features that distinguish vowels. Encoding of articulator kinematics was sparsely distributed across time and primarily occurred around the time of vowel onset and offset. In contrast, encoding was low during the steady-state portion of the vowel, despite sustained neural activity at some electrodes. Significant representations were found for all kinematic parameters, but speed was the most robust. These findings, enabled by direct vocal tract monitoring, provide novel insights into the representation of articulatory kinematic parameters in the vSMC during speech production.

Speaking requires precise control and coordination of the vocal tract articulators (lips, jaw, and tongue). Despite the impressive proficiency with which humans move these articulators during speech production, our understanding of how the brain achieves such control is rudimentary, in part because the movements themselves are difficult to observe. By simultaneously measuring speech movements and the neural activity that gives rise to them, we demonstrate how neural activity in sensorimotor cortex produces complex, coordinated movements of the vocal tract.
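
To show how the four kinematic parameters relate, here is a minimal sketch that derives velocity, speed, and acceleration from tracked articulator positions by numerical differentiation. The sampling rate, units, and array layout are assumptions, not the study's measurement pipeline.

import numpy as np

def kinematics(xy, fs=30.0):
    # xy: (samples x 2) articulator position (e.g., a lip marker, in mm);
    # fs: video/ultrasound frame rate in Hz.
    vel = np.gradient(xy, 1.0 / fs, axis=0)   # signed velocity per axis
    speed = np.linalg.norm(vel, axis=1)       # scalar speed
    acc = np.gradient(vel, 1.0 / fs, axis=0)  # acceleration per axis
    return vel, speed, acc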

DOI: http://dx.doi.org/10.1523/JNEUROSCI.2382-17.2018
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC5864145
March 2018

Neural correlates of sine-wave speech intelligibility in human frontal and temporal cortex.

Brain Lang 2018 12 4;187:83-91. Epub 2018 Feb 4.

Department of Neurological Surgery, University of California, San Francisco, 505 Parnassus Ave., San Francisco, CA 94143, United States; Center for Integrative Neuroscience, University of California, San Francisco, 675 Nelson Rising Ln., Room 535, San Francisco, CA 94158, United States; Weill Institute for Neurosciences, University of California, San Francisco, 675 Nelson Rising Ln., Room 535, San Francisco, CA 94158, United States. Electronic address:

Auditory speech comprehension is the result of neural computations that occur in a broad network that includes the temporal lobe auditory cortex and the left inferior frontal cortex. It remains unclear how representations in this network differentially contribute to speech comprehension. Here, we recorded high-density direct cortical activity during a sine-wave speech (SWS) listening task to examine detailed neural speech representations when the exact same acoustic input is comprehended versus not comprehended. Listeners heard SWS sentences (pre-exposure), followed by clear versions of the same sentences, which revealed the content of the sounds (exposure), and then the same SWS sentences again (post-exposure). Across all three task phases, high-gamma neural activity in the superior temporal gyrus was similar, distinguishing different words based on bottom-up acoustic features. In contrast, frontal regions showed a more pronounced and sudden increase in activity only when the input was comprehended, which corresponded with stronger representational separability among spatiotemporal activity patterns evoked by different words. We observed this effect only in participants who were not able to comprehend the stimuli during the pre-exposure phase, indicating a relationship between frontal high-gamma activity and speech understanding. Together, these results demonstrate that both frontal and temporal cortical networks are involved in spoken language understanding, and that under certain listening conditions, frontal regions are involved in discriminating speech sounds.

DOI: http://dx.doi.org/10.1016/j.bandl.2018.01.007
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6067983
December 2018

Real-time classification of auditory sentences using evoked cortical activity in humans.

J Neural Eng 2018 06 30;15(3):036005. Epub 2018 Jan 30.

Department of Neurological Surgery, UC San Francisco, CA, United States of America. Center for Integrative Neuroscience, UC San Francisco, CA, United States of America. Graduate Program in Bioengineering, UC Berkeley-UC San Francisco, CA, United States of America.

Objective: Recent research has characterized the anatomical and functional basis of speech perception in the human auditory cortex. These advances have made it possible to decode speech information from activity in brain regions like the superior temporal gyrus, but no published work has demonstrated this ability in real time, which is necessary for neuroprosthetic brain-computer interfaces.

Approach: Here, we introduce a real-time neural speech recognition (rtNSR) software package, which was used to classify spoken input from high-resolution electrocorticography signals in real time. We tested the system with two human subjects implanted with electrode arrays over the lateral brain surface. Subjects listened to multiple repetitions of ten sentences, and rtNSR classified what was heard in real time from neural activity patterns, using direct sentence-level and HMM-based phoneme-level classification schemes.

Main Results: We observed single-trial sentence classification accuracies of [Formula: see text] or higher for each subject with less than 7 minutes of training data, demonstrating the ability of rtNSR to use cortical recordings to perform accurate real-time speech decoding in a limited vocabulary setting.

Significance: Further development and testing of the package with different speech paradigms could influence the design of future speech neuroprosthetic applications.

DOI: http://dx.doi.org/10.1088/1741-2552/aaab6f
June 2018

Neural Encoding of Auditory Features during Music Perception and Imagery.

Cereb Cortex 2018 12;28(12):4222-4233

Helen Wills Neuroscience Institute, University of California, Berkeley, CA, USA.

Despite many behavioral and neuroimaging investigations, it remains unclear how the human cortex represents spectrotemporal sound features during auditory imagery, and how this representation compares to auditory perception. To assess this, we recorded electrocorticographic signals from a patient with epilepsy and proficient musical ability in 2 conditions. First, the participant played 2 piano pieces on an electronic piano with the sound volume of the digital keyboard on. Second, the participant replayed the same piano pieces, but without auditory feedback, and was asked to imagine hearing the music in his mind. In both conditions, the sound output of the keyboard was recorded, allowing precise time-locking between the neural activity and the spectrotemporal content of the music imagery. This novel task design provided a unique opportunity to apply receptive field modeling techniques to quantitatively study neural encoding during auditory mental imagery. In both conditions, we built encoding models to predict high-gamma neural activity (70-150 Hz) from the spectrogram representation of the recorded sound. We found robust spectrotemporal receptive fields during auditory imagery, with substantial, but not complete, overlap in frequency tuning and cortical location compared to receptive fields measured during auditory perception.
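
The encoding-model step can be sketched with ordinary ridge regression on a time-lagged spectrogram, which is one standard way to fit spectrotemporal receptive fields. The lag range, regularization strength, and data shapes below are assumptions on random stand-in data, not the study's fitted models.

import numpy as np
from sklearn.linear_model import Ridge

def lagged_design(spec, n_lags):
    # spec: (time x freq) spectrogram. Returns (time x freq*n_lags) with
    # delayed copies of each frequency channel as columns.
    T, F = spec.shape
    X = np.zeros((T, F * n_lags))
    for lag in range(n_lags):
        X[lag:, lag * F:(lag + 1) * F] = spec[:T - lag]
    return X

rng = np.random.default_rng(1)
spec = rng.random((1000, 32))          # stand-in stimulus spectrogram
hg = rng.standard_normal(1000)         # stand-in high-gamma at one electrode

X = lagged_design(spec, n_lags=30)     # ~300 ms of history at 100 Hz frames
model = Ridge(alpha=10.0).fit(X, hg)
strf = model.coef_.reshape(30, 32)     # (lags x frequencies): the fitted STRF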

DOI: http://dx.doi.org/10.1093/cercor/bhx277
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6215461
December 2018

Neurosurgical Patients as Human Research Subjects: Ethical Considerations in Intracranial Electrophysiology Research.

Neurosurgery 2018 07;83(1):29-37

Weill Institute for Neurosciences, Department of Neurosurgery, University of California San Francisco, San Francisco, California.

Intracranial electrical recordings and stimulation of neurosurgical patients have been central to the advancement of human neuroscience. The use of these methods has rapidly expanded over the last decade due to theoretical and technical advances, as well as the growing number of neurosurgical patients undergoing functional procedures for indications such as epilepsy, tumor resection, and movement disorders. These methods pose the potential for ethical conflict, as they involve basic neuroscientific research utilizing invasive procedures in human patients undergoing treatment for neurological illnesses. This review addresses technical aspects, clinical contexts, and issues of ethical concern, utilizing a framework that is informed by, but also departs from, existing bioethical literature on matters in clinical research. We conclude with proposals for improving informed consent processes to address potential problems specific to intracranial electrophysiology research, a general schema for scrutinizing research-related risk associated with different methods, and a call for the development of consensus to ensure continuing scientific progress alongside crucial patient protections in this promising area of human neuroscience.

DOI: http://dx.doi.org/10.1093/neuros/nyx361
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC5777911
July 2018

Chronic ambulatory electrocorticography from human speech cortex.

Neuroimage 2017 06 7;153:273-282. Epub 2017 Apr 7.

University of California, San Francisco, Department of Neurosurgery, San Francisco, CA 94143, United States; Weill Institute for Neurosciences, University of California, San Francisco, San Francisco, CA 94143, United States.

Direct intracranial recording of human brain activity is an important approach for deciphering neural mechanisms of cognition. Such recordings, usually made in patients with epilepsy undergoing inpatient monitoring for seizure localization, are limited in duration and depend on patients' tolerance for the challenges associated with recovering from brain surgery. Thus, typical intracranial recordings, similar to most non-invasive approaches in humans, provide snapshots of brain activity in acute, highly constrained settings, limiting opportunities to understand long timescale and natural, real-world phenomena. A new device for treating some forms of drug-resistant epilepsy, the NeuroPace RNS® System, includes a cranially implanted neurostimulator and intracranial electrodes that continuously monitor brain activity and respond to incipient seizures with electrical counterstimulation. The RNS System can record epileptic brain activity over years, but whether it can record meaningful, behavior-related physiological responses has not been demonstrated. Here, in a human subject with electrodes implanted over high-level speech-auditory cortex (Wernicke's area; posterior superior temporal gyrus), we report that cortical evoked responses to spoken sentences are robust, selective to phonetic features, and stable over nearly 1.5 years. In a second subject with RNS System electrodes implanted over frontal cortex (Broca's area, posterior inferior frontal gyrus), we found that word production during a naming task reliably evokes cortical responses preceding speech onset. The spatiotemporal resolution, high signal-to-noise ratio, and wireless nature of this system's intracranial recordings make it a powerful new approach for investigating the neural correlates of human cognition over long timescales in natural ambulatory settings.

DOI: http://dx.doi.org/10.1016/j.neuroimage.2017.04.008
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC5482367
June 2017

Perceptual restoration of masked speech in human cortex.

Nat Commun 2016 12 20;7:13619. Epub 2016 Dec 20.

Department of Neurological Surgery, University of California, San Francisco, 675 Nelson Rising Lane, Room 535, San Francisco, California 94158, USA.

Humans are adept at understanding speech despite the fact that our natural listening environment is often filled with interference. An example of this capacity is phoneme restoration, in which part of a word is completely replaced by noise, yet listeners report hearing the whole word. The neurological basis for this unconscious fill-in phenomenon is unknown, despite being a fundamental characteristic of human hearing. Here, using direct cortical recordings in humans, we demonstrate that missing speech is restored at the acoustic-phonetic level in bilateral auditory cortex, in real time. This restoration is preceded by specific neural activity patterns in a separate language area, left frontal cortex, which predicts the word that participants later report hearing. These results demonstrate that during speech perception, missing acoustic content is synthesized online from the integration of incoming sensory cues and the internal neural dynamics that bias word-level expectation and prediction.

DOI: http://dx.doi.org/10.1038/ncomms13619
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC5187421
December 2016

Neural speech recognition: continuous phoneme decoding using spatiotemporal representations of human cortical activity.

J Neural Eng 2016 10 3;13(5):056004. Epub 2016 Aug 3.

Department of Neurological Surgery, UC San Francisco, CA, USA. Center for Integrative Neuroscience, UC San Francisco, CA, USA. Graduate Program in Bioengineering, UC Berkeley-UC San Francisco, CA, USA.

Objective: The superior temporal gyrus (STG) and neighboring brain regions play a key role in human language processing. Previous studies have attempted to reconstruct speech information from brain activity in the STG, but few of them incorporate the probabilistic framework and engineering methodology used in modern speech recognition systems. In this work, we describe the initial efforts toward the design of a neural speech recognition (NSR) system that performs continuous phoneme recognition on English stimuli with arbitrary vocabulary sizes using the high gamma band power of local field potentials in the STG and neighboring cortical areas obtained via electrocorticography.

Approach: The system implements a Viterbi decoder that incorporates phoneme likelihood estimates from a linear discriminant analysis model and transition probabilities from an n-gram phonemic language model (sketched below). Grid searches were used in an attempt to determine optimal parameterizations of the feature vectors and Viterbi decoder.

Main Results: The performance of the system was significantly improved by using spatiotemporal representations of the neural activity (as opposed to purely spatial representations) and by including language modeling and Viterbi decoding in the NSR system.

Significance: These results emphasize the importance of modeling the temporal dynamics of neural responses when analyzing their variations with respect to varying stimuli and demonstrate that speech recognition techniques can be successfully leveraged when decoding speech from neural signals. Guided by the results detailed in this work, further development of the NSR system could have applications in the fields of automatic speech recognition and neural prosthetics.
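
Below is a minimal sketch of the Viterbi decoding scheme named in the Approach: per-frame phoneme log-likelihoods (random stand-ins for the LDA outputs) are combined with bigram transition log-probabilities to find the best state path. The sizes and distributions are illustrative assumptions.

import numpy as np

def viterbi(log_lik, log_trans, log_prior):
    # log_lik: (T x S) frame log-likelihoods; log_trans: (S x S) bigram LM;
    # log_prior: (S,) initial state log-probabilities. Returns the best path.
    T, S = log_lik.shape
    delta = log_prior + log_lik[0]
    psi = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + log_trans   # (from-state x to-state)
        psi[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + log_lik[t]
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(psi[t, path[-1]]))
    return path[::-1]

rng = np.random.default_rng(3)
T, S = 20, 10                                          # toy: frames x phoneme states
log_lik = np.log(rng.dirichlet(np.ones(S), size=T))
log_trans = np.log(rng.dirichlet(np.ones(S), size=S))
print(viterbi(log_lik, log_trans, np.full(S, -np.log(S))))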

DOI: http://dx.doi.org/10.1088/1741-2560/13/5/056004
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC5031534
October 2016

The peri-Sylvian cortical network underlying single word repetition revealed by electrocortical stimulation and direct neural recordings.

Brain Lang 2019 06 19;193:58-72. Epub 2016 Jul 19.

Department of Neurological Surgery, University of California, San Francisco, United States; Center for Integrative Neuroscience, University of California, San Francisco, United States; Department of Physiology, University of California, San Francisco, United States. Electronic address:

Verbal repetition requires the coordination of auditory, memory, linguistic, and motor systems. To date, the basic dynamics of neural information processing in this deceptively simple behavior are largely unknown. Here, we examined the neural processes underlying verbal repetition using focal interruption (electrocortical stimulation) in 58 patients undergoing awake craniotomies, and neurophysiological recordings (electrocorticography) in 8 patients while they performed a single word repetition task. Electrocortical stimulation revealed that sub-components of the left peri-Sylvian network involved in single word repetition could be differentially interrupted, producing transient perceptual deficits, paraphasic errors, or speech arrest. Electrocorticography revealed the detailed spatio-temporal dynamics of cortical activation, involving a highly ordered but overlapping temporal progression of cortical high gamma (75-150 Hz) activity throughout the peri-Sylvian cortex. We observed functionally distinct serial and parallel cortical processing corresponding to successive stages of general auditory processing (posterior superior temporal gyrus), speech-specific auditory processing (middle and posterior superior temporal gyrus), working memory (inferior frontal cortex), and motor articulation (sensorimotor cortex). Together, these methods reveal the dynamics of coordinated activity across peri-Sylvian cortex during verbal repetition.

DOI: http://dx.doi.org/10.1016/j.bandl.2016.06.001
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC5790638
June 2019

The influence of lexical statistics on temporal lobe cortical dynamics during spoken word listening.

Brain Lang 2015 Aug 11;147:66-75. Epub 2015 Jun 11.

Department of Neurological Surgery, University of California, San Francisco, 505 Parnassus Avenue, San Francisco, CA 94143, USA; Department of Physiology, University of California, San Francisco, 505 Parnassus Avenue, San Francisco, CA 94143, USA. Electronic address:

Neural representations of words are thought to have a complex spatio-temporal cortical basis. It has been suggested that spoken word recognition is not a process of feed-forward computations from phonetic to lexical forms, but rather involves the online integration of bottom-up input with stored lexical knowledge. Using direct neural recordings from the temporal lobe, we examined cortical responses to words and pseudowords. We found that neural populations were not only sensitive to lexical status (real vs. pseudo), but also to cohort size (number of words matching the phonetic input at each time point) and cohort frequency (lexical frequency of those words). These lexical variables modulated neural activity from the posterior to anterior temporal lobe, and also dynamically as the stimuli unfolded on a millisecond time scale. Our findings indicate that word recognition is not purely modular, but relies on rapid and online integration of multiple sources of lexical knowledge.
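
The two lexical variables are easy to state computationally: at each position in the unfolding input, cohort size counts the words still consistent with the phonemes heard so far, and cohort frequency sums their lexical frequencies. The toy phonemic lexicon and counts below are invented for illustration.

lexicon = {("t", "ae", "s", "k"): 120,   # "task": phoneme tuple -> frequency
           ("t", "ae", "p"): 300,        # "tap"
           ("t", "iy", "m"): 250}        # "team"

def cohort_stats(phones):
    stats = []
    for i in range(1, len(phones) + 1):
        cohort = {w: f for w, f in lexicon.items() if w[:i] == tuple(phones[:i])}
        stats.append((len(cohort), sum(cohort.values())))
    return stats  # one (cohort size, cohort frequency) pair per time point

print(cohort_stats(["t", "ae", "s", "k"]))  # [(3, 670), (2, 420), (1, 120), (1, 120)]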

DOI: http://dx.doi.org/10.1016/j.bandl.2015.05.005
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4521602
August 2015

Dynamic encoding of speech sequence probability in human temporal cortex.

J Neurosci 2015 May;35(18):7203-14

Department of Neurological Surgery, University of California, San Francisco, San Francisco, California 94158, Department of Physiology, University of California, San Francisco, San Francisco, California 94158,

Sensory processing involves identification of stimulus features, but also integration with the surrounding sensory and cognitive context. Previous work in animals and humans has shown fine-scale sensitivity to context in the form of learned knowledge about the statistics of the sensory environment, including relative probabilities of discrete units in a stream of sequential auditory input. These statistics are a defining characteristic of one of the most important sequential signals humans encounter: speech. For speech, extensive exposure to a language tunes listeners to the statistics of sound sequences. To address how speech sequence statistics are neurally encoded, we used high-resolution direct cortical recordings from human lateral superior temporal cortex as subjects listened to words and nonwords with varying transition probabilities between sound segments. In addition to their sensitivity to acoustic features (including contextual features, such as coarticulation), we found that neural responses dynamically encoded the language-level probability of both preceding and upcoming speech sounds. Transition probability first negatively modulated neural responses, followed by positive modulation of neural responses, consistent with coordinated predictive and retrospective recognition processes, respectively. Furthermore, transition probability encoding was different for real English words compared with nonwords, providing evidence for online interactions with high-order linguistic knowledge. These results demonstrate that sensory processing of deeply learned stimuli involves integrating physical stimulus features with their contextual sequential structure. Despite not being consciously aware of phoneme sequence statistics, listeners use this information to process spoken input and to link low-level acoustic representations with linguistic information about word identity and meaning.
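
A hedged sketch of the central stimulus variable: the transition probability of each segment given the previous one, estimated here from a tiny frequency-weighted phonemic lexicon. The lexicon and the bigram estimator are illustrative assumptions, not the study's corpus statistics.

from collections import defaultdict

lexicon = {("s", "t", "ey"): 200, ("s", "p", "ey"): 50, ("s", "t", "aa"): 100}

counts = defaultdict(lambda: defaultdict(float))
for word, freq in lexicon.items():
    for a, b in zip(word, word[1:]):
        counts[a][b] += freq  # frequency-weighted biphone counts

def transition_prob(prev, nxt):
    total = sum(counts[prev].values())
    return counts[prev][nxt] / total if total else 0.0

print(transition_prob("s", "t"))  # 300 / 350 ~ 0.857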

DOI: http://dx.doi.org/10.1523/JNEUROSCI.4100-14.2015
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4420784
May 2015

Facial emotion recognition in agenesis of the corpus callosum.

J Neurodev Disord 2014 14;6(1):32. Epub 2014 Aug 14.

Division of Humanities and Social Sciences, Caltech, 91125 Pasadena, CA, USA.

Background: Impaired social functioning is a common symptom of individuals with developmental disruptions in callosal connectivity. Among these developmental conditions, agenesis of the corpus callosum provides the most extreme and clearly identifiable example of callosal disconnection. To date, deficits in nonliteral language comprehension, humor, theory of mind, and social reasoning have been documented in agenesis of the corpus callosum. Here, we examined a basic social ability as yet not investigated in this population: recognition of facial emotion and its association with social gaze.

Methods: Nine individuals with callosal agenesis and nine matched controls completed four tasks involving emotional faces: emotion recognition from upright and inverted faces, gender recognition, and passive viewing. Eye-tracking data were collected concurrently on all four tasks and analyzed according to designated facial regions of interest.

Results: Individuals with callosal agenesis exhibited impairments in recognizing emotions from upright faces, in particular lower accuracy for fear and anger, and these impairments were directly associated with diminished attention to the eye region. The callosal agenesis group exhibited greater consistency in emotion recognition across conditions (upright vs. inverted), with poorest performance for fear identification in both conditions. The callosal agenesis group also had atypical facial scanning (lower fractional dwell time in the eye region) during gender naming and passive viewing of faces, but they did not differ from controls on gender naming performance. The pattern of results did not differ when taking into account full-scale intelligence quotient or presence of autism spectrum symptoms.

Conclusions: Agenesis of the corpus callosum results in a pattern of atypical facial scanning characterized by diminished attention to the eyes. This pattern suggests that reduced callosal connectivity may contribute to the development and maintenance of emotion processing deficits involving reduced attention to others' eyes.

DOI: http://dx.doi.org/10.1186/1866-1955-6-32
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4335392
February 2015

Neural Language Processing in Adolescent First-Language Learners: Longitudinal Case Studies in American Sign Language.

Cereb Cortex 2016 Mar 19;26(3):1015-26. Epub 2014 Nov 19.

Department of Linguistics.

One key question in neurolinguistics is the extent to which the neural processing system for language requires linguistic experience during early life to develop fully. We conducted a longitudinal anatomically constrained magnetoencephalography (aMEG) analysis of lexico-semantic processing in 2 deaf adolescents who had no sustained language input until 14 years of age, when they became fully immersed in American Sign Language. After 2 to 3 years of language, the adolescents' neural responses to signed words were highly atypical, localizing mainly to right dorsal frontoparietal regions and often responding more strongly to semantically primed words (Ferjan Ramirez N, Leonard MK, Torres C, Hatrak M, Halgren E, Mayberry RI. 2014. Neural language processing in adolescent first-language learners. Cereb Cortex. 24 (10): 2772-2783). Here, we show that after an additional 15 months of language experience, the adolescents' neural responses remained atypical in terms of polarity. While their responses to less familiar signed words still showed atypical localization patterns, the localization of responses to highly familiar signed words became more concentrated in the left perisylvian language network. Our findings suggest that the timing of language experience affects the organization of neural language processing; however, even in adolescence, language representation in the human brain continues to evolve with experience.

DOI: http://dx.doi.org/10.1093/cercor/bhu273
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4737603
March 2016

Dynamic speech representations in the human temporal lobe.

Trends Cogn Sci 2014 Sep 3;18(9):472-9. Epub 2014 Jun 3.

Department of Neurological Surgery, University of California, San Francisco, 675 Nelson Rising Lane, Room 535, San Francisco, CA 94158, USA. Electronic address:

Speech perception requires rapid integration of acoustic input with context-dependent knowledge. Recent methodological advances have allowed researchers to identify underlying information representations in primary and secondary auditory cortex and to examine how context modulates these representations. We review recent studies that focus on contextual modulations of neural activity in the superior temporal gyrus (STG), a major hub for spectrotemporal encoding. Recent findings suggest a highly interactive flow of information processing through the auditory ventral stream, including influences of higher-level linguistic and metalinguistic knowledge, even within individual areas. Such mechanisms may give rise to more abstract representations, such as those for words. We discuss the importance of characterizing representations of context-dependent and dynamic patterns of neural activity in the approach to speech perception research.

DOI: http://dx.doi.org/10.1016/j.tics.2014.05.001
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4149812
September 2014

Neural stages of spoken, written, and signed word processing in beginning second language learners.

Front Hum Neurosci 2013 2;7:322. Epub 2013 Jul 2.

Department of Radiology, University of California San Diego, La Jolla, CA, USA ; Multimodal Imaging Laboratory, Department of Radiology, University of California San Diego, La Jolla, CA, USA.

We combined magnetoencephalography (MEG) and magnetic resonance imaging (MRI) to examine how sensory modality, language type, and language proficiency interact during two fundamental stages of word processing: (1) an early word encoding stage, and (2) a later supramodal lexico-semantic stage. Adult native English speakers who were learning American Sign Language (ASL) performed a semantic task for spoken and written English words, and ASL signs. During the early time window, written words evoked responses in left ventral occipitotemporal cortex, and spoken words in left superior temporal cortex. Signed words evoked activity in right intraparietal sulcus that was marginally greater than for written words. During the later time window, all three types of words showed significant activity in the classical left fronto-temporal language network, the first demonstration of such activity in individuals with so little second language (L2) instruction in sign. In addition, a dissociation between semantic congruity effects and overall MEG response magnitude for ASL responses suggested shallower and more effortful processing, presumably reflecting novice L2 learning. Consistent with previous research on non-dominant language processing in spoken languages, the L2 ASL learners also showed recruitment of right hemisphere and lateral occipital cortex. These results demonstrate that late lexico-semantic processing utilizes a common substrate, independent of modality, and that proficiency effects in sign language are comparable to those in spoken language.

DOI: http://dx.doi.org/10.3389/fnhum.2013.00322
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3698463
July 2013

Neural language processing in adolescent first-language learners.

Cereb Cortex 2014 Oct 21;24(10):2772-83. Epub 2013 May 21.

Department of Linguistics.

The relation between the timing of language input and development of neural organization for language processing in adulthood has been difficult to tease apart because language is ubiquitous in the environment of nearly all infants. However, within the congenitally deaf population are individuals who do not experience language until after early childhood. Here, we investigated the neural underpinnings of American Sign Language (ASL) in 2 adolescents who had no sustained language input until they were approximately 14 years old. Using anatomically constrained magnetoencephalography, we found that recently learned signed words mainly activated right superior parietal, anterior occipital, and dorsolateral prefrontal areas in these 2 individuals. This spatiotemporal activity pattern was significantly different from the left fronto-temporal pattern observed in young deaf adults who acquired ASL from birth, and from that of hearing young adults learning ASL as a second language for a similar length of time as the cases. These results provide direct evidence that the timing of language experience over human development affects the organization of neural language processing.

DOI: http://dx.doi.org/10.1093/cercor/bht137
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4153811
October 2014

Speech-specific tuning of neurons in human superior temporal gyrus.

Cereb Cortex 2014 Oct 16;24(10):2679-93. Epub 2013 May 16.

Department of Neurology.

How the brain extracts words from auditory signals is an unanswered question. We recorded approximately 150 single and multi-units from the left anterior superior temporal gyrus of a patient during multiple auditory experiments. Against low background activity, 45% of units robustly fired to particular spoken words with little or no response to pure tones, noise-vocoded speech, or environmental sounds. Many units were tuned to complex but specific sets of phonemes, which were influenced by local context but invariant to speaker, and suppressed during self-produced speech. The firing of several units to specific visual letters was correlated with their response to the corresponding auditory phonemes, providing the first direct neural evidence for phonological recoding during reading. Maximal decoding of individual phoneme and word identities was attained using firing rates from approximately 5 neurons within 200 ms after word onset. Thus, neurons in human superior temporal gyrus use sparse, spatially organized population encoding of complex acoustic-phonetic features to help recognize auditory and visual words.

DOI: http://dx.doi.org/10.1093/cercor/bht127
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4162511
October 2014

Age-related changes in tissue signal properties within cortical areas important for word understanding in 12- to 19-month-old infants.

Cereb Cortex 2014 Jul 28;24(7):1948-55. Epub 2013 Feb 28.

Department of Radiology, Multimodal Imaging Laboratory, Kavli Institute for Brain and Mind, and.

Recently, our laboratory has shown that the neural mechanisms for encoding lexico-semantic information in adults operate functionally by 12-18 months of age within left frontotemporal cortices (Travis et al., 2011. Spatiotemporal neural dynamics of word understanding in 12- to 18-month-old infants. Cereb Cortex. 21(8):1832-1839). However, there is minimal knowledge of the structural changes that occur within these and other cortical regions important for language development. To identify regional structural changes taking place during this important period in infant development, we examined age-related changes in tissue signal properties of gray matter (GM) and white matter (WM) intensity and contrast. T1-weighted surface-based measures were acquired from 12- to 19-month-old infants and analyzed using a general linear model. Significant age effects were observed for GM and WM intensity and contrast within bilateral inferior lateral and anteroventral temporal regions, dorsomedial frontal, and superior parietal cortices. Region of interest (ROI) analyses revealed that GM and WM intensity and contrast significantly increased with age within the same left lateral temporal regions shown to generate lexico-semantic activity in infants and adults. These findings suggest that neurophysiological processes supporting linguistic and cognitive behaviors may develop before cellular and structural maturation is complete within associative cortices. These results have important implications for understanding the neurobiological mechanisms relating structural to functional brain development.

DOI: http://dx.doi.org/10.1093/cercor/bht052
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4051897
July 2014

Independence of early speech processing from word meaning.

Cereb Cortex 2013 Oct 8;23(10):2370-9. Epub 2012 Aug 8.

Department of Neurosciences.

We combined magnetoencephalography (MEG) with magnetic resonance imaging and electrocorticography to separate in anatomy and latency 2 fundamental stages underlying speech comprehension. The first acoustic-phonetic stage is selective for words relative to control stimuli individually matched on acoustic properties. It begins ∼60 ms after stimulus onset and is localized to middle superior temporal cortex. It was replicated in another experiment, but is strongly dissociated from the response to tones in the same subjects. Within the same task, semantic priming of the same words by a related picture modulates cortical processing in a broader network, but this does not begin until ∼217 ms. The earlier onset of acoustic-phonetic processing compared with lexico-semantic modulation was significant in each individual subject. The MEG source estimates were confirmed with intracranial local field potential and high gamma power responses acquired in 2 additional subjects performing the same task. These recordings further identified sites within superior temporal cortex that responded only to the acoustic-phonetic contrast at short latencies, or the lexico-semantic at long. The independence of the early acoustic-phonetic response from semantic context suggests a limited role for lexical feedback in early speech perception.

DOI: http://dx.doi.org/10.1093/cercor/bhs228
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3767959
October 2013

Signed words in the congenitally deaf evoke typical late lexicosemantic responses with no early visual responses in left superior temporal cortex.

J Neurosci 2012 Jul;32(28):9700-5

Department of Radiology, University of California, San Diego (UCSD), La Jolla, California 92093-0108, USA.

Congenitally deaf individuals receive little or no auditory input, and when raised by deaf parents, they acquire sign as their native and primary language. We asked two questions regarding how the deaf brain in humans adapts to sensory deprivation: (1) is meaning extracted and integrated from signs using the same classical left hemisphere frontotemporal network used for speech in hearing individuals, and (2) in deafness, is superior temporal cortex encompassing primary and secondary auditory regions reorganized to receive and process visual sensory information at short latencies? Using MEG constrained by individual cortical anatomy obtained with MRI, we examined an early time window associated with sensory processing and a late time window associated with lexicosemantic integration. We found that sign in deaf individuals and speech in hearing individuals activate a highly similar left frontotemporal network (including superior temporal regions surrounding auditory cortex) during lexicosemantic processing, but only speech in hearing individuals activates auditory regions during sensory processing. Thus, neural systems dedicated to processing high-level linguistic information are used for processing language regardless of modality or hearing status, and we do not find evidence for rewiring of afferent connections from visual systems to auditory cortex.

DOI: http://dx.doi.org/10.1523/JNEUROSCI.1002-12.2012
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3418348
July 2012

Language proficiency modulates the recruitment of non-classical language areas in bilinguals.

PLoS One 2011 Mar 24;6(3):e18240. Epub 2011 Mar 24.

Department of Cognitive Science, University of California San Diego, La Jolla, California, United States of America.

Bilingualism provides a unique opportunity for understanding the relative roles of proficiency and order of acquisition in determining how the brain represents language. In a previous study, we combined magnetoencephalography (MEG) and magnetic resonance imaging (MRI) to examine the spatiotemporal dynamics of word processing in a group of Spanish-English bilinguals who were more proficient in their native language. We found that from the earliest stages of lexical processing, words in the second language evoke greater activity in bilateral posterior visual regions, while activity to the native language is largely confined to classical left hemisphere fronto-temporal areas. In the present study, we sought to examine whether these effects relate to language proficiency or order of language acquisition by testing Spanish-English bilingual subjects who had become dominant in their second language. Additionally, we wanted to determine whether activity in bilateral visual regions was related to the presentation of written words in our previous study, so we presented subjects with both written and auditory words. We found greater activity for the less proficient native language in bilateral posterior visual regions for both the visual and auditory modalities, which started during the earliest word encoding stages and continued through lexico-semantic processing. In classical left fronto-temporal regions, the two languages evoked similar activity. Therefore, it is the lack of proficiency rather than secondary acquisition order that determines the recruitment of non-classical areas for word processing.

PLOS: http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0018240
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3063800
March 2011

Spatiotemporal neural dynamics of word understanding in 12- to 18-month-old infants.

Cereb Cortex 2011 Aug 5;21(8):1832-9. Epub 2011 Jan 5.

Department of Neurosciences, University of California, San Diego, La Jolla, CA 92093-0662, USA.

Learning words is central in human development. However, lacking clear evidence for how or where language is processed in the developing brain, it is unknown whether these processes are similar in infants and adults. Here, we use magnetoencephalography in combination with high-resolution structural magnetic resonance imaging to noninvasively estimate the spatiotemporal distribution of word-selective brain activity in 12- to 18-month-old infants. Infants watched pictures of common objects and listened to words that they understood. A subset of these infants also listened to familiar words compared with sensory control sounds. In both experiments, words evoked a characteristic event-related brain response peaking ∼400 ms after word onset, which localized to left frontotemporal cortices. In adults, this activity, termed the N400m, is associated with lexico-semantic encoding. As in adults, we find that the amplitude of the infant N400m is modulated by semantic priming, being reduced for words preceded by a semantically related picture. These findings suggest that similar left frontotemporal areas are used for encoding lexico-semantic information throughout the life span, from the earliest stages of word learning. Furthermore, this ontogenetic consistency implies that the neurophysiological processes underlying the N400m may be important both for understanding already known words and for learning new words.

DOI: http://dx.doi.org/10.1093/cercor/bhq259
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3138516
August 2011

Multimodal imaging of repetition priming: Using fMRI, MEG, and intracranial EEG to reveal spatiotemporal profiles of word processing.

Neuroimage 2010 Nov 8;53(2):707-17. Epub 2010 Jul 8.

Department of Psychiatry, University of California, San Diego, CA, USA.

Repetition priming is a core feature of memory processing whose anatomical correlates remain poorly understood. In this study, we use advanced multimodal imaging (functional magnetic resonance imaging (fMRI) and magnetoencephalography (MEG)) to investigate the spatiotemporal profile of repetition priming, and we use intracranial electroencephalography (iEEG) to validate our fMRI/MEG measurements. Twelve controls completed a semantic judgment task with fMRI and MEG that included words presented once (new, 'N') and words that repeated (old, 'O'). Six patients with epilepsy completed the same task during iEEG recordings. Blood-oxygen level dependent (BOLD) responses for N vs. O words were examined across the cortical surface and within regions of interest. MEG waveforms for N vs. O words were estimated using a noise-normalized minimum norm solution and used to interpret the timecourse of fMRI. Spatial concordance was observed between fMRI and MEG repetition effects from 350 to 450 ms within bilateral occipitotemporal and medial temporal, left prefrontal, and left posterior temporal cortex. Additionally, MEG revealed widespread sources within left temporoparietal regions, whereas fMRI revealed bilateral reductions in occipitotemporal and left superior frontal activity, and increases in inferior parietal, precuneus, and dorsolateral prefrontal activity. BOLD suppression in left posterior temporal, left inferior prefrontal, and right occipitotemporal cortex correlated with MEG repetition-related reductions. iEEG responses from all three regions supported the timecourse of MEG and the localization of fMRI. Furthermore, iEEG decreases to repeated words were associated with decreased gamma power in several regions, providing evidence that gamma oscillations are tightly coupled to cognitive phenomena and reflect regional activations seen in the BOLD signal.
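
For the MEG step, a noise-normalized minimum norm estimate (in the spirit of dSPM) can be sketched as follows: an L2 minimum-norm inverse is applied to the sensor data, then each source time course is divided by its predicted noise standard deviation. The gain matrix, noise covariance, and regularization values below are toy stand-ins, not the study's inverse solution.

import numpy as np

rng = np.random.default_rng(2)
n_sensors, n_sources, n_times = 50, 200, 100
G = rng.standard_normal((n_sensors, n_sources))   # forward (gain) matrix
C = np.eye(n_sensors)                             # noise covariance (whitened)
y = rng.standard_normal((n_sensors, n_times))     # sensor data
lam = 1.0                                         # regularization parameter

# Minimum-norm inverse operator: W = G' (G G' + lam C)^-1
W = G.T @ np.linalg.inv(G @ G.T + lam * C)
s = W @ y                                         # source estimates (sources x time)

# Noise normalization: divide each source by sqrt(diag(W C W')).
noise_sd = np.sqrt(np.einsum("ij,jk,ik->i", W, C, W))
s_dspm = s / noise_sd[:, None]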

DOI: http://dx.doi.org/10.1016/j.neuroimage.2010.06.069
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2930128
November 2010