Publications by authors named "Alessandro Presacco"

24 Publications


Selective Facial Muscle Activation with Acute and Chronic Multichannel Cuff Electrode Implantation in a Feline Model.

Ann Otol Rhinol Laryngol 2021 Jun 6:34894211023218. Epub 2021 Jun 6.

Department of Otolaryngology-Head and Neck Surgery, University of California, Irvine, CA, USA.

Objectives: Facial paralysis is a debilitating condition with substantial functional and psychological consequences. This feline-model study evaluates whether facial muscles can be selectively activated by acutely and chronically implanted 16-channel multichannel cuff electrodes (MCEs).

Methods: Two cats underwent acute terminal MCE implantation experiments, 2 underwent chronic MCE implantation in uninjured facial nerves (FN) and were tested for 6 months, and 2 underwent chronic MCE implantation after FN transection injury and were tested for 3 months. The MCEs were wrapped around the main trunk of the skeletonized FN, and data collection consisted of EMG thresholds, amplitudes, and selectivity of muscle activation.

Results: In acute experimentation, activation of specific channels (ie, channels 1-3 and 6-8) resulted in selective activation of one facial muscle, whereas activation of other channels (ie, channels 4, 5, or 8) led to selective activation of another, with higher EMG amplitudes. MCE implantation yielded stable and selective facial muscle activation EMG thresholds and amplitudes over a period of up to 5 months. Modest selective muscle activation was also obtained after a complete transection-reapproximation nerve injury, following a 3-month recovery period and implantation reoperation. Chronic implantation of the MCE did not lead to fibrosis on histology. Field steering was achieved to activate distinct facial muscles by sending simultaneous subthreshold currents to multiple channels, thus theoretically protecting against nerve damage from chronic electrical stimulation.

Conclusion: Our proof-of-concept results show the ability of an MCE, supplemented with field steering, to provide a degree of selective facial muscle stimulation in a feline model, even following nerve regeneration after FN injury.

Level Of Evidence: N/A.
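
The field-steering result lends itself to a small numerical illustration. The sketch below is not from the paper; the channel names and contribution values are invented solely to show how two individually subthreshold currents can sum to a suprathreshold field at a target site while sparing a neighboring one.

```python
# Toy illustration of field steering (all values invented, not measured).
threshold = 1.0  # normalized activation threshold of a nerve region

# Fractional field contribution of two simultaneously driven MCE channels
# at two tissue sites: one midway between the contacts, one near channel 3.
target = {"ch3": 0.6, "ch4": 0.6}     # both channels contribute strongly
neighbor = {"ch3": 0.6, "ch4": 0.2}   # reached mostly by channel 3

print(sum(target.values()) > threshold)    # True: summed field is suprathreshold
print(sum(neighbor.values()) > threshold)  # False: neighbor stays subthreshold
```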
Source
http://dx.doi.org/10.1177/00034894211023218

Exaggerated cortical representation of speech in older listeners: mutual information analysis.

J Neurophysiol 2020 10 2;124(4):1152-1164. Epub 2020 Sep 2.

Department of Electrical and Computer Engineering, University of Maryland, College Park, Maryland.

Aging is associated with an exaggerated representation of the speech envelope in auditory cortex. The relationship between this age-related exaggerated response and a listener's ability to understand speech in noise remains an open question. Here, information-theory-based analysis methods are applied to magnetoencephalography recordings of human listeners, investigating their cortical responses to continuous speech, using the novel nonlinear measure of phase-locked mutual information between the speech stimuli and cortical responses. The cortex of older listeners shows an exaggerated level of mutual information, compared with younger listeners, for both attended and unattended speakers. The mutual information peaks for several distinct latencies: early (∼50 ms), middle (∼100 ms), and late (∼200 ms). For the late component, the neural enhancement of attended over unattended speech is affected by stimulus signal-to-noise ratio, but the direction of this dependency is reversed by aging. Critically, in older listeners and for the same late component, greater cortical exaggeration is correlated with decreased behavioral inhibitory control. This negative correlation also carries over to speech intelligibility in noise, where greater cortical exaggeration in older listeners is correlated with worse speech intelligibility scores. Finally, an age-related lateralization difference is also seen for the ∼100 ms latency peaks, where older listeners show a bilateral response compared with younger listeners' right lateralization. Thus, this information-theory-based analysis provides new, and less coarse-grained, results regarding age-related change in auditory cortical speech processing, and its correlation with cognitive measures, compared with related linear measures.

Cortical representations of natural speech are investigated using a novel nonlinear approach based on mutual information. Cortical responses, phase-locked to the speech envelope, show an exaggerated level of mutual information associated with aging, appearing at several distinct latencies (∼50, ∼100, and ∼200 ms). Critically, for older listeners only, the ∼200 ms latency response components are correlated with specific behavioral measures, including behavioral inhibition and speech comprehension.
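
As a reading aid, here is a minimal sketch of the kind of binned mutual-information estimate the abstract describes, between a stimulus feature (e.g., the speech envelope) and a neural response. It is not the authors' pipeline; the bin count and the toy signals are assumptions.

```python
# Histogram-based mutual information between two aligned 1-D signals.
import numpy as np

def mutual_information(x, y, n_bins=8):
    """Estimate MI (in bits) between two continuous signals via binning."""
    joint, _, _ = np.histogram2d(x, y, bins=n_bins)
    p_xy = joint / joint.sum()                 # joint probability mass function
    p_x = p_xy.sum(axis=1, keepdims=True)      # marginal of x
    p_y = p_xy.sum(axis=0, keepdims=True)      # marginal of y
    mask = p_xy > 0                            # avoid log(0) on empty cells
    return np.sum(p_xy[mask] * np.log2(p_xy[mask] / (p_x @ p_y)[mask]))

rng = np.random.default_rng(0)
envelope = rng.standard_normal(10_000)                       # toy stimulus feature
response = 0.6 * envelope + 0.8 * rng.standard_normal(10_000)  # toy "cortical" signal
print(f"MI ≈ {mutual_information(envelope, response):.3f} bits")
```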
Source
http://dx.doi.org/10.1152/jn.00002.2020
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7717162

High gamma cortical processing of continuous speech in younger and older listeners.

Neuroimage 2020 11 21;222:117291. Epub 2020 Aug 21.

Department of Electrical and Computer Engineering, University of Maryland, College Park, MD, United States; Institute for Systems Research, University of Maryland, College Park, Maryland, United States; Department of Biology, University of Maryland, College Park, Maryland, United States.

Neural processing along the ascending auditory pathway is often associated with a progressive reduction in characteristic processing rates. For instance, the well-known frequency-following response (FFR) of the auditory midbrain, as measured with electroencephalography (EEG), is dominated by frequencies from ∼100 Hz to several hundred Hz, phase-locking to the acoustic stimulus at those frequencies. In contrast, cortical responses, whether measured by EEG or magnetoencephalography (MEG), are typically characterized by frequencies of a few Hz to a few tens of Hz, time-locking to acoustic envelope features. In this study we investigated a crossover case, cortically generated responses time-locked to continuous speech features at FFR-like rates. Using MEG, we analyzed responses in the high gamma range of 70-200 Hz to continuous speech using neural source-localized reverse correlation and the corresponding temporal response functions (TRFs). Continuous speech stimuli were presented to 40 subjects (17 younger, 23 older adults) with clinically normal hearing and their MEG responses were analyzed in the 70-200 Hz band. Consistent with the relative insensitivity of MEG to many subcortical structures, the spatiotemporal profile of these response components indicated a cortical origin with ∼40 ms peak latency and a right hemisphere bias. TRF analysis was performed using two separate aspects of the speech stimuli: a) the 70-200 Hz carrier of the speech, and b) the 70-200 Hz temporal modulations in the spectral envelope of the speech stimulus. The response was dominantly driven by the envelope modulation, with a much weaker contribution from the carrier. Age-related differences were also analyzed to investigate a reversal previously seen along the ascending auditory pathway, whereby older listeners show weaker midbrain FFR responses than younger listeners, but, paradoxically, have stronger cortical low frequency responses. In contrast to both these earlier results, this study did not find clear age-related differences in high gamma cortical responses to continuous speech. Cortical responses at FFR-like frequencies shared some properties with midbrain responses at the same frequencies and with cortical responses at much lower frequencies.
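
Below is a minimal sketch of deriving the two predictor types named in the abstract (the 70-200 Hz carrier and the 70-200 Hz modulations of the spectral envelope). It is an illustration, not the study's code: the sampling rate is assumed, random noise stands in for speech, and a Hilbert envelope approximates the spectral envelope.

```python
# Two high-gamma predictors from a (simulated) speech waveform.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 1000                                  # Hz, assumed sampling rate
b, a = butter(4, [70, 200], btype="bandpass", fs=fs)

rng = np.random.default_rng(1)
speech = rng.standard_normal(10 * fs)      # stand-in for a speech waveform

carrier = filtfilt(b, a, speech)           # predictor (a): 70-200 Hz carrier
broad_env = np.abs(hilbert(speech))        # coarse proxy for the spectral envelope
env_mod = filtfilt(b, a, broad_env)        # predictor (b): its 70-200 Hz modulations
```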
Source
http://dx.doi.org/10.1016/j.neuroimage.2020.117291
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7736126

Dynamic estimation of auditory temporal response functions via state-space models with Gaussian mixture process noise.

PLoS Comput Biol 2020 08 19;16(8):e1008172. Epub 2020 Aug 19.

Institute for Systems Research, University of Maryland, College Park, Maryland, United States of America.

Estimating the latent dynamics underlying biological processes is a central problem in computational biology. State-space models with Gaussian statistics are widely used for estimation of such latent dynamics and have been successfully utilized in the analysis of biological data. Gaussian statistics, however, fail to capture several key features of the dynamics of biological processes (e.g., brain dynamics) such as abrupt state changes and exogenous processes that affect the states in a structured fashion. Although Gaussian mixture process noise models have been considered as an alternative to capture such effects, data-driven inference of their parameters is not well-established in the literature. The objective of this paper is to develop efficient algorithms for inferring the parameters of a general class of Gaussian mixture process noise models from noisy and limited observations, and to utilize them in extracting the neural dynamics that underlie auditory processing from magnetoencephalography (MEG) data in a cocktail party setting. We develop an algorithm based on Expectation-Maximization to estimate the process noise parameters from state-space observations. We apply our algorithm to simulated and experimentally-recorded MEG data from auditory experiments in the cocktail party paradigm to estimate the underlying dynamic Temporal Response Functions (TRFs). Our simulation results show that the richer representation of the process noise as a Gaussian mixture significantly improves state estimation and better captures the heterogeneity of the TRF dynamics. Application to MEG data reveals improvements over existing TRF estimation techniques, and provides a reliable alternative to current approaches for probing neural dynamics in a cocktail party scenario, as well as attention decoding in emerging applications such as smart hearing aids. Our proposed methodology provides a framework for efficient inference of Gaussian mixture process noise models, with application to a wide range of biological data with underlying heterogeneous and latent dynamics.
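
To make the model class concrete, here is a minimal sketch of the generative side only: a latent state driven by a two-component Gaussian mixture process noise (small fluctuations plus occasional abrupt jumps). All parameter values are illustrative assumptions; the paper's contribution, the EM-based inference of these parameters, is only indicated by a comment.

```python
# Simulate a state-space model with Gaussian mixture process noise.
import numpy as np

rng = np.random.default_rng(2)
T = 500
p_jump = 0.05                      # mixture weight of the "jump" component (assumed)
sigma_small, sigma_jump = 0.05, 1.0

x = np.zeros(T)                    # latent state, e.g., one TRF coefficient over time
for t in range(1, T):
    # Process noise drawn from the mixture: mostly small, occasionally large
    sigma = sigma_jump if rng.random() < p_jump else sigma_small
    x[t] = x[t - 1] + sigma * rng.standard_normal()

y = x + 0.2 * rng.standard_normal(T)   # noisy observations
# An EM procedure, as in the paper, would alternate state estimation (E-step)
# with re-estimation of (p_jump, sigma_small, sigma_jump) (M-step).
```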
Source
http://dx.doi.org/10.1371/journal.pcbi.1008172
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7485982

Real-Time Tracking of Magnetoencephalographic Neuromarkers during a Dynamic Attention-Switching Task.

Annu Int Conf IEEE Eng Med Biol Soc 2019 Jul;2019:4148-4151

In the last few years, a large number of experiments have been focused on exploring the possibility of using non-invasive techniques, such as electroencephalography (EEG) and magnetoencephalography (MEG), to identify auditory-related neuromarkers which are modulated by attention. Results from several studies where participants listen to a story narrated by one speaker, while trying to ignore a different story narrated by a competing speaker, suggest the feasibility of extracting neuromarkers that demonstrate enhanced phase locking to the attended speech stream. These promising findings have the potential to be used in clinical applications, such as EEG-driven hearing aids. One major challenge in achieving this goal is the need to devise an algorithm capable of tracking these neuromarkers in real-time when individuals are given the freedom to repeatedly switch attention among speakers at will. Here we present an algorithm pipeline that is designed to efficiently recognize changes of neural speech tracking during a dynamic-attention switching task and to use them as an input for a near real-time state-space model that translates these neuromarkers into attentional state estimates with a minimal delay. This algorithm pipeline was tested with MEG data collected from participants who had the freedom to change the focus of their attention between two speakers at will. Results suggest the feasibility of using our algorithm pipeline to track changes of attention in near-real time in a dynamic auditory scene.
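
To make the front end of such a pipeline concrete, below is a minimal sketch of turning windowed correlations between a decoded (envelope-reconstructed) neural signal and the two speakers' envelopes into per-window attention labels. The signal names are hypothetical stand-ins, and the paper's near-real-time state-space stage is only indicated by a comment.

```python
# Sliding-window attention labeling from envelope correlations.
import numpy as np

def attended_speaker(decoded, env1, env2, fs=100, win_s=5.0, step_s=1.0):
    win, step = int(win_s * fs), int(step_s * fs)
    labels = []
    for start in range(0, len(decoded) - win, step):
        s = slice(start, start + win)
        r1 = np.corrcoef(decoded[s], env1[s])[0, 1]   # match to speaker 1
        r2 = np.corrcoef(decoded[s], env2[s])[0, 1]   # match to speaker 2
        labels.append(1 if r1 >= r2 else 2)
    return np.array(labels)  # a state-space model would smooth these labels
                             # into attentional-state estimates with low delay

rng = np.random.default_rng(3)
e1, e2 = rng.standard_normal(6000), rng.standard_normal(6000)
dec = 0.5 * e1 + rng.standard_normal(6000)  # toy listener attending speaker 1
print(attended_speaker(dec, e1, e2)[:10])
```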
Source
http://dx.doi.org/10.1109/EMBC.2019.8857953
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7067200

Mutual information analysis of neural representations of speech in noise in the aging midbrain.

J Neurophysiol 2019 12 9;122(6):2372-2387. Epub 2019 Oct 9.

Department of Electrical and Computer Engineering, University of Maryland, College Park, Maryland.

Younger adults with normal hearing can typically understand speech in the presence of a competing speaker without much effort, but this ability to understand speech in challenging conditions deteriorates with age. Older adults, even with clinically normal hearing, often have problems understanding speech in noise. Earlier auditory studies using the frequency-following response (FFR), primarily believed to be generated by the midbrain, demonstrated age-related neural deficits when analyzed with traditional measures. Here we use a mutual information paradigm to analyze the FFR to speech (masked by a competing speech signal) by estimating the amount of stimulus information contained in the FFR. Our results show, first, a broadband informational loss associated with aging for both FFR amplitude and phase. Second, this age-related loss of information is more severe in higher-frequency FFR bands (several hundred hertz). Third, the mutual information between the FFR and the stimulus decreases as noise level increases for both age groups. Fourth, older adults benefit neurally, i.e., show a reduction in loss of information, when the speech masker is changed from meaningful (talker speaking a language that they can comprehend, such as English) to meaningless (talker speaking a language that they cannot comprehend, such as Dutch). This benefit is not seen in younger listeners, which suggests that age-related informational loss may be more severe when the speech masker is meaningful than when it is meaningless. In summary, as a method, mutual information analysis can unveil new results that traditional measures may not have enough statistical power to assess.

Older adults, even with clinically normal hearing, often have problems understanding speech in noise. Auditory studies using the frequency-following response (FFR) have demonstrated age-related neural deficits with traditional methods. Here we use a mutual information paradigm to analyze the FFR to speech masked by competing speech. Results confirm those from traditional analysis but additionally show that older adults benefit neurally when the masker changes from a language that they comprehend to a language they cannot.
Source
http://dx.doi.org/10.1152/jn.00270.2019
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6957367

Speech-in-noise representation in the aging midbrain and cortex: Effects of hearing loss.

PLoS One 2019 Mar 13;14(3):e0213899. Epub 2019 Mar 13.

Neuroscience and Cognitive Science Program, University of Maryland, College Park, MD, United States of America.

Age-related deficits in speech-in-noise understanding pose a significant problem for older adults. Despite the vast number of studies conducted to investigate the neural mechanisms responsible for these communication difficulties, the role of central auditory deficits, beyond peripheral hearing loss, remains unclear. The current study builds upon our previous work that investigated the effect of aging on normal-hearing individuals and aims to estimate the effect of peripheral hearing loss on the representation of speech in noise in two critical regions of the aging auditory pathway: the midbrain and cortex. Data from 14 hearing-impaired older adults were added to a previously published dataset of 17 normal-hearing younger adults and 15 normal-hearing older adults. The midbrain response, measured by the frequency-following response (FFR), and the cortical response, measured with magnetoencephalography (MEG), were recorded from subjects listening to speech in quiet and in noise at four signal-to-noise ratios (SNRs): +3, 0, -3, and -6 dB. Both groups of older listeners showed weaker midbrain response amplitudes and overrepresentation of cortical responses compared to younger listeners. No significant differences were found between the two older groups when the midbrain and cortical measurements were analyzed independently. However, significant differences between the older groups were found when investigating the midbrain-cortex relationships; that is, only hearing-impaired older adults showed significant correlations between midbrain and cortical measurements, suggesting that hearing loss may alter reciprocal connections between lower and higher levels of the auditory pathway. The overall paucity of differences in midbrain or cortical responses between the two older groups suggests that age-related temporal processing deficits may contribute to older adults' communication difficulties beyond what might be predicted from peripheral hearing loss alone; however, hearing loss does seem to alter the connectivity between midbrain and cortex. These results may have important ramifications for the field of audiology, as they indicate that algorithms in clinical devices, such as hearing aids, should consider age-related temporal processing deficits to maximize user benefit.
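
The midbrain-cortex relationship reported here is an across-subject correlation. A minimal sketch of that style of test, with simulated per-subject values and a permutation test in place of a parametric one, is shown below; all numbers are illustrative.

```python
# Across-subject correlation between a midbrain and a cortical measure,
# with a permutation test for significance (simulated data).
import numpy as np

rng = np.random.default_rng(4)
midbrain = rng.standard_normal(14)                        # e.g., FFR amplitude per subject
cortex = 0.7 * midbrain + 0.5 * rng.standard_normal(14)   # e.g., MEG envelope measure

r_obs = np.corrcoef(midbrain, cortex)[0, 1]
null = [np.corrcoef(rng.permutation(midbrain), cortex)[0, 1]
        for _ in range(10_000)]                           # break the pairing
p = np.mean(np.abs(null) >= abs(r_obs))                   # two-sided p value
print(f"r = {r_obs:.2f}, permutation p = {p:.4f}")
```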
Source
http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0213899
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6415857

Over-representation of speech in older adults originates from early response in higher order auditory cortex.

Acta Acust United Acust 2018 Sep-Oct;104(5):774-777

Institute for Systems Research, University of Maryland, College Park, Maryland.

Previous research has found that, paradoxically, while older adults have more difficulty comprehending speech in challenging circumstances than younger adults, their brain responses track the envelope of the acoustic signal more robustly. Here we investigate this puzzle by using magnetoencephalography (MEG) source localization to determine the anatomical origin of this difference. Our results indicate that this robust tracking in older adults does not arise merely from having the same responses as younger adults but with larger amplitudes; instead, they recruit additional regions, inferior to core auditory cortex, with a short latency of ~30 ms relative to the acoustic signal.
Source
http://dx.doi.org/10.3813/AAA.919221
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6343850

Closed Loop Microfabricated Facial Reanimation Device Coupling EMG-Driven Facial Nerve Stimulation with a Chronically Implanted Multichannel Cuff Electrode.

Annu Int Conf IEEE Eng Med Biol Soc 2018 Jul;2018:2206-2209

Permanent facial paralysis and paresis (FP) result from damage to the facial nerve (FN) and constitute a debilitating condition with substantial functional and psychological consequences for the patient. Unfortunately, surgeons have few tools with which they can satisfactorily reanimate the face. Current strategies employ static options (e.g., implantation of nonmuscular material in the face to aid in function/cosmesis) and dynamic options (e.g., gracilis myoneurovascular free tissue transfer) to partially restore volitional facial function and cosmesis. Here, we propose a novel neuroprosthetic approach for facial reanimation that utilizes electromyographic (EMG) input coupled to a chronically implanted multichannel cuff electrode (MCE) to restore instantaneous, volitional, and selective hemifacial movement in a feline model. To accomplish this goal, we developed a single-channel EMG-driven current source coupled with a chronically implanted MCE via a portable microprocessor board. Our results demonstrate a successful feasibility trial in which human EMG input resulted in FN stimulation with subsequent concentric contraction of discrete regions of a feline face.
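
The closed-loop logic (EMG in, stimulation out) can be sketched without any hardware. In the sketch below the I/O side is deliberately abstracted: `deliver_pulse` is a hypothetical placeholder for routing current to the selected MCE channels, and the threshold and smoothing constants are invented.

```python
# Closed-loop skeleton: rectify and smooth an EMG stream, trigger stimulation
# on threshold crossing. Logic only; no real acquisition or stimulator I/O.
import numpy as np

THRESHOLD = 0.3   # illustrative; would be calibrated per subject
ALPHA = 0.05      # smoothing constant of the envelope follower

def run_loop(samples, deliver_pulse):
    envelope = 0.0
    for s in samples:
        envelope = (1 - ALPHA) * envelope + ALPHA * abs(s)  # rectified + smoothed EMG
        if envelope > THRESHOLD:
            deliver_pulse()      # hypothetical: drive the selected MCE channel(s)

rng = np.random.default_rng(5)
run_loop(0.2 * rng.standard_normal(1000), lambda: None)     # dry run on toy EMG
```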
Source
http://dx.doi.org/10.1109/EMBC.2018.8512778

Tone-Evoked Acoustic Change Complex (ACC) Recorded in a Sedated Animal Model.

J Assoc Res Otolaryngol 2018 08 10;19(4):451-466. Epub 2018 May 10.

Department of Otolaryngology, University of California at Irvine, Irvine, CA, 92697-5310, USA.

The acoustic change complex (ACC) is a scalp-recorded cortical evoked potential complex generated in response to changes (e.g., frequency, amplitude) in an auditory stimulus. The ACC has been well studied in humans, but to our knowledge, no animal model had been evaluated. In particular, it was not known whether the ACC could be recorded under the conditions of sedation that likely would be necessary for recordings from animals. For that reason, we tested the feasibility of recording the ACC from sedated cats in response to changes of frequency and amplitude of pure-tone stimuli. Cats were sedated with ketamine and acepromazine, and subdermal needle electrodes were used to record electroencephalographic (EEG) activity. Tones were presented from a small loudspeaker located near the right ear. Continuous tones alternated at 500-ms intervals between two frequencies or two levels. Neurometric functions were created by recording neural response amplitudes while systematically varying the magnitude of frequency steps centered around octave frequencies of 2, 4, 8, and 16 kHz, all at 75 dB SPL, or of level steps around 75 dB SPL, tested at 4 and 8 kHz. The ACC could be recorded readily under this ketamine/acepromazine sedation. In contrast, the ACC could not be recorded reliably under any level of isoflurane anesthesia that was tested. The minimum frequency steps (expressed as Weber fractions, df/f) or level steps (expressed in dB) needed to elicit the ACC fell in the range of thresholds previously reported in animal psychophysical tests of discrimination. The success in recording the ACC in sedated animals suggests that the ACC will be a useful tool for evaluation of other aspects of auditory acuity in normal hearing and, presumably, in electrical cochlear stimulation, especially for novel stimulation modes that are not yet feasible in humans.
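
Since thresholds here are expressed as Weber fractions, a one-line worked example may help; the step size below is invented for illustration.

```python
# Worked example of a Weber fraction (df/f) for a frequency step.
base_f = 4000.0   # Hz, carrier frequency
step = 80.0       # Hz, hypothetical smallest step evoking a reliable ACC
weber_fraction = step / base_f
print(f"df/f = {weber_fraction:.3f}")   # 0.020, i.e., a 2% frequency change
```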
Source
http://dx.doi.org/10.1007/s10162-018-0673-9
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6081888

Neural source dynamics of brain responses to continuous stimuli: Speech processing from acoustics to comprehension.

Neuroimage 2018 05 3;172:162-174. Epub 2018 Feb 3.

Institute for Systems Research, University of Maryland, College Park, MD, USA; Department of Electrical and Computer Engineering, University of Maryland, College Park, MD, USA; Department of Biology, University of Maryland, College Park, MD, USA.

Human experience often involves continuous sensory information that unfolds over time. This is true in particular for speech comprehension, where continuous acoustic signals are processed over seconds or even minutes. We show that brain responses to such continuous stimuli can be investigated in detail, for magnetoencephalography (MEG) data, by combining linear kernel estimation with minimum norm source localization. Previous research has shown that the requirement to average data over many trials can be overcome by modeling the brain response as a linear convolution of the stimulus and a kernel, or response function, and estimating a kernel that predicts the response from the stimulus. However, such analysis has been typically restricted to sensor space. Here we demonstrate that this analysis can also be performed in neural source space. We first computed distributed minimum norm current source estimates for continuous MEG recordings, and then computed response functions for the current estimate at each source element, using the boosting algorithm with cross-validation. Permutation tests can then assess the significance of individual predictor variables, as well as features of the corresponding spatio-temporal response functions. We demonstrate the viability of this technique by computing spatio-temporal response functions for speech stimuli, using predictor variables reflecting acoustic, lexical and semantic processing. Results indicate that processes related to comprehension of continuous speech can be differentiated anatomically as well as temporally: acoustic information engaged auditory cortex at short latencies, followed by responses over the central sulcus and inferior frontal gyrus, possibly related to somatosensory/motor cortex involvement in speech perception; lexical frequency was associated with a left-lateralized response in auditory cortex and subsequent bilateral frontal activity; and semantic composition was associated with bilateral temporal and frontal brain activity. We conclude that this technique can be used to study the neural processing of continuous stimuli in time and anatomical space with the millisecond temporal resolution of MEG. This suggests new avenues for analyzing neural processing of naturalistic stimuli, without the necessity of averaging over artificially short or truncated stimuli.
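
A minimal sketch of the boosting estimator in the spirit the abstract describes: start from a zero kernel and greedily increment whichever lag best reduces the residual. The cross-validation, permutation testing, and source-space machinery of the paper are omitted, and the step size and iteration count are assumptions.

```python
# Greedy "boosting" estimate of a response function (TRF) from a stimulus.
import numpy as np

def boost_trf(stim, resp, n_lags=30, delta=0.01, n_iter=500):
    X = np.column_stack([np.roll(stim, k) for k in range(n_lags)])  # lagged copies
    X[:n_lags] = 0                       # discard wrap-around samples
    trf = np.zeros(n_lags)
    resid = resp.copy()                  # residual starts as the full response
    for _ in range(n_iter):
        gains = X.T @ resid              # ~error reduction per lag (similar column norms)
        k = np.argmax(np.abs(gains))     # lag whose small step helps most
        step = delta * np.sign(gains[k])
        trf[k] += step
        resid -= step * X[:, k]
    return trf

rng = np.random.default_rng(6)
stim = rng.standard_normal(4000)
kernel = np.exp(-np.arange(30) / 6.0)                       # toy "true" TRF
resp = np.convolve(stim, kernel)[:4000] + rng.standard_normal(4000)
trf = boost_trf(stim, resp)             # recovers an exponential-like kernel
```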
Source
http://dx.doi.org/10.1016/j.neuroimage.2018.01.042
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC5910254

Effects of Amplification on Neural Phase Locking, Amplitude, and Latency to a Speech Syllable.

Ear Hear 2018 Jul/Aug;39(4):810-824

Department of Hearing and Speech Sciences, University of Maryland, College Park, Maryland, USA.

Objective: Older adults often have trouble adjusting to hearing aids when they start wearing them for the first time. Probe microphone measurements verify appropriate levels of amplification up to the tympanic membrane. Little is known, however, about the effects of amplification on auditory-evoked responses to speech stimuli during initial hearing aid use. The present study assesses the effects of amplification on neural encoding of a speech signal in older adults using hearing aids for the first time. It was hypothesized that amplification results in improved stimulus encoding (higher amplitudes, improved phase locking, and earlier latencies), with greater effects for the regions of the signal that are less audible.

Design: Thirty-seven adults, aged 60 to 85 years with mild to severe sensorineural hearing loss and no prior hearing aid use, were bilaterally fit with Widex Dream 440 receiver-in-the-ear hearing aids. Probe microphone measures were used to adjust the gain of the hearing aids and verify the fitting. Unaided and aided frequency-following responses and cortical auditory-evoked potentials to the stimulus /ga/ were recorded in sound field over the course of 2 days for three conditions: 65 dB SPL and 80 dB SPL in quiet, and 80 dB SPL in six-talker babble (+10 dB signal-to-noise ratio).

Results: Responses from midbrain were analyzed in the time regions corresponding to the consonant transition (18 to 68 ms) and the steady state vowel (68 to 170 ms). Generally, amplification increased phase locking and amplitude and decreased latency for the region and presentation conditions that had lower stimulus amplitudes, namely the transition region and the 65 dB SPL presentation level. Responses from cortex showed decreased latency for P1, but an unexpected decrease in N1 amplitude. Previous studies have demonstrated an exaggerated cortical representation of speech in older adults compared to younger adults, possibly because of an increase in neural resources necessary to encode the signal. Therefore, a decrease in N1 amplitude with amplification and with increased presentation level may suggest that amplification decreases the neural resources necessary for cortical encoding.

Conclusion: Increased phase locking and amplitude and decreased latency in midbrain suggest that amplification may improve neural representation of the speech signal in new hearing aid users. The improvement with amplification was also found in cortex, and, in particular, decreased P1 latencies and lower N1 amplitudes may indicate greater neural efficiency. Further investigations will evaluate changes in subcortical and cortical responses during the first 6 months of hearing aid use.
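
One common way to quantify the across-trial phase locking discussed above is inter-trial phase coherence. The sketch below shows that computation on simulated trials; it is offered as an illustration of the concept, not the study's exact metric.

```python
# Inter-trial phase coherence (a phase-locking value) on simulated trials.
import numpy as np
from scipy.signal import hilbert

rng = np.random.default_rng(7)
trials = rng.standard_normal((100, 500))           # trials x samples (toy data)
phase = np.angle(hilbert(trials, axis=1))          # instantaneous phase per trial
plv = np.abs(np.mean(np.exp(1j * phase), axis=0))  # 0 = random, 1 = perfect locking
print(plv.mean())                                  # near 0 for unstructured noise
```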
Source
http://dx.doi.org/10.1097/AUD.0000000000000538
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6014864

Development of Phase Locking and Frequency Representation in the Infant Frequency-Following Response.

J Speech Lang Hear Res 2017 Aug 22:1-12. Epub 2017 Aug 22.

Department of Hearing and Speech Sciences, University of Maryland, College Park.

Purpose: This study investigates the development of phase locking and frequency representation in infants using the frequency-following response to consonant-vowel syllables.

Method: The frequency-following response was recorded in 56 infants and 15 young adults to 2 speech syllables (/ba/ and /ga/), which were presented in randomized order to the right ear. Signal-to-noise ratio and Fsp analyses were used to verify that individual responses were present above the noise floor. Thirty-six and 39 infants met these criteria for the /ba/ or /ga/ syllables, respectively, and 31 infants met the criteria for both syllables. Data were analyzed to obtain measures of phase-locking strength and spectral magnitudes.

Results: Phase-locking strength to the fine structure in the consonant-vowel transition was higher in young adults than in infants, but phase locking at the fundamental frequency was equivalent between infants and adults. However, the spectral representation of the fundamental frequency was stronger in older infants than in either the younger infants or the adults.

Conclusion: Although spectral amplitudes changed during the first year of life, no changes were found with respect to phase locking to the stimulus envelope. These findings demonstrate the feasibility of obtaining these measures of phase locking and fundamental pitch strength in infants as young as 2 months of age.
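
The Method above mentions Fsp analysis for verifying that individual responses exceed the noise floor. Below is a minimal sketch of the Fsp statistic (after Elberling and Don) applied to simulated sweeps; all signal parameters are invented, and the study's exact implementation may differ.

```python
# Fsp: variance of the averaged response over time, relative to the
# across-sweep variance at a single time point (scaled by sweep count).
import numpy as np

def fsp(sweeps, point_index):
    """sweeps: (n_sweeps, n_samples) array of single-trial responses."""
    avg = sweeps.mean(axis=0)
    signal_var = avg.var()                     # variance of the average over time
    noise_var = sweeps[:, point_index].var()   # across-sweep variance at one point
    return signal_var / (noise_var / len(sweeps))

rng = np.random.default_rng(8)
n, t = 2000, 300
signal = np.sin(2 * np.pi * 110 * np.arange(t) / 5000)   # toy 110-Hz FFR-like signal
sweeps = signal + 3 * rng.standard_normal((n, t))        # buried in noise per sweep
print(f"Fsp ≈ {fsp(sweeps, point_index=150):.1f}")       # >> 1 implies a real response
```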
Source
http://dx.doi.org/10.1044/2017_JSLHR-H-16-0263
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC5831628

Effects of Stimulus Duration on Event-Related Potentials Recorded From Cochlear-Implant Users.

Ear Hear 2017 Nov/Dec;38(6):e389-e393

Department of Hearing and Speech Sciences and Neuroscience and Cognitive Science Program, University of Maryland, College Park, Maryland, USA; The Bionics Institute, East Melbourne, Victoria, Australia; and Department of Medical Bionics, University of Melbourne, Melbourne, Australia.

Objectives: Several studies have investigated the feasibility of using electrophysiology as an objective tool to efficiently map cochlear implants. A pervasive problem when measuring event-related potentials is the need to remove the direct-current (DC) artifact produced by the cochlear implant. Here, we describe how DC artifact removal can corrupt the response waveform and how the appropriate choice of stimulus duration may minimize this corruption.

Design: Event-related potentials were recorded to a synthesized vowel /a/ with a 170- or 400-ms duration.

Results: The P2 response, which occurs between 150 and 250 ms, was corrupted by the DC artifact removal algorithm for a 170-ms stimulus duration but was relatively uncorrupted for a 400-ms stimulus duration.

Conclusions: To avoid response waveform corruption from DC artifact removal, one should choose a stimulus duration such that the offset of the stimulus does not temporally coincide with the specific peak of interest. While our data have been analyzed with only one specific algorithm, we argue that the length of the stimulus may be a critical factor for any DC artifact removal algorithm.
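
The timing argument in the Conclusions reduces to a simple check: the stimulus offset (plus any edge effects of artifact removal around it) must not land inside the analysis window of the peak of interest. The window below is from the abstract; the edge margin is an assumption for illustration.

```python
# Check whether a stimulus offset falls near a peak's analysis window.
def offset_overlaps_peak(duration_ms, peak_window=(150, 250), margin_ms=50):
    lo, hi = peak_window
    return (lo - margin_ms) <= duration_ms <= (hi + margin_ms)

print(offset_overlaps_peak(170))   # True:  a 170-ms stimulus can corrupt P2
print(offset_overlaps_peak(400))   # False: a 400-ms stimulus leaves P2 clean
```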
Source
http://dx.doi.org/10.1097/AUD.0000000000000444
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC5659925

Effect of informational content of noise on speech representation in the aging midbrain and cortex.

J Neurophysiol 2016 11 7;116(5):2356-2367. Epub 2016 Sep 7.

Department of Hearing and Speech Sciences, University of Maryland, College Park, Maryland.

The ability to understand speech is significantly degraded by aging, particularly in noisy environments. One way that older adults cope with this hearing difficulty is through the use of contextual cues. Several behavioral studies have shown that older adults are better at following a conversation when the target speech signal has high contextual content or when the background distractor is not meaningful. Specifically, older adults gain significant benefit in focusing on and understanding speech if the background is spoken by a talker in a language that is not comprehensible to them (i.e., a foreign language). To understand better the neural mechanisms underlying this benefit in older adults, we investigated aging effects on midbrain and cortical encoding of speech in the presence of a single competing talker speaking in a language that is meaningful or meaningless to the listener (i.e., English vs. Dutch). Our results suggest that neural processing is strongly affected by the informational content of noise. Specifically, older listeners' cortical responses to the attended speech signal are less deteriorated when the competing speech signal is in an incomprehensible language than when it is in their native language. Conversely, temporal processing in the midbrain is affected by different backgrounds only during rapid changes in speech and only in younger listeners. Additionally, we found that cognitive decline is associated with an increase in cortical envelope tracking, suggesting an age-related overuse (or inefficient use) of cognitive resources that may explain older adults' difficulty in processing speech targets while trying to ignore interfering noise.
Source
http://dx.doi.org/10.1152/jn.00373.2016
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC5110638

Evidence of degraded representation of speech in noise, in the aging midbrain and cortex.

J Neurophysiol 2016 11 17;116(5):2346-2355. Epub 2016 Aug 17.

Department of Hearing and Speech Sciences, University of Maryland, College Park, Maryland.

Humans have a remarkable ability to track and understand speech in unfavorable conditions, such as in background noise, but speech understanding in noise does deteriorate with age. Results from several studies have shown that in younger adults, low-frequency auditory cortical activity reliably synchronizes to the speech envelope, even when the background noise is considerably louder than the speech signal. However, cortical speech processing may be limited by age-related decreases in the precision of neural synchronization in the midbrain. To understand better the neural mechanisms contributing to impaired speech perception in older adults, we investigated how aging affects midbrain and cortical encoding of speech presented in quiet and in the presence of a single competing talker. Our results suggest that central auditory temporal processing deficits in older adults manifest in both the midbrain and the cortex. Specifically, midbrain frequency-following responses to a speech syllable are more degraded in noise in older adults than in younger adults. This suggests a failure of the midbrain auditory mechanisms needed to compensate for the presence of a competing talker. Similarly, in cortical responses, older adults show larger reductions than younger adults in their ability to encode the speech envelope when a competing talker is added. Interestingly, older adults showed an exaggerated cortical representation of speech in both quiet and noise conditions, suggesting a possible imbalance between inhibitory and excitatory processes, or diminished network connectivity, that may impair their ability to encode speech efficiently.
Source
http://dx.doi.org/10.1152/jn.00372.2016
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC5110639

Robust decoding of selective auditory attention from MEG in a competing-speaker environment via state-space modeling.

Neuroimage 2016 Jan 4;124(Pt A):906-917. Epub 2015 Oct 4.

Department of Electrical and Computer Engineering, University of Maryland, College Park, MD 20742, USA; Institute for Systems Research, University of Maryland, College Park, MD 20742, USA.

The underlying mechanism of how the human brain solves the cocktail party problem is largely unknown. Recent neuroimaging studies, however, suggest salient temporal correlations between the auditory neural response and the attended auditory object. Using magnetoencephalography (MEG) recordings of the neural responses of human subjects, we propose a decoding approach for tracking the attentional state while subjects are selectively listening to one of the two speech streams embedded in a competing-speaker environment. We develop a biophysically-inspired state-space model to account for the modulation of the neural response with respect to the attentional state of the listener. The constructed decoder is based on a maximum a posteriori (MAP) estimate of the state parameters via the Expectation Maximization (EM) algorithm. Using only the envelope of the two speech streams as covariates, the proposed decoder enables us to track the attentional state of the listener with a temporal resolution of the order of seconds, together with statistical confidence intervals. We evaluate the performance of the proposed model using numerical simulations and experimentally measured evoked MEG responses from the human brain. Our analysis reveals considerable performance gains provided by the state-space model in terms of temporal resolution, computational complexity and decoding accuracy.
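
The smoothing idea behind such state-space decoders can be sketched with a scalar random-walk state tracked over windows of noisy attention evidence. The sketch below uses a simple Kalman filter purely to illustrate the concept; the paper's actual estimator is a MAP solution fit via EM, not this filter, and all parameter values are assumptions.

```python
# Scalar random-walk state + Kalman filter over noisy per-window evidence
# (e.g., correlation differences favoring speaker 1 when positive).
import numpy as np

def smooth_evidence(evidence, q=0.01, r=0.5):
    x, p = 0.0, 1.0              # state estimate and its variance
    out = []
    for z in evidence:
        p += q                   # predict: random-walk state, variance grows
        k = p / (p + r)          # Kalman gain
        x += k * (z - x)         # update with this window's evidence
        p *= 1 - k
        out.append(x)
    return np.array(out)

rng = np.random.default_rng(10)
true_state = np.r_[np.ones(50), -np.ones(50)]            # attention switch halfway
smoothed = smooth_evidence(true_state + rng.standard_normal(100))
```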
Source
http://dx.doi.org/10.1016/j.neuroimage.2015.09.048
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4652844

Effects of Aging on the Encoding of Dynamic and Static Components of Speech.

Ear Hear 2015 Nov-Dec;36(6):e352-63

Department of Hearing and Speech Sciences, University of Maryland, College Park, Maryland, USA; and Neuroscience and Cognitive Science Program, University of Maryland, College Park, Maryland, USA.

Objectives: The authors investigated aging effects on the envelope of the frequency following response to dynamic and static components of speech. Older adults frequently experience problems understanding speech, despite having clinically normal hearing. Improving audibility with hearing aids provides variable benefit, as amplification cannot restore the temporal precision degraded by aging. Previous studies have demonstrated age-related delays in subcortical timing specific to the dynamic, transition region of the stimulus. However, it is unknown whether this delay is mainly due to a failure to encode rapid changes in the formant transition because of central temporal processing deficits or as a result of cochlear damage that reduces audibility for the high-frequency components of the speech syllable. To investigate the nature of this delay, the authors compared subcortical responses in younger and older adults with normal hearing to the speech syllables /da/ and /a/, hypothesizing that the delays in peak timing observed in older adults are mainly caused by temporal processing deficits in the central auditory system.

Design: The frequency following response was recorded to the speech syllables /da/ and /a/ from 15 younger and 15 older adults with normal hearing, normal IQ, and no history of neurological disorders. Both speech syllables were presented binaurally with alternating polarities at 80 dB SPL at a rate of 4.3 Hz through electromagnetically shielded insert earphones. A vertical montage of four Ag-AgCl electrodes (Cz, active, forehead ground, and earlobe references) was used.

Results: The responses of older adults were significantly delayed with respect to younger adults for the transition and onset regions of the /da/ syllable and for the onset of the /a/ syllable. However, in contrast with the younger adults who had earlier latencies for /da/ than for /a/ (as was expected given the high-frequency energy in the /da/ stop consonant burst), latencies in older adults were not significantly different between the responses to /da/ and /a/. An unexpected finding was noted in the amplitude and phase dissimilarities between the two groups in the later part of the steady-state region, rather than in the transition region. This amplitude reduction may indicate prolonged neural recovery or response decay associated with a loss of auditory nerve fibers.

Conclusions: These results suggest that older adults' peak timing delays may arise from decreased synchronization to the onset of the stimulus due to reduced audibility, though the possible role of impaired central auditory processing cannot be ruled out. Conversely, a deterioration in temporal processing mechanisms in the auditory nerve, brainstem, or midbrain may be a factor in the sudden loss of synchronization in the later part of the steady-state response in older adults.
Source
http://dx.doi.org/10.1097/AUD.0000000000000193
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4839261

Decoding intra-limb and inter-limb kinematics during treadmill walking from scalp electroencephalographic (EEG) signals.

IEEE Trans Neural Syst Rehabil Eng 2012 Mar;20(2):212-9

Department of Kinesiology, University of Maryland, College Park, MD 20742, USA.

Brain-machine interface (BMI) research has largely been focused on the upper limb. Although restoration of gait function has been a long-standing focus of rehabilitation research, surprisingly very little has been done to decode the cortical neural networks involved in the guidance and control of bipedal locomotion. A notable exception is the work by Nicolelis' group at Duke University that decoded gait kinematics from chronic recordings from ensembles of neurons in primary sensorimotor areas in rhesus monkeys. Recently, we showed that gait kinematics from the ankle, knee, and hip joints during human treadmill walking can be inferred from the electroencephalogram (EEG) with decoding accuracies comparable to those using intracortical recordings. Here we show that both intra- and inter-limb kinematics from human treadmill walking can be achieved with high accuracy from as few as 12 electrodes using scalp EEG. Interestingly, forward and backward predictors from EEG signals lagging or leading the kinematics, respectively, showed different spatial distributions suggesting distinct neural networks for feedforward and feedback control of gait. Of interest is that average decoding accuracy across subjects and decoding modes was ~0.68±0.08, supporting the feasibility of EEG-based BMI systems for restoration of walking in patients with paralysis.
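
A minimal sketch of the time-lagged linear decoder family used across these gait-decoding studies is shown below: a window of past EEG samples is mapped to a joint angle by least squares. The data are simulated, and the preprocessing (artifact handling, filtering, electrode selection) described in the paper is omitted.

```python
# Time-lagged linear (Wiener-style) decoder of a joint angle from EEG.
import numpy as np

def fit_decoder(eeg, angle, n_lags=10):
    """eeg: (n_samples, n_channels); angle: (n_samples,) joint kinematics."""
    X = np.column_stack([np.roll(eeg, k, axis=0) for k in range(n_lags)])
    X, y = X[n_lags:], angle[n_lags:]          # drop wrap-around rows
    w, *_ = np.linalg.lstsq(X, y, rcond=None)  # least-squares decoder weights
    return w

rng = np.random.default_rng(9)
eeg = rng.standard_normal((5000, 12))          # 12 electrodes, as in the abstract
angle = np.convolve(eeg[:, 0], np.ones(10) / 10, mode="same") \
        + 0.1 * rng.standard_normal(5000)      # toy kinematics driven by channel 0
w = fit_decoder(eeg, angle)
```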
Source
http://dx.doi.org/10.1109/TNSRE.2012.2188304
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3355189

Restoration of whole body movement: toward a noninvasive brain-machine interface system.

IEEE Pulse 2012 Jan;3(1):34-7

Department of Electrical and Computer Engineering, University of Houston, Texas, USA.

This article highlights recent advances in the design of noninvasive neural interfaces based on the scalp electroencephalogram (EEG). The simplest of physical tasks, such as turning the page to read this article, requires an intense burst of brain activity. It happens in milliseconds and requires little conscious thought. But for amputees and stroke victims with diminished motor-sensory skills, this process can be difficult or impossible. Our team at the University of Maryland, in conjunction with the Johns Hopkins Applied Physics Laboratory (APL) and the University of Maryland School of Medicine, hopes to offer these people newfound mobility and dexterity. In separate research thrusts, we're using data gleaned from scalp EEG to develop reliable brain-machine interface (BMI) systems that could soon control modern devices such as prosthetic limbs or powered robotic exoskeletons.
Source
http://dx.doi.org/10.1109/MPUL.2011.2175635
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3357625

Towards a non-invasive brain-machine interface system to restore gait function in humans.

Annu Int Conf IEEE Eng Med Biol Soc 2011;2011:4588-91

Department of Kinesiology, University of Maryland, College Park, MD 20742, USA.

Before 2009, the feasibility of applying brain-machine interfaces (BMIs) to control prosthetic devices had been limited to upper limb prosthetics such as the DARPA modular prosthetic limb. Until recently, it was believed that the control of bipedal locomotion involved central pattern generators with little supraspinal control. Analysis of cortical dynamics with electroencephalography (EEG) was also prevented by the lack of analysis tools to deal with excessive signal artifacts associated with walking. Recently, Nicolelis and colleagues paved the way for the decoding of locomotion showing that chronic recordings from ensembles of cortical neurons in primary motor (M1) and primary somatosensory (S1) cortices can be used to decode bipedal kinematics in rhesus monkeys. However, neural decoding of bipedal locomotion in humans has not yet been demonstrated. This study uses non-invasive EEG signals to decode human walking in six nondisabled adults. Participants were asked to walk on a treadmill at their self-selected comfortable speed while receiving visual feedback of their lower limbs, to repeatedly avoid stepping on a strip drawn on the treadmill belt. Angular kinematics of the left and right hip, knee and ankle joints and EEG were recorded concurrently. Our results support the possibility of decoding human bipedal locomotion with EEG. The average of the correlation values (r) between predicted and recorded kinematics for the six subjects was 0.7 (± 0.12) for the right leg and 0.66 (± 0.11) for the left leg. The average signal-to-noise ratio (SNR) values for the predicted parameters were 3.36 (± 1.89) dB for the right leg and 2.79 (± 1.33) dB for the left leg. These results show the feasibility of developing non-invasive neural interfaces for volitional control of devices aimed at restoring human gait function.
Source
http://dx.doi.org/10.1109/IEMBS.2011.6091136

Neural decoding of treadmill walking from noninvasive electroencephalographic signals.

J Neurophysiol 2011 Oct 13;106(4):1875-87. Epub 2011 Jul 13.

Neural Engineering and Smart Prosthetics Research Laboratory, Department of Kinesiology, School of Public Health, University of Maryland, College Park, MD 20742, USA.

Chronic recordings from ensembles of cortical neurons in primary motor and somatosensory areas in rhesus macaques provide accurate information about bipedal locomotion (Fitzsimmons NA, Lebedev MA, Peikon ID, Nicolelis MA. Front Integr Neurosci 3: 3, 2009). Here we show that the linear and angular kinematics of the ankle, knee, and hip joints during both normal and precision (attentive) human treadmill walking can be inferred from noninvasive scalp electroencephalography (EEG) with decoding accuracies comparable to those from neural decoders based on multiple single-unit activities (SUAs) recorded in nonhuman primates. Six healthy adults were recorded. Participants were asked to walk on a treadmill at their self-selected comfortable speed while receiving visual feedback of their lower limbs (i.e., precision walking), to repeatedly avoid stepping on a strip drawn on the treadmill belt. Angular and linear kinematics of the left and right hip, knee, and ankle joints and EEG were recorded, and neural decoders were designed and optimized with cross-validation procedures. Of note, the optimal set of electrodes for these decoders was also used to accurately infer gait trajectories in a normal walking task that did not require subjects to control and monitor their foot placement. Our results indicate a high involvement of a fronto-posterior cortical network in the control of both precision and normal walking and suggest that EEG signals can be used to study in real time the cortical dynamics of walking and to develop brain-machine interfaces aimed at restoring human gait function.
Source
http://dx.doi.org/10.1152/jn.00104.2011
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3296428

Multi-limb acquisition of motor evoked potentials and its application in spinal cord injury.

J Neurosci Methods 2010 Nov 9;193(2):210-6. Epub 2010 Sep 9.

Department of Biomedical Engineering, Johns Hopkins University School of Medicine, Baltimore, MD 21205, USA.

The motor evoked potential (MEP) is an electrical response of peripheral neuro-muscular pathways to stimulation of the motor cortex. MEPs provide objective assessment of electrical conduction through the associated neural pathways, and therefore detect disruption due to a nervous system injury such as spinal cord injury (SCI). In our studies of SCI, we developed a novel, multi-channel set-up for MEP acquisition in rat models. Unlike existing electrophysiological systems for SCI assessment, the set-up allows for multi-channel MEP acquisition from all limbs of rats and enables longitudinal monitoring of injury and treatment for in vivo models of experimental SCI. The article describes the development of the set-up and discusses its capabilities to acquire MEPs in rat models of SCI. We demonstrate its use for MEP acquisition under two types of anesthesia as well as a range of cortical stimulation parameters, identifying parameters yielding consistent and reliable MEPs. To validate our set-up, MEPs were recorded from a group of 10 rats before and after contusive SCI. Upon contusion with moderate severity (12.5 mm impact height), MEP amplitude decreased by 91.36±6.03%. A corresponding decline of 93.8±11.4% was seen in the motor behavioral score (BBB), a gold standard in rodent models of SCI.
Source
http://dx.doi.org/10.1016/j.jneumeth.2010.08.017

Auditory steady-state responses to 40-Hz click trains: relationship to middle latency, gamma band and beta band responses studied with deconvolution.

Clin Neurophysiol 2010 Sep 21;121(9):1540-1550. Epub 2010 Apr 21.

Department of Biomedical Engineering, University of Miami, Coral Gables, FL 33146, USA; Department of Otolaryngology, Pediatrics and Neuroscience Program (Graduate), Miller School of Medicine, University of Miami, Miami, FL 33146, USA.

Objective: The nature of the auditory steady-state responses (ASSR) evoked with 40-Hz click trains and their relationship to auditory brainstem and middle latency responses (ABR/MLR), gamma band responses (GBR) and beta band responses (BBR) were investigated using superposition theory. Transient responses obtained by continuous loop averaging deconvolution (CLAD) and last click responses (LCR) were used to synthesize ASSRs and GBRs.

Methods: ASSRs were obtained with trains of low-jitter 40-Hz clicks presented monaurally and deconvolved using a modified CLAD. The resulting transient responses and modified LCRs were used to predict the ASSRs and the GBRs.

Results: The ABR/MLR obtained with deconvolution accurately predicted the steady-state portion of the ASSR but failed to predict its onset portion. The modified LCR failed to fully predict both portions. The GBRs were predicted by narrow-band filtering of the ASSRs. Significant BBR activity was found in both the ASSRs and the deconvolved ABR/MLRs.

Conclusions: Simulations using deconvolved ABR/MLRs obtained at 40 Hz fully predict the steady-state portion but not the onset portion of the ASSRs, thus confirming the superposition theory.

Significance: Click rate adaptation plays a significant role in ASSR generation with click trains and should be considered in evaluating convolved response generation theories.
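
The superposition test itself can be sketched in a few lines: convolve a transient (ABR/MLR-like) response with the 40-Hz click train and compare the synthesized waveform against the recorded steady state. Everything below is simulated; the sampling rate and the kernel shape are assumptions.

```python
# Synthesize an ASSR by superposition of a transient response at 40 Hz.
import numpy as np

fs = 5000                                    # Hz, assumed sampling rate
t = np.arange(0, 0.05, 1 / fs)               # 50-ms transient response window
transient = np.exp(-t / 0.01) * np.sin(2 * np.pi * 80 * t)  # toy ABR/MLR-like kernel

clicks = np.zeros(fs)                        # 1 s of stimulation
clicks[:: fs // 40] = 1                      # 40 clicks per second
synthesized_assr = np.convolve(clicks, transient)[: len(clicks)]
# Superposition holds where synthesized_assr matches the measured steady state;
# per the Results, the onset portion is where the match breaks down.
```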
Source
http://dx.doi.org/10.1016/j.clinph.2010.03.020