Publications by authors named "Andrew Dimitrijevic"

31 Publications

Local magnetic delivery of adeno-associated virus AAV2(quad Y-F)-mediated BDNF gene therapy restores hearing after noise injury.

Mol Ther 2021 Jul 21. Epub 2021 Jul 21.

Biological Sciences Platform, Hurvitz Brain Sciences Program, Sunnybrook Research Institute, Toronto, ON M4N 3M5, Canada; Department of Otolaryngology-Head & Neck Surgery, Faculty of Medicine, University of Toronto, ON M5S 1A1, Canada.

Moderate noise exposure may cause acute loss of cochlear synapses without affecting the cochlear hair cells and hearing threshold; thus, it remains "hidden" to standard clinical tests. This cochlear synaptopathy is one of the main pathologies of noise-induced hearing loss (NIHL). There is no effective treatment for NIHL, mainly because of the lack of a proper drug-delivery technique. We hypothesized that local magnetic delivery of gene therapy into the inner ear could be beneficial for NIHL. In this study, we used superparamagnetic iron oxide nanoparticles (SPIONs) and a recombinant adeno-associated virus (AAV) vector (AAV2(quad Y-F)) to deliver brain-derived neurotrophic factor (BDNF) gene therapy into the rat inner ear via minimally invasive magnetic targeting. We found that the magnetic targeting effectively accumulates and distributes the SPION-tagged AAV2(quad Y-F)-BDNF vector into the inner ear. We also found that AAV2(quad Y-F) efficiently transfects cochlear hair cells and enhances BDNF gene expression. Enhanced BDNF gene expression substantially recovers noise-induced BDNF gene downregulation, auditory brainstem response (ABR) wave I amplitude reduction, and synapse loss. These results suggest that magnetic targeting of AAV2(quad Y-F)-mediated BDNF gene therapy could reverse cochlear synaptopathy after NIHL.
http://dx.doi.org/10.1016/j.ymthe.2021.07.013
July 2021

Cortical alpha oscillations in cochlear implant users reflect subjective listening effort during speech-in-noise perception.

PLoS One 2021;16(7):e0254162. Epub 2021 Jul 9.

Evaluative Clinical Sciences Platform, Sunnybrook Research Institute, Toronto, Ontario, Canada.

Listening to speech in noise is effortful for individuals with hearing loss, even if they have received a hearing prosthesis such as a hearing aid or cochlear implant (CI). At present, little is known about the neural functions that support listening effort. One form of neural activity that has been suggested to reflect listening effort is the power of 8-12 Hz (alpha) oscillations measured by electroencephalography (EEG). Alpha power in two cortical regions, the left inferior frontal gyrus (IFG) and parietal cortex, has been associated with effortful listening, but these relationships have not been examined in the same listeners. Further, few studies have investigated neural correlates of effort in individuals with cochlear implants. Here we tested 16 CI users in a novel effort-focused speech-in-noise listening paradigm and confirmed a relationship between alpha power and self-reported effort ratings in parietal regions, but not left IFG. The parietal relationship was not linear but quadratic: alpha power was comparatively lower when effort ratings were at the top and bottom of the effort scale, and higher when effort ratings were in the middle of the scale. Results are discussed in terms of the cognitive systems engaged in difficult listening situations and the implications for clinical translation.
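The quadratic (inverted-U) relationship described above can be illustrated by comparing a linear against a quadratic fit. This is a sketch on simulated data, not the study's analysis; all variable names and values are invented.

```python
import numpy as np

# Simulated per-trial data: self-reported effort (1-10 scale) and parietal
# alpha power, with an inverted-U relationship (lower at the scale's extremes)
rng = np.random.default_rng(0)
effort = rng.uniform(1, 10, 200)
alpha_power = -0.2 * (effort - 5.5) ** 2 + 3.0 + rng.normal(0, 0.3, effort.size)

# Fit first- and second-order polynomials and compare goodness of fit
lin_coef = np.polyfit(effort, alpha_power, 1)
quad_coef = np.polyfit(effort, alpha_power, 2)

def r_squared(y, y_hat):
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

r2_lin = r_squared(alpha_power, np.polyval(lin_coef, effort))
r2_quad = r_squared(alpha_power, np.polyval(quad_coef, effort))
print(f"linear R^2 = {r2_lin:.3f}, quadratic R^2 = {r2_quad:.3f}")
```

A negative leading coefficient in the quadratic fit corresponds to the reported pattern: alpha power peaks at intermediate effort ratings.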
http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0254162
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8270138
July 2021

Neural correlates of visual stimulus encoding and verbal working memory differ between cochlear implant users and normal-hearing controls.

Eur J Neurosci 2021 Aug 9;54(3):5016-5037. Epub 2021 Jul 9.

Evaluative Clinical Sciences Platform, Sunnybrook Research Institute, Toronto, Ontario, Canada.

A common concern for individuals with severe-to-profound hearing loss fitted with cochlear implants (CIs) is difficulty following conversations in noisy environments. Recent work has suggested that these difficulties are related to individual differences in brain function, including verbal working memory and the degree of cross-modal reorganization of auditory areas for visual processing. However, the neural basis for these relationships is not fully understood. Here, we investigated neural correlates of visual verbal working memory and sensory plasticity in 14 CI users and age-matched normal-hearing (NH) controls. While we recorded the high-density electroencephalogram (EEG), participants completed a modified Sternberg visual working memory task in which sets of letters and numbers were presented visually and then recalled after a delay. Results suggested that CI users had behavioural working memory performance comparable to that of NH controls. However, CI users had more pronounced neural activity during visual stimulus encoding, including stronger visual-evoked activity in auditory and visual cortices, larger modulations of neural oscillations, and increased frontotemporal connectivity. In contrast, during memory retention of the characters, CI users had descriptively weaker neural oscillations and significantly lower frontotemporal connectivity. We interpret the differences in neural correlates of visual stimulus processing in CI users through the lens of cross-modal and intramodal plasticity.
http://dx.doi.org/10.1111/ejn.15365
August 2021

Poor early cortical differentiation of speech predicts perceptual difficulties of severely hearing-impaired listeners in multi-talker environments.

Sci Rep 2020 Apr 9;10(1):6141. Epub 2020 Apr 9.

Evaluative Clinical Sciences Platform, Sunnybrook Research Institute, Toronto, ON, M4N 3M5, Canada.

Hearing impairment disrupts processes of selective attention that help listeners attend to one sound source over competing sounds in the environment. Hearing prostheses (hearing aids and cochlear implants, CIs) do not fully remedy these issues. In normal hearing, mechanisms of selective attention arise through the facilitation and suppression of neural activity that represents sound sources. However, it is unclear how hearing impairment affects these neural processes, which is key to understanding why listening difficulty remains. Here, severely impaired listeners treated with a CI, and age-matched normal-hearing controls, attended to one of two identical but spatially separated talkers while multichannel EEG was recorded. Whereas neural representations of attended and ignored speech were differentiated at early (~150 ms) cortical processing stages in controls, differentiation of talker representations only occurred later (~250 ms) in CI users. CI users, but not controls, also showed evidence for spatial suppression of the ignored talker through lateralized alpha (7-14 Hz) oscillations. However, CI users' perceptual performance was predicted only by early-stage talker differentiation. We conclude that multi-talker listening difficulty remains for impaired listeners due to deficits in early-stage separation of cortical speech representations, despite neural evidence that they use spatial information to guide selective attention.
http://dx.doi.org/10.1038/s41598-020-63103-7
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7145807
April 2020

Acoustic Change Responses to Amplitude Modulation in Cochlear Implant Users: Relationships to Speech Perception.

Front Neurosci 2020;14:124. Epub 2020 Feb 18.

Communication Sciences Research Center, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, United States.

Objectives: The ability to understand speech is highly variable in people with cochlear implants (CIs), and to date there are no objective measures that identify the root of this variability. However, behavioral measures of temporal processing, such as the temporal modulation transfer function (TMTF), have previously been found to be related to vowel and consonant identification in CI users. The acoustic change complex (ACC) is a cortical auditory-evoked potential that can be elicited by a "change" in an ongoing stimulus. In this study, the ACC elicited by a change in amplitude modulation (AM) was related to measures of speech perception as well as AM detection thresholds in CI users.

Methods: Ten CI users (mean age: 50 years) participated in this study. All subjects completed behavioral tests that included both speech measures and amplitude modulation detection to obtain a TMTF. CI users were categorized as "good" (n = 6) or "poor" (n = 4) performers based on their speech-in-noise score (<50%). Sixty-four-channel electroencephalographic recordings were conducted while CI users passively listened to AM change sounds presented in a free-field setting. The AM change stimulus was white noise with four different AM rates (4, 40, 100, and 300 Hz).
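An amplitude-modulated white-noise carrier of the kind described in the Methods can be sketched as follows. The sampling rate, duration, and sinusoidal modulator are illustrative assumptions, not the study's exact stimulus code.

```python
import numpy as np

def am_noise(fs=16000, dur=1.0, am_rate=40.0, depth=1.0, seed=0):
    """White-noise carrier with sinusoidal amplitude modulation.

    depth is the modulation index m in (1 + m*sin(2*pi*f*t)); m = 1 is 100% AM.
    """
    t = np.arange(int(fs * dur)) / fs
    carrier = np.random.default_rng(seed).standard_normal(t.size)
    envelope = 1.0 + depth * np.sin(2 * np.pi * am_rate * t)
    return envelope * carrier

# The study used AM rates of 4, 40, 100, and 300 Hz
stimuli = {rate: am_noise(am_rate=rate) for rate in (4, 40, 100, 300)}
for rate, stim in stimuli.items():
    print(rate, stim.shape)
```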

Results: Behavioral results show that AM detection thresholds in CI users were higher compared to the normal-hearing (NH) group for all AM rates. The electrophysiological data suggest that N1 responses were significantly decreased in amplitude and their latencies were increased in CI users compared to NH controls. In addition, the N1 latencies for the poor CI performers were delayed compared to the good CI performers. The N1 latency for 40 Hz AM was correlated with various speech perception measures.

Conclusion: Our data suggest that the ACC to AM change provides an objective index of speech perception abilities that can be used to explain some of the variation in speech perception observed among CI users.
http://dx.doi.org/10.3389/fnins.2020.00124
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7040081
February 2020

Neural indices of listening effort in noisy environments.

Sci Rep 2019 Aug 2;9(1):11278. Epub 2019 Aug 2.

Communication Sciences Research Center, Cincinnati Children's Hospital Medical Center, Cincinnati, USA.

Listening in a noisy environment is challenging for individuals with normal hearing and can be a significant burden for those with hearing impairment. The extent to which this burden is alleviated by a hearing device is a major, unresolved issue for rehabilitation. Here, we found that in adult users of cochlear implants (CIs), self-reported listening effort during a speech-in-noise task was positively related to alpha oscillatory activity in the left inferior frontal cortex (canonical Broca's area) and inversely related to speech-envelope coherence in the 2-5 Hz range originating in the superior temporal plane encompassing auditory cortex. Left frontal cortex coherence in the 2-5 Hz range also predicted speech-in-noise identification. These data demonstrate that neural oscillations predict both speech perception ability in noise and listening effort.
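Band-limited coherence between a speech envelope and a neural signal, as used above, can be sketched with scipy's magnitude-squared coherence on simulated signals. The sampling rate, coupling strength, and window length here are invented; the paper's actual source-space analysis is more involved.

```python
import numpy as np
from scipy.signal import coherence, butter, filtfilt

fs = 128  # Hz; an illustrative common rate for envelope and EEG
rng = np.random.default_rng(1)

# Simulated speech envelope, and a "cortical" signal that partially tracks
# only the envelope's slow (2-5 Hz) fluctuations
envelope = rng.standard_normal(fs * 60)
b, a = butter(4, [2, 5], btype="bandpass", fs=fs)
slow_env = filtfilt(b, a, envelope)
eeg = 0.5 * slow_env + rng.standard_normal(envelope.size)

# Magnitude-squared coherence, averaged over the 2-5 Hz band
f, cxy = coherence(envelope, eeg, fs=fs, nperseg=fs * 4)
band = (f >= 2) & (f <= 5)
band_coherence = cxy[band].mean()
print(f"2-5 Hz coherence: {band_coherence:.3f}")
```

Because the simulated coupling lives only in the 2-5 Hz band, coherence there exceeds coherence at higher frequencies, which hovers near the estimator's noise floor.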
http://dx.doi.org/10.1038/s41598-019-47643-1
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6677804
August 2019

Cortical Alpha Oscillations Predict Speech Intelligibility.

Front Hum Neurosci 2017;11:88. Epub 2017 Feb 24.

Communication Sciences Research Center, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, USA; Department of Otolaryngology, University of Cincinnati, Cincinnati, OH, USA.

Understanding speech in noise (SiN) is a complex task involving sensory encoding and cognitive resources, including working memory and attention. Previous work has shown that brain oscillations, particularly alpha rhythms (8-12 Hz), play important roles in sensory processes involving working memory and attention. However, no previous study has examined brain oscillations during performance of a continuous speech perception test. The aim of this study was to measure cortical alpha during attentive listening in a commonly used SiN task (digits-in-noise, DiN) to better understand the neural processes associated with "top-down" cognitive processing in adverse listening environments. We recruited 14 normal-hearing (NH) young adults. The DiN speech reception threshold (SRT) was measured in an initial behavioral experiment. EEG activity was then collected: (i) while performing the DiN near the SRT; and (ii) while attending to a silent, closed-caption video during presentation of identical digit stimuli that the participant was instructed to ignore. Three main results were obtained: (1) during attentive ("active") listening to the DiN, a number of distinct neural oscillations were observed (mainly alpha, 8-12 Hz, with some beta, 15-30 Hz), whereas no oscillations were observed during attention to the video ("passive" listening); (2) overall, alpha event-related synchronization (ERS) of central/parietal sources was observed during active listening when data were grand averaged across all participants, while in some participants a smaller-magnitude alpha event-related desynchronization (ERD), originating in temporal regions, was observed; and (3) when individual EEG trials were sorted according to correct and incorrect digit identification, the temporal alpha ERD was consistently greater on correctly identified trials. No such consistency was observed with the central/parietal alpha ERS. These data demonstrate that changes in alpha activity are specific to listening conditions. To our knowledge, this is the first report of almost no brain oscillatory changes during a passive task compared to an active task in any sensory modality. Temporal alpha ERD was related to correct digit identification.
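The alpha ERS/ERD measures discussed above follow a standard recipe: band-pass filter, Hilbert envelope, and percent change from a pre-stimulus baseline. The sketch below applies that recipe to simulated trials; the sampling rate, windows, and effect size are illustrative, not the study's parameters.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def event_related_power_change(trials, fs, band=(8, 12), baseline=(0.0, 0.5)):
    """ERS/ERD: percent change of band power relative to a baseline window.

    trials: (n_trials, n_samples) array; baseline in seconds from trial start.
    Positive values = synchronization (ERS); negative = desynchronization (ERD).
    """
    b, a = butter(4, band, btype="bandpass", fs=fs)
    power = np.abs(hilbert(filtfilt(b, a, trials, axis=-1), axis=-1)) ** 2
    mean_power = power.mean(axis=0)                    # average over trials
    i0, i1 = int(baseline[0] * fs), int(baseline[1] * fs)
    base = mean_power[i0:i1].mean()
    return 100.0 * (mean_power - base) / base

# Illustrative use: 20 noise trials with an alpha (10 Hz) burst after 1 s
fs = 250
rng = np.random.default_rng(2)
t = np.arange(2 * fs) / fs
trials = rng.standard_normal((20, t.size))
trials[:, fs:] += 2.0 * np.sin(2 * np.pi * 10 * t[fs:])
erd_ers = event_related_power_change(trials, fs)
print(f"late-window change: {erd_ers[fs:].mean():.1f}%")
```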
http://dx.doi.org/10.3389/fnhum.2017.00088
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC5323373
February 2017

Human Envelope Following Responses to Amplitude Modulation: Effects of Aging and Modulation Depth.

Ear Hear 2016 Sep-Oct;37(5):e322-35

Communication Sciences Research Center, Cincinnati Children's Hospital Medical Center, Cincinnati, Ohio, USA (currently at Department of Otolaryngology-Head and Neck Surgery, Sunnybrook Health Sciences Centre, University of Toronto, 2075 Bayview Avenue, Toronto, Ontario, M4N 3M5, Canada); Department of Otolaryngology, Biomedical Engineering and Cognitive Sciences, University of California, Irvine, California, USA; Institute of Biomaterials and Biomedical Engineering, University of Toronto, Toronto, Ontario, Canada; Rotman Research Institute, Baycrest Centre, Toronto, Ontario, Canada; National Centre for Audiology, Western University, London, Ontario, Canada; and School of Communication Sciences and Disorders, Western University, London, Ontario, Canada.

Objective: To record envelope following responses (EFRs) to monaural amplitude-modulated broadband noise carriers in which amplitude modulation (AM) depth was slowly changed over time and to compare these objective electrophysiological measures to subjective behavioral thresholds in young normal hearing and older subjects.

Design:

Participants: Three groups of subjects were included: a young normal-hearing group (YNH; 18 to 28 years; pure-tone average = 5 dB HL), a first older group ("O1"; 41 to 62 years; pure-tone average = 19 dB HL), and a second older group ("O2"; 67 to 82 years; pure-tone average = 35 dB HL). Electrophysiology: In condition 1, the AM depth (41 Hz) of a white noise carrier was continuously varied from 2% to 100% (5%/s). EFRs were analyzed as a function of AM depth. In condition 2, auditory steady-state responses were recorded to fixed AM depths (100%, 75%, 50%, and 25%) at a rate of 41 Hz. Psychophysics: A three-alternative forced-choice (3-AFC) procedure was used to track the AM depth needed to detect AM at 41 Hz (AM detection). The minimum AM depth capable of eliciting a statistically detectable EFR was defined as the physiological AM detection threshold.
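Condition 1's swept-depth stimulus (41 Hz AM on white noise, depth rising from 2% to 100% at 5%/s) can be sketched as below, assuming a sinusoidal modulator and an arbitrary 16 kHz sampling rate.

```python
import numpy as np

def swept_depth_am_noise(fs=16000, am_rate=41.0, start=0.02, stop=1.0,
                         sweep_rate=0.05, seed=0):
    """White noise whose AM depth sweeps linearly (e.g. 2% -> 100% at 5%/s)."""
    dur = (stop - start) / sweep_rate            # ~19.6 s for 2% -> 100%
    t = np.arange(int(fs * dur)) / fs
    depth = start + sweep_rate * t               # instantaneous modulation index
    carrier = np.random.default_rng(seed).standard_normal(t.size)
    return (1.0 + depth * np.sin(2 * np.pi * am_rate * t)) * carrier, depth

stim, depth = swept_depth_am_noise()
print(f"duration: {stim.size / 16000:.1f} s, final depth: {depth[-1]:.2f}")
```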

Results: Across all ages, the fixed-depth auditory steady-state response and the swept-depth EFR yielded similar response amplitudes. Statistically significant correlations (r = 0.48) were observed between behavioral and physiological AM detection thresholds. Older subjects had slightly, though not significantly, higher behavioral AM detection thresholds than younger subjects, and AM detection thresholds did not correlate with age. All groups showed a sigmoidal EFR amplitude versus AM depth function, but the shape of the function differed across groups. The O2 group reached EFR amplitude plateau levels at lower modulation depths than the normal-hearing group and had a narrower neural dynamic range. In the young normal-hearing group, EFR phase did not differ with AM depth, whereas in the older group, EFR phase showed a consistent decrease with increasing AM depth. The degree of phase change (phase slope) was significantly correlated with the pure-tone threshold at 4 kHz.
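A sigmoidal EFR amplitude versus AM depth function like the one described can be characterized by fitting a four-parameter logistic, from which plateau, midpoint, and slope fall out. The data and parameter values below are simulated, not taken from the study.

```python
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(depth, floor, amplitude, midpoint, slope):
    """EFR amplitude as a sigmoidal function of AM depth (%)."""
    return floor + amplitude / (1.0 + np.exp(-(depth - midpoint) / slope))

# Illustrative data: response amplitude growing then plateauing with AM depth
depths = np.array([5, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100], float)
efr = sigmoid(depths, 10, 60, 35, 8) + np.random.default_rng(3).normal(0, 2, depths.size)

params, _ = curve_fit(sigmoid, depths, efr, p0=[5, 50, 50, 10])
floor, amplitude, midpoint, slope = params
print(f"midpoint: {midpoint:.1f}% depth, plateau: {floor + amplitude:.1f}")
```

Group differences in dynamic range, as reported above, would show up as differences in the fitted midpoint and slope.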

Conclusions: EFRs can be recorded using either the swept-modulation-depth or the discrete-AM-depth technique. Sweep recordings may provide additional valuable information at suprathreshold intensities, including the plateau level, slope, and dynamic range. Older subjects had a reduced neural dynamic range compared with younger subjects, suggesting that aging affects the ability of the auditory system to encode subtle differences in AM depth. The phase-slope differences are likely related to differences in low- and high-frequency contributions to the EFR. The behavioral-physiological AM depth threshold relationship was significant but likely too weak to be clinically useful in the present subjects, who had no apparent temporal processing deficits.
http://dx.doi.org/10.1097/AUD.0000000000000324
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC5031488
January 2018

Auditory cortical activity to different voice onset times in cochlear implant users.

Clin Neurophysiol 2016 Feb 10;127(2):1603-1617. Epub 2015 Nov 10.

Communication Sciences Research Center, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, USA; Department of Otolaryngology, College of Medicine, University of Cincinnati, Cincinnati, OH, USA.

Objective: Voice onset time (VOT) is a critical temporal cue for perception of speech in cochlear implant (CI) users. We assessed the cortical auditory evoked potentials (CAEPs) to consonant vowels (CVs) with varying VOTs and related these potentials to various speech perception measures.

Methods: CAEPs were recorded from 64 scalp electrodes during passive listening in CI and normal-hearing (NH) groups. Speech stimuli were synthesized CVs from a 6-step VOT /ba/-/pa/ continuum ranging from 0 to 50 ms VOT in 10-ms steps. Behavioral measures included the 50% boundary point for categorical perception ("ba" to "pa") from an active condition task.

Results: Behavioral measures: CI users with poor speech perception performance had prolonged 50% VOT boundary points compared to NH subjects. The 50% boundary point was also significantly correlated with the ability to discriminate consonants in quiet and in noise masking. Electrophysiology: The most striking difference between the NH and CI subjects was that the P2 response was significantly reduced in amplitude in the CI group compared to NH. N1 amplitude did not differ between NH and CI groups. P2 latency increased with increasing VOT for both NH and CI groups, and P2 was delayed more in CI users with poor speech perception compared to NH subjects. N1 amplitude was significantly related to consonant perception in noise, while P2 latency was significantly related to vowel perception in noise. When dipole source modeling in auditory cortex was used to characterize N1/P2, more significant relationships with speech perception measures were observed than for the same N1/P2 activity measured at the scalp. N1 dipole amplitude was significantly correlated with consonant-in-noise discrimination. Like N1, the P2 dipole amplitude was correlated with consonant discrimination, but additional significant relationships were observed with sentence and word identification.
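The 50% VOT boundary point used in these analyses is conventionally estimated by fitting a psychometric (logistic) function to identification data across the continuum. A sketch with made-up response proportions:

```python
import numpy as np
from scipy.optimize import curve_fit

def psychometric(vot, boundary, slope):
    """Probability of a 'pa' (voiceless) response as a function of VOT (ms)."""
    return 1.0 / (1.0 + np.exp(-(vot - boundary) / slope))

# Illustrative identification data on a 6-step /ba/-/pa/ continuum (0-50 ms VOT)
vot_steps = np.array([0, 10, 20, 30, 40, 50], float)
p_pa = np.array([0.02, 0.08, 0.35, 0.80, 0.95, 0.99])

(boundary, slope), _ = curve_fit(psychometric, vot_steps, p_pa, p0=[25, 5])
print(f"50% VOT boundary: {boundary:.1f} ms")
```

A prolonged boundary, as reported for the poorer CI performers, would appear as a larger fitted `boundary` value.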

Conclusions: P2 responses to a VOT continuum stimulus were different between NH subjects and CI users. P2 responses show more significant relationships with speech perception than N1 responses.

Significance: The current findings indicate that N1/P2 measures during a passive listening task relate to speech perception outcomes after cochlear implantation.
http://dx.doi.org/10.1016/j.clinph.2015.10.049
February 2016

Characterizing Information Flux Within the Distributed Pediatric Expressive Language Network: A Core Region Mapped Through fMRI-Constrained MEG Effective Connectivity Analyses.

Brain Connect 2016 Feb 2;6(1):76-83. Epub 2015 Dec 2.

Pediatric Neuroimaging Research Consortium, Cincinnati Children's Hospital Medical Center, Cincinnati, Ohio.

Using noninvasive neuroimaging, researchers have shown that young children have bilateral and diffuse language networks, which become increasingly left lateralized and focal with development. Connectivity within the distributed pediatric language network has been minimally studied, and conventional neuroimaging approaches do not distinguish task-related signal changes from those that are task essential. In this study, we propose a novel multimodal method to map core language sites from patterns of information flux. We retrospectively analyze neuroimaging data collected in two groups of children, ages 5-18 years, performing verb generation in functional magnetic resonance imaging (fMRI) (n = 343) and magnetoencephalography (MEG) (n = 21). The fMRI data were conventionally analyzed and the group activation map parcellated to define node locations. Neuronal activity at each node was estimated from MEG data using a linearly constrained minimum variance beamformer, and effective connectivity within canonical frequency bands was computed using the phase slope index metric. We observed significant (p ≤ 0.05) effective connections in all subjects. The number of suprathreshold connections was significantly and linearly correlated with participants' age (r = 0.50, n = 21, p ≤ 0.05), suggesting that core language sites emerge as part of the normal developmental trajectory. Across frequencies, we observed significant effective connectivity among proximal left frontal nodes. Within the low frequency bands, information flux was rostrally directed within a focal, left frontal region, approximating Broca's area. At higher frequencies, we observed increased connectivity involving bilateral perisylvian nodes. Frequency-specific differences in patterns of information flux were resolved through fast (i.e., MEG) neuroimaging.
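The phase slope index (PSI) used for the effective connectivity analysis can be sketched in minimal form from cross-spectral estimates, following Nolte et al.'s definition. This toy version on simulated signals omits the normalization and the beamformer source estimation used in the study.

```python
import numpy as np
from scipy.signal import csd

def phase_slope_index(x, y, fs, fmin, fmax, nperseg=256):
    """Minimal phase slope index: positive values suggest x leads (drives) y.

    Sums Im(conj(C[f]) * C[f+1]) over the band, where C is the complex
    coherency, after converting scipy's conj(X)*Y cross spectrum to the
    X*conj(Y) convention so the sign matches "x leads -> positive".
    """
    f, pxy = csd(x, y, fs=fs, nperseg=nperseg)
    _, sxx = csd(x, x, fs=fs, nperseg=nperseg)
    _, syy = csd(y, y, fs=fs, nperseg=nperseg)
    sxy = np.conj(pxy)  # scipy csd returns conj(X)*Y
    coherency = sxy / np.sqrt(sxx.real * syy.real)
    band = (f >= fmin) & (f <= fmax)
    c = coherency[band]
    return np.imag(np.conj(c[:-1]) * c[1:]).sum()

# Illustrative: y is a delayed, noisy copy of x, so x should lead y
rng = np.random.default_rng(4)
fs, lag = 256, 4
x = rng.standard_normal(fs * 60)
y = np.roll(x, lag) + 0.5 * rng.standard_normal(x.size)
psi = phase_slope_index(x, y, fs, fmin=8, fmax=30)
print(f"PSI: {psi:.3f}")
```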
http://dx.doi.org/10.1089/brain.2015.0374
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4744880
February 2016

Acoustic change responses to amplitude modulation: a method to quantify cortical temporal processing and hemispheric asymmetry.

Front Neurosci 2015;9:38. Epub 2015 Feb 11.

Communication Sciences Research Center, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, USA.

Objective: Sound modulation is a critical temporal cue for the perception of speech and environmental sounds. To examine auditory cortical responses to sound modulation, we developed an acoustic change stimulus involving amplitude modulation (AM) of ongoing noise. The AM transitions in this stimulus evoked an acoustic change complex (ACC) that was examined parametrically in terms of rate and depth of modulation and hemispheric symmetry.

Methods: Auditory cortical potentials were recorded from 64 scalp electrodes during passive listening in two conditions: (1) an ACC from white noise to 4, 40, or 300 Hz AM, with AM depths of 100%, 50%, or 25%, lasting 1 s; and (2) 1 s AM noise bursts at the same modulation rates. Behavioral measures included AM detection from an attended ACC condition and AM depth thresholds (i.e., a temporal modulation transfer function, TMTF).
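The ACC stimulus construction, ongoing noise whose only change is the onset of AM, can be sketched as follows; the sampling rate and sinusoidal modulator are illustrative assumptions.

```python
import numpy as np

def acc_stimulus(fs=16000, pre_dur=1.0, am_dur=1.0, am_rate=40.0, depth=1.0, seed=0):
    """Acoustic change stimulus: ongoing white noise that begins AM at t = pre_dur.

    The carrier is continuous across the transition, so the only acoustic
    change is the onset of amplitude modulation (no level or onset cue).
    """
    n_pre, n_am = int(fs * pre_dur), int(fs * am_dur)
    carrier = np.random.default_rng(seed).standard_normal(n_pre + n_am)
    envelope = np.ones(n_pre + n_am)
    t_am = np.arange(n_am) / fs
    envelope[n_pre:] = 1.0 + depth * np.sin(2 * np.pi * am_rate * t_am)
    return envelope * carrier

stim = acc_stimulus()
print(stim.shape)
```

By contrast, the AM-burst condition described above contains both an onset and AM, which is why its response is dominated by the stimulus rise time.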

Results: The N1 response of the ACC was large for 4 and 40 Hz AM and small for 300 Hz AM. In contrast, the opposite pattern was observed for bursts of AM, which showed larger responses with increasing AM rate. Brain source modeling showed significant hemispheric asymmetry, such that 4 and 40 Hz ACC responses were dominated by the right and left hemispheres, respectively.

Conclusion: N1 responses to the ACC resembled a low pass filter shape similar to a behavioral TMTF. In the ACC paradigm, the only stimulus parameter that changes is AM and therefore the N1 response provides an index for this AM change. In contrast, an AM burst stimulus contains both AM and level changes and is likely dominated by the rise time of the stimulus. The hemispheric differences are consistent with the asymmetric sampling in time hypothesis suggesting that the different hemispheres preferentially sample acoustic time across different time windows.

Significance: The ACC provides a novel approach to studying temporal processing at the level of cortex and provides further evidence of hemispheric specialization for fast and slow stimuli.
http://dx.doi.org/10.3389/fnins.2015.00038
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4324071
February 2015

Region-specific modulations in oscillatory alpha activity serve to facilitate processing in the visual and auditory modalities.

Neuroimage 2014 Feb 2;87:356-62. Epub 2013 Nov 2.

Radboud University Nijmegen, Donders Institute for Brain, Cognition and Behaviour, Centre for Cognitive Neuroimaging.

There have been a number of studies suggesting that oscillatory alpha activity (~10 Hz) plays a pivotal role in attention by gating information flow to relevant sensory regions. The vast majority of these studies have looked at shifts of attention in the spatial domain, and only in a single modality (often visual or sensorimotor). In the current magnetoencephalography (MEG) study, we investigated the role of alpha activity in the suppression of a distracting modality stream. We used a cross-modal attention task in which visual cues indicated whether participants had to judge a visual orientation or discriminate the auditory pitch of an upcoming target. The visual and auditory targets were presented either simultaneously or alone, allowing us to behaviorally gauge the "cost" of having a distractor present in each modality. We found that preparation for visual discrimination (relative to pitch discrimination) resulted in a decrease of alpha power (9-11 Hz) in early visual cortex, with a concomitant increase in alpha/beta power (14-16 Hz) in the supramarginal gyrus, a region suggested to play a vital role in short-term storage of pitch information (Gaab et al., 2003). On a trial-by-trial basis, alpha power over visual areas was significantly correlated with increased visual discrimination times, whereas alpha power over the precuneus and right superior temporal gyrus was correlated with increased auditory discrimination times. However, these correlations were only significant when the targets were paired with distractors. Our work adds to increasing evidence that top-down (i.e., attentional) modulation of alpha activity is a mechanism by which stimulus processing can be gated within the cortex. Here, we find that this phenomenon is not restricted to the domain of spatial attention and generalizes to sensory modalities other than vision.
http://dx.doi.org/10.1016/j.neuroimage.2013.10.052
February 2014

Loudness adaptation accompanying ribbon synapse and auditory nerve disorders.

Brain 2013 May 15;136(Pt 5):1626-38. Epub 2013 Mar 15.

Department of Biomedical Engineering, University of California, Irvine, CA 92697, USA.

Abnormal auditory adaptation is a standard clinical tool for diagnosing auditory nerve disorders due to acoustic neuromas. In the present study, we investigated auditory adaptation in auditory neuropathy owing to disordered function of inner hair cell ribbon synapses (temperature-sensitive auditory neuropathy) or of auditory nerve fibres. Subjects were tested when afebrile for (i) psychophysical loudness adaptation to comfortably loud sustained tones; and (ii) physiological adaptation of auditory brainstem responses to clicks as a function of their position in brief 20-click stimulus trains (#1, 2, 3 … 20). Results were compared with those of normal-hearing listeners and subjects with other forms of hearing impairment. Subjects with ribbon synapse disorder had abnormally increased loudness adaptation to both low-frequency (250 Hz) and high-frequency (8000 Hz) tones. Subjects with auditory nerve disorders had normal loudness adaptation to low-frequency tones; all but one had abnormal adaptation to high-frequency tones. Adaptation was both more rapid and of greater magnitude in ribbon synapse than in auditory nerve disorders. Auditory brainstem response measures of adaptation in ribbon synapse disorder showed Wave V to the first click in the train to be abnormal in both latency and amplitude, and these abnormalities increased in magnitude, or Wave V was absent, for subsequent clicks. In contrast, auditory brainstem responses in four of the five subjects with neural disorders were absent to every click in the train. The fifth subject had normal latency and abnormally reduced amplitude of Wave V to the first click and abnormal or absent responses to subsequent clicks. Thus, dysfunction of both synaptic transmission and auditory neural function can be associated with abnormal loudness adaptation, and the magnitude of adaptation is significantly greater in ribbon synapse than in neural disorders.
http://dx.doi.org/10.1093/brain/awt056
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3634197
May 2013

Auditory cortical activity in normal hearing subjects to consonant vowels presented in quiet and in noise.

Clin Neurophysiol 2013 Jun 29;124(6):1204-15. Epub 2012 Dec 29.

Communication Sciences Research Center, Cincinnati Children's Hospital Medical Center, Department of Otolaryngology, Head and Neck Surgery, University of Cincinnati, USA.

Objective: To compare brain potentials to consonant vowels (CVs) as a function of both voice onset times (VOTs) and consonant position: initial (CV) versus second (VCV).

Methods: Auditory cortical potentials (N100, P200, N200, and a late slow negativity, SN) were recorded from scalp electrodes in twelve normal-hearing subjects to consonant vowels in initial position (CVs: /du/ and /tu/), in second position (VCVs: /udu/ and /utu/), and to vowels alone (V: /u/) and paired (VVs: /uu/), separated in time to simulate consonant voice onset times (VOTs).

Results: CVs evoked "acoustic onset" N100s of similar latency but larger amplitudes to /du/ than /tu/. CVs preceded by a vowel (VCVs) evoked "acoustic change" N100s with longer latencies to /utu/ than /udu/. Their absolute latency difference was less than the corresponding VOT difference. The SN following N100 to VCVs was larger to /utu/ than /udu/. Paired vowels (/uu/) separated by intervals corresponding to consonant VOTs evoked N100s with latency differences equal to the simulated VOT differences and SNs of similar amplitudes. Noise masking resulted in VCV N100 latency differences that were now equal to consonant VOT differences. Brain activations by CVs, VCVs, and VVs were maximal in right temporal lobe.

Conclusion: Auditory cortical activities to CVs are sensitive to: (1) position of the CV in the utterance; (2) VOTs of consonants; and (3) noise masking.

Significance: VOTs of stop consonants affect auditory cortical activities differently as a function of the position of the consonant in the utterance.
Source: http://dx.doi.org/10.1016/j.clinph.2012.11.014
June 2013

Towards a closed-loop cochlear implant system: application of embedded monitoring of peripheral and central neural activity.

IEEE Trans Neural Syst Rehabil Eng 2012 Jul 6;20(4):443-54. Epub 2012 Feb 6.

Department of Otolaryngology-Head and Neck Surgery, University of California, Irvine, CA 92697, USA.

Although the cochlear implant (CI) is widely considered the most successful neural prosthesis, it is essentially an open-loop system that requires extensive initial fitting and frequent tuning to maintain a high, but not necessarily optimal, level of performance. Two developments in neuroscience and neuroengineering now make it feasible to design a closed-loop CI. One development is the recording and interpretation of evoked potentials (EPs) from the peripheral to the central nervous system. The other is the embedded hardware and software of a modern CI that allows recording of EPs. We review EPs that are pertinent to behavioral functions from simple signal detection and loudness growth to speech discrimination and recognition. We also describe signal processing algorithms used for electric artifact reduction and cancellation, critical to the recording of electric EPs. We then present a conceptual design for a closed-loop CI that innovatively uses the embedded implant receiver and stimulators to record short-latency compound action potentials (~1 ms), auditory brainstem responses (1-10 ms) and mid-to-late cortical potentials (20-300 ms). We compare EPs recorded using the CI to EPs obtained using standard scalp-electrode recording techniques. Future applications and capabilities are discussed in terms of the development of a new generation of closed-loop CIs and other neural prostheses.
Source: http://dx.doi.org/10.1109/TNSRE.2012.2186982
July 2012
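The electric-artifact cancellation mentioned in this abstract can be illustrated with one classic approach (not necessarily the one the authors use): alternating stimulus polarity. The electric artifact follows the sign of the stimulating pulse, while the neural response does not, so averaging epochs recorded to opposite polarities cancels the artifact and leaves the response. A minimal sketch with simulated waveforms; all amplitudes, time constants, and the sampling rate are assumed for illustration:

```python
import math

FS = 10_000   # sampling rate (Hz); assumed
N = 200       # 20 ms recording epoch

def epoch(polarity):
    """Simulated recording: the stimulus artifact flips sign with polarity,
    the neural response does not."""
    out = []
    for i in range(N):
        t = i / FS
        artifact = polarity * 5.0 * math.exp(-t / 0.001)  # decaying electric artifact
        neural = 0.5 * math.sin(2 * math.pi * 500 * t) * math.exp(-t / 0.004)  # evoked response
        out.append(artifact + neural)
    return out

# Averaging anodic-leading and cathodic-leading epochs cancels the artifact,
# because it is the only component that reverses with polarity.
averaged = [(a + c) / 2 for a, c in zip(epoch(+1), epoch(-1))]
```

The cancellation here is exact only because the simulated artifact scales linearly with polarity; real CI artifacts are not perfectly linear, so polarity alternation is often combined with template subtraction or interpolation.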

Tinnitus suppression by low-rate electric stimulation and its electrophysiological mechanisms.

Hear Res 2011 Jul 5;277(1-2):61-6. Epub 2011 Apr 5.

Department of Otolaryngology-Head and Neck Surgery, 110 Medical Science E, University of California, Irvine, CA 92697-5320, USA.

Tinnitus is a phantom sensation of sound in the absence of external stimulation. However, external stimulation, particularly electric stimulation via a cochlear implant, has been shown to suppress tinnitus. In contrast to traditional approaches that deliver speech sounds or high-rate (>2000 Hz) stimulation, the present study identified a unique unilaterally deafened cochlear implant subject whose tinnitus was completely suppressed by a low-rate (<100 Hz) stimulus delivered at a level softer than the tinnitus to the apical part of the cochlea. Taking advantage of this novel finding, the present study compared both event-related and spontaneous cortical activities in the same subject between the tinnitus-present and tinnitus-suppressed states. Compared with the results obtained in the tinnitus-present state, the low-rate stimulus reduced cortical N100 potentials while increasing the spontaneous alpha power in the auditory cortex. These results are consistent with previous neurophysiological studies employing subjects with and without tinnitus and shed light on both tinnitus mechanisms and treatment.
Source: http://dx.doi.org/10.1016/j.heares.2011.03.010 | http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3137665
July 2011

Auditory cortical N100 in pre- and post-synaptic auditory neuropathy to frequency or intensity changes of continuous tones.

Clin Neurophysiol 2011 Mar 6;122(3):594-604. Epub 2010 Sep 6.

Evoked Potentials Laboratory, Technion - Israel Institute of Technology, Haifa, Israel.

Objectives: Auditory cortical N100s were examined in ten auditory neuropathy (AN) subjects as objective measures of impaired hearing.

Methods: Latencies and amplitudes of N100 in AN to increases of frequency (4-50%) or intensity (4-8 dB) of low (250 Hz) or high (4000 Hz) frequency tones were compared with results from normal-hearing controls. The sites of auditory nerve dysfunction were pre-synaptic (n=3), due to otoferlin mutations causing temperature-sensitive deafness; post-synaptic (n=4), accompanied by other cranial and/or peripheral neuropathies; and undefined (n=3).

Results: AN consistently had N100s only to the largest changes of frequency or intensity whereas controls consistently had N100s to all but the smallest frequency and intensity changes. N100 latency in AN was significantly delayed compared to controls, more so for 250 than for 4000 Hz and more so for changes of intensity compared to frequency. N100 amplitudes to frequency change were significantly reduced in ANs compared to controls, except for pre-synaptic AN in whom amplitudes were greater than controls. N100 latency to frequency change of 250 but not of 4000 Hz was significantly related to speech perception scores.

Conclusions: As a group, AN subjects' N100 potentials were abnormally delayed and smaller, particularly for low frequency. The extent of these abnormalities differed between pre- and post-synaptic forms of the disorder.

Significance: Abnormalities of auditory cortical N100 in AN reflect disorders of both temporal processing (low frequency) and neural adaptation (high frequency). Auditory N100 latency to the low frequency provides an objective measure of the degree of impaired speech perception in AN.
Source: http://dx.doi.org/10.1016/j.clinph.2010.08.005 | http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3010502
March 2011

A comparison of auditory evoked potentials to acoustic beats and to binaural beats.

Hear Res 2010 Apr 1;262(1-2):34-44. Epub 2010 Feb 1.

Evoked Potentials Laboratory, Behavioral Biology, Technion - Israel Institute of Technology, Haifa 32000, Israel.

The purpose of this study was to compare cortical brain responses evoked by amplitude modulated acoustic beats of 3 and 6 Hz in tones of 250 and 1000 Hz with those evoked by their binaural beats counterparts in unmodulated tones to indicate whether the cortical processes involved differ. Event-related potentials (ERPs) were recorded to 3- and 6-Hz acoustic and binaural beats in 2000 ms duration 250 and 1000 Hz tones presented with approximately 1 s intervals. Latency, amplitude and source current density estimates of ERP components to beats-evoked oscillations were determined and compared across beat types, beat frequencies and base (carrier) frequencies. All stimuli evoked tone-onset components followed by oscillations corresponding to the beat frequency, and a subsequent tone-offset complex. Beats-evoked oscillations were higher in amplitude in response to acoustic than to binaural beats, to 250 than to 1000 Hz base frequency and to 3 Hz than to 6 Hz beat frequency. Sources of the beats-evoked oscillations across all stimulus conditions located mostly to left temporal lobe areas. Differences between estimated sources of potentials to acoustic and binaural beats were not significant. The perceptions of binaural beats involve cortical activity that is not different than acoustic beats in distribution and in the effects of beat- and base frequency, indicating similar cortical processing.
Source: http://dx.doi.org/10.1016/j.heares.2010.01.013
April 2010
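The two stimulus classes compared in this study are easy to contrast in code: an acoustic beat is an amplitude modulation physically present in the waveform, while a binaural beat arises from presenting unmodulated tones that differ by the beat frequency to opposite ears. A sketch of the stimulus construction (sampling rate and scaling are assumed; the study used 2000 ms tones, 3 or 6 Hz beats, and 250 or 1000 Hz base frequencies):

```python
import math

FS = 8000      # sampling rate (Hz); assumed for illustration
DUR = 2.0      # tone duration (s), as in the study
BASE = 250.0   # base (carrier) frequency (Hz)
BEAT = 3.0     # beat frequency (Hz)

def acoustic_beat(t):
    # The beat is in the signal itself: a 3 Hz amplitude envelope on one tone.
    env = (1.0 + math.cos(2 * math.pi * BEAT * t)) / 2
    return env * math.sin(2 * math.pi * BASE * t)

def binaural_beat(t):
    # Each ear receives an unmodulated tone; the two ears differ by BEAT Hz,
    # so the 3 Hz beat exists only as a percept created in the central pathways.
    left = math.sin(2 * math.pi * BASE * t)
    right = math.sin(2 * math.pi * (BASE + BEAT) * t)
    return left, right

n = int(FS * DUR)
acoustic = [acoustic_beat(i / FS) for i in range(n)]
dichotic = [binaural_beat(i / FS) for i in range(n)]
```

Only the acoustic beat carries physical energy at the beat rate in its envelope; the binaural pair's beat percept must be generated neurally, which is what makes the ERP comparison between the two conditions informative.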

Cortical evoked potentials to an auditory illusion: binaural beats.

Clin Neurophysiol 2009 Aug 18;120(8):1514-24. Epub 2009 Jul 18.

Evoked Potentials Laboratory, Behavioral Biology, Technion - Israel Institute of Technology, Haifa, Israel.

Objective: To define brain activity corresponding to an auditory illusion of 3 and 6 Hz binaural beats in 250 Hz or 1000 Hz base frequencies, and compare it to the sound onset response.

Methods: Event-Related Potentials (ERPs) were recorded in response to unmodulated tones of 250 or 1000 Hz to one ear and 3 or 6 Hz higher to the other, creating an illusion of amplitude modulations (beats) of 3 Hz and 6 Hz in base frequencies of 250 Hz and 1000 Hz. Tones were 2000 ms in duration and presented with approximately 1 s intervals. Latency, amplitude and source current density estimates of ERP components to tone onset and subsequent beats-evoked oscillations were determined and compared across beat frequencies with both base frequencies.

Results: All stimuli evoked tone-onset P50, N100 and P200 components followed by oscillations corresponding to the beat frequency, and a subsequent tone-offset complex. Beats-evoked oscillations were higher in amplitude with the low base frequency and with the low beat frequency. Sources of the beats-evoked oscillations located mostly to left lateral and inferior temporal lobe areas in all stimulus conditions. Onset-evoked components were not different across stimulus conditions; P50 had significantly different sources than the beats-evoked oscillations; and N100 and P200 sources located to the same temporal lobe regions as beats-evoked oscillations, but were bilateral and also included frontal and parietal contributions.

Conclusions: Neural activity with slightly different volley frequencies from the left and right ears converges and interacts in the central auditory brainstem pathways to generate beats of neural activity that modulate activity in the left temporal lobe, giving rise to the illusion of binaural beats. Cortical potentials recorded to binaural beats are distinct from onset responses.

Significance: Brain activity corresponding to an auditory illusion of low frequency beats can be recorded from the scalp.
Source: http://dx.doi.org/10.1016/j.clinph.2009.06.014 | http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2741401
August 2009

N100 cortical potentials accompanying disrupted auditory nerve activity in auditory neuropathy (AN): effects of signal intensity and continuous noise.

Clin Neurophysiol 2009 Jul 16;120(7):1352-63. Epub 2009 Jun 16.

Department of Neurology, Med. Surge I, Room 150, University of California, Irvine, CA 92697-4290, USA.

Objective: Auditory temporal processes in quiet are impaired in auditory neuropathy (AN), resembling those of normal-hearing subjects tested in noise. N100 latencies were measured from AN subjects at several tone intensities in quiet and in noise for comparison with a group of normal-hearing individuals.

Methods: Subjects were tested with brief 100 ms tones (1.0 kHz, 100-40 dB SPL) in quiet and in continuous noise (90 dB SPL). N100 latency and amplitude were analyzed as a function of signal intensity and audibility.

Results: N100 latency in AN in quiet was delayed and amplitude was reduced compared to the normal group; the extent of latency delay was related to psychoacoustic measures of gap detection threshold and speech recognition scores, but not to audibility. Noise in normal hearing subjects was accompanied by N100 latency delays and amplitude reductions paralleling those found in AN tested in quiet. Additional N100 latency delays and amplitude reductions occurred in AN with noise.

Conclusions: N100 latency to tones and performance on auditory temporal tasks were related in AN subjects. Noise masking in normal hearing subjects affected N100 latency to resemble AN in quiet.

Significance: N100 latency to tones may serve as an objective measure of the efficiency of auditory temporal processes.
Source: http://dx.doi.org/10.1016/j.clinph.2009.05.013 | http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2751735
July 2009

Intensity changes in a continuous tone: auditory cortical potentials comparison with frequency changes.

Clin Neurophysiol 2009 Feb 27;120(2):374-83. Epub 2008 Dec 27.

Department of Neurology, University of California, 150 Med Surge 1, Irvine, CA 92697, USA.

Objectives: To examine auditory cortical potentials in normal-hearing subjects to intensity increments in a continuous pure tone at low, mid, and high frequency.

Methods: Electrical scalp potentials were recorded in response to randomly occurring 100 ms intensity increments of continuous 250, 1000, and 4000 Hz tones every 1.4 s. The magnitude of intensity change varied between 0, 2, 4, 6, and 8 dB above the 80 dB SPL continuous tone.

Results: Potentials included N100, P200, and a slow negative (SN) wave. N100 latencies were delayed whereas amplitudes were not affected for 250 Hz compared to 1000 and 4000 Hz. Functions relating the magnitude of the intensity change and N100 latency/amplitude did not differ in their slope among the three frequencies. No consistent relationship between intensity increment and SN was observed. Cortical dipole sources for N100 did not differ in location or orientation between the three frequencies.

Conclusions: The relationship between intensity increments and N100 latency/amplitude did not differ between tonal frequencies. A cortical tonotopic arrangement was not observed for intensity increments. Our results are in contrast to prior studies of brain activities to brief frequency changes showing cortical tonotopic organization.

Significance: These results suggest that intensity and frequency discrimination employ distinct central processes.
Source: http://dx.doi.org/10.1016/j.clinph.2008.11.009
February 2009

Auditory-evoked potentials to frequency increase and decrease of high- and low-frequency tones.

Clin Neurophysiol 2009 Feb 12;120(2):360-73. Epub 2008 Dec 12.

Evoked Potentials Laboratory, Behavioral Biology, Gutwirth Building, Technion-Israel Institute of Technology, Haifa 32000, Israel.

Objective: To define cortical brain responses to large and small frequency changes (increase and decrease) of high- and low-frequency tones.

Methods: Event-Related Potentials (ERPs) were recorded in response to a 10% or a 50% frequency increase from 250 or 4000 Hz tones that were approximately 3 s in duration and presented at 500-ms intervals. Each frequency increase was followed after 1 s by a decrease back to the base frequency. Frequency changes occurred at least 1 s after tone onset and at least 1 s before tone offset. Subjects were not attending to the stimuli. Latency, amplitude and source current density estimates of ERPs were compared across frequency changes.

Results: All frequency changes evoked components P50, N100, and P200. N100 and P200 had double peaks at bilateral and right temporal sites, respectively. These components were followed by a slow negativity (SN). The constituents of N100 were predominantly localized to temporo-parietal auditory areas. The potentials and their intracranial distributions were affected by both base frequency (larger potentials to low frequency) and direction of change (larger potentials to increase than decrease), as well as by change magnitude (larger potentials to larger change). The differences between frequency increase and decrease depended on base frequency (smaller difference to high frequency) and were localized to frontal areas.

Conclusions: Brain activity varies according to frequency change direction and magnitude as well as base frequency.

Significance: The effects of base frequency and direction of change may reflect brain networks involved in more complex processing such as speech that are differentially sensitive to frequency modulations of high (consonant discrimination) and low (vowels and prosody) frequencies.
Source: http://dx.doi.org/10.1016/j.clinph.2008.10.158
February 2009

Frequency changes in a continuous tone: auditory cortical potentials.

Clin Neurophysiol 2008 Sep 16;119(9):2111-24. Epub 2008 Jul 16.

Department of Neurology, University of California, 150 Med Surge 1, Irvine, CA 92697, USA.

Objective: We examined auditory cortical potentials in normal hearing subjects to spectral changes in continuous low and high frequency pure tones.

Methods: Cortical potentials were recorded to increments of frequency from continuous 250 or 4000Hz tones. The magnitude of change was random and varied from 0% to 50% above the base frequency.

Results: Potentials consisted of N100, P200 and a slow negative wave (SN). N100 amplitude, latency and dipole magnitude with frequency increments were significantly greater for low compared to high frequencies. Dipole amplitudes were greater in the right than left hemisphere for both base frequencies. The SN amplitude to frequency changes between 4% and 50% was not significantly related to the magnitude of spectral change.

Conclusions: Modulation of N100 amplitude and latency elicited by spectral change is more pronounced with low compared to high frequencies.

Significance: These data provide electrophysiological evidence that central processing of spectral changes in the cortex differs for low and high frequencies. Some of these differences may be related to both temporal- and spectral-based coding at the auditory periphery. Central representation of frequency change may be related to the different temporal windows of integration across frequencies.
Source: http://dx.doi.org/10.1016/j.clinph.2008.06.002 | http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2741402
September 2008

Human electrophysiological examination of buildup of the precedence effect.

Neuroreport 2006 Jul;17(11):1133-7

School of Audiology and Speech Sciences, The University of British Columbia, Vancouver, British Columbia, Canada.

Event-related potential correlates of the buildup of the precedence effect were examined. Buildup is a precedence-effect illusion in which perception changes (from hearing two clicks to hearing one click) during a click train, and it occurs faster for right-leading than for left-leading clicks. Continuous click trains that changed leading sides every 15 clicks were presented. Event-related potential N1 amplitudes became smaller over the course of a click train for right-leading clicks only, and N1 latency decreased over click trains. A mismatch negativity appeared after the lead-lag sides changed. When the perceived change differed in location only (left-to-right), the mismatch negativity peaked earlier than when it differed in both location and number of clicks (right-to-left). Results suggest that buildup relates to N1 refractoriness, event-related potential 'lead domination', and mismatch negativity differences.
Source: http://dx.doi.org/10.1097/01.wnr.0000223386.44081.ec
July 2006

Estimating audiometric thresholds using auditory steady-state responses.

J Am Acad Audiol 2005 Mar;16(3):140-56

Rotman Research Institute, Baycrest Centre for Geriatric Care, University of Toronto, Canada.

Human auditory steady-state responses (ASSRs) were recorded using stimulus rates of 78-95 Hz in normal young subjects, in elderly subjects with relatively normal hearing, and in elderly subjects with sensorineural hearing impairment. Amplitude-intensity functions calculated relative to actual sensory thresholds (sensation level or SL) showed that amplitudes increased as stimulus intensity increased. In the hearing-impaired subjects this increase was more rapid at intensities just above threshold ("electrophysiological recruitment") than at higher intensities where the increase was similar to that seen in normal subjects. The thresholds in dB SL for recognizing an ASSR and the intersubject variability of these thresholds decreased with increasing recording time and were lower in the hearing impaired compared to the normal subjects. After 9.8 minutes of recording, the average ASSR thresholds (and standard deviations) were 12.6 +/- 8.7 dB SL in the normal subjects, 12.4 +/- 11.9 dB SL in the normal elderly, and 3.6 +/- 13.5 dB SL in the hearing-impaired subjects.
Source: http://dx.doi.org/10.3766/jaaa.16.3.3
March 2005

Auditory steady-state responses and word recognition scores in normal-hearing and hearing-impaired adults.

Ear Hear 2004 Feb;25(1):68-84

Rotman Research Institute, Baycrest Centre for Geriatric Care, University of Toronto, Canada.

Objective: The number of steady-state responses evoked by the independent amplitude and frequency modulation (IAFM) of tones has been related to the ability to discriminate speech sounds as measured by word recognition scores (WRS). In the present study IAFM stimulus parameters were adjusted to resemble the acoustic properties of everyday speech to see how well responses to these speech-modeled stimuli were related to WRS.

Design: We separately measured WRS and IAFM responses at a stimulus intensity of 70 dB SPL in three groups of subjects: young normal-hearing, elderly normal-hearing, and elderly hearing-impaired. We used two series of IAFM stimuli, one with modulation frequencies near 40 Hz and the other with modulation frequencies near 80 Hz. The IAFM stimuli, consisting of four carrier frequencies each independently modulated in frequency and amplitude, could evoke up to eight separate responses in one ear. We recorded IAFM responses and WRS measurements in quiet and in the presence of speech-masking noise at 67 dB SPL or 70 dB SPL. We then evaluated the hearing-impaired subjects with and without their hearing aids to see whether an improvement in WRS would be reflected in an increased number of responses to the IAFM stimulus.

Results: The correlations between WRS and the number of IAFM responses recognized as significantly different from the background were between 0.70 and 0.81 for the 40 Hz stimuli, between 0.73 and 0.82 for the 80 Hz stimuli, and between 0.76 and 0.85 for the combined assessment of 40 and 80 Hz responses. Response amplitudes at 80 Hz were smaller in the hearing-impaired than in the normal-hearing subjects. Response amplitudes for the 40 Hz stimuli varied with the state of arousal and this effect made it impossible to compare amplitudes across the different groups. Hearing aids increased both the WRS and the number of significant IAFM responses at 40 Hz and 80 Hz. Masking decreased the WRS and the number of significant responses.

Conclusions: IAFM responses are significantly correlated with WRS and may provide an objective tool for examining the brain's ability to process the auditory information needed to perceive speech.
Source: http://dx.doi.org/10.1097/01.AUD.0000111545.71693.48
February 2004

Human auditory steady-state responses.

Int J Audiol 2003 Jun;42(4):177-219

Rotman Research Institute, Baycrest Centre for Geriatric Care, University of Toronto, Canada.

Steady-state evoked potentials can be recorded from the human scalp in response to auditory stimuli presented at rates between 1 and 200 Hz or by periodic modulations of the amplitude and/or frequency of a continuous tone. Responses can be objectively detected using frequency-based analyses. In waking subjects, the responses are particularly prominent at rates near 40 Hz. Responses evoked by more rapidly presented stimuli are less affected by changes in arousal and can be evoked by multiple simultaneous stimuli without significant loss of amplitude. Response amplitude increases as the depth of modulation or the intensity increases. The phase delay of the response increases as the intensity or the carrier frequency decreases. Auditory steady-state responses are generated throughout the auditory nervous system, with cortical regions contributing more than brainstem generators to responses at lower modulation frequencies. These responses are useful for objectively evaluating auditory thresholds, assessing suprathreshold hearing, and monitoring the state of arousal during anesthesia.
Source: http://dx.doi.org/10.3109/14992020309101316
June 2003
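The "frequency-based analyses" used for objective detection can be sketched as follows: power at the modulation frequency in the averaged EEG spectrum is compared with the power in neighbouring frequency bins, and a response is declared when the ratio exceeds a criterion (an F-test-style decision). The sampling rate, simulated response amplitude, noise level, and criterion below are all assumed for illustration:

```python
import math
import random

FS = 1000   # sampling rate (Hz); assumed
FM = 40.0   # modulation (response) frequency of interest (Hz)
N = FS      # one-second epoch, giving 1 Hz frequency resolution

def dft_power(x, f):
    # Power in the single DFT bin centred on frequency f.
    re = sum(x[i] * math.cos(2 * math.pi * f * i / FS) for i in range(len(x)))
    im = -sum(x[i] * math.sin(2 * math.pi * f * i / FS) for i in range(len(x)))
    return (re * re + im * im) / len(x)

random.seed(1)
# Simulated averaged EEG: a small 40 Hz steady-state response buried in noise.
eeg = [0.5 * math.sin(2 * math.pi * FM * i / FS) + random.gauss(0.0, 1.0)
       for i in range(N)]

signal = dft_power(eeg, FM)
# Noise estimate: mean power in six neighbouring 1 Hz bins.
noise = sum(dft_power(eeg, FM + df) for df in (-3, -2, -1, 1, 2, 3)) / 6
f_ratio = signal / noise
detected = f_ratio > 4.0   # detection criterion assumed for illustration
```

With a one-second epoch the 1 Hz-spaced bins are orthogonal, so the neighbouring bins give an independent noise estimate; in practice many more bins and a formal F-statistic are used.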

Advantages and caveats when recording steady-state responses to multiple simultaneous stimuli.

J Am Acad Audiol 2002 May;13(5):246-59

Rotman Research Institute, Baycrest Centre for Geriatric Care, University of Toronto, Ontario.

This article considers the efficiency of evoked potential audiometry using steady-state responses evoked by multiple simultaneous stimuli with carrier frequencies at 500, 1000, 2000, and 4000 Hz. The general principles of signal-to-noise enhancement through averaging provide a basis for determining the time required to estimate thresholds. The advantage of the multiple-stimulus technique over a single-stimulus approach is less than the ratio of the number of stimuli presented: when testing two ears simultaneously, the multiple-stimulus technique is typically two to three times faster. One factor that increases the time of the multiple-response recording is the relatively small size of responses at 500 and 4000 Hz. Increasing the intensities of the 500- and 4000-Hz stimuli by 10 or 20 dB can enhance their responses without significantly changing the other responses. Using multiple simultaneous stimuli causes small changes in the responses compared with when the responses are evoked by single stimuli. The clearest of these interactions is the attenuation of the responses to low-frequency stimuli in the presence of higher-frequency stimuli. Although these interactions are interesting physiologically, their small size means that they do not lessen the advantages of the multiple-stimulus approach.
May 2002
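The efficiency argument in this abstract can be made concrete. Under averaging, residual noise falls with the square root of recording time, so the time needed to bring a response to a criterion signal-to-noise ratio scales as 1/amplitude². Sequential testing sums these times across stimuli, while simultaneous testing is limited by the smallest response. With illustrative numbers (all values assumed, not taken from the article), four simultaneous stimuli yield an advantage well below four-fold:

```python
NOISE = 100.0      # residual background noise amplitude after 1 minute (nV); assumed
CRITERION = 2.0    # required response-to-noise amplitude ratio; assumed

# Illustrative response amplitudes (nV): the 500 and 4000 Hz responses are
# smallest, so they dominate recording time, as the article notes.
amps = {500: 15.0, 1000: 30.0, 2000: 30.0, 4000: 20.0}

def minutes_to_criterion(amp):
    # After m minutes the noise is NOISE / sqrt(m); we need
    # amp >= CRITERION * NOISE / sqrt(m), i.e. m >= (CRITERION * NOISE / amp) ** 2.
    return (CRITERION * NOISE / amp) ** 2

sequential = sum(minutes_to_criterion(a) for a in amps.values())    # one at a time
simultaneous = max(minutes_to_criterion(a) for a in amps.values())  # all at once
advantage = sequential / simultaneous   # well below 4, despite four stimuli
```

Raising the 500- and 4000-Hz stimulus levels, as the article suggests, enlarges the smallest responses and shrinks their time terms, moving the advantage closer to the number of stimuli.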

Estimating the audiogram using multiple auditory steady-state responses.

J Am Acad Audiol 2002 Apr;13(4):205-24

Rotman Research Institute, Baycrest Centre for Geriatric Care, University of Toronto, Ontario.

Multiple auditory steady-state responses were evoked by eight tonal stimuli (four per ear), with each stimulus simultaneously modulated in both amplitude and frequency. The modulation frequencies varied from 80 to 95 Hz and the carrier frequencies were 500, 1000, 2000, and 4000 Hz. For air conduction, the differences between physiologic thresholds for these mixed-modulation (MM) stimuli and behavioral thresholds for pure tones in 31 adult subjects with a sensorineural hearing impairment and 14 adult subjects with normal hearing were 14+/-11, 5+/-9, 5+/-9, and 9+/-10 dB (correlation coefficients .85, .94, .95, and .95) for the 500-, 1000-, 2000-, and 4000-Hz carrier frequencies, respectively. Similar results were obtained in subjects with simulated conductive hearing losses. Responses to stimuli presented through a forehead bone conductor showed physiologic-behavioral threshold differences of 22+/-8, 14+/-5, 5+/-8, and 5+/-10 dB for the 500-, 1000-, 2000-, and 4000-Hz carrier frequencies, respectively. These responses were attenuated by white noise presented concurrently through the bone conductor.
April 2002
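The mixed-modulation (MM) stimuli used here modulate each carrier simultaneously in amplitude and frequency at a single rate, so both modulations drive the steady-state response at the same EEG frequency. A sketch of one such stimulus for a single carrier (sampling rate and modulation depths are assumed; the study's carriers were 500-4000 Hz with modulation rates of 80-95 Hz):

```python
import math

FS = 32_000        # sampling rate (Hz); assumed
CARRIER = 1000.0   # one of the four audiometric carrier frequencies (Hz)
FMOD = 85.0        # modulation rate (Hz), within the 80-95 Hz range used
AM_DEPTH = 1.0     # amplitude-modulation depth (assumed)
FM_DEPTH = 0.2     # frequency-modulation depth, fraction of the carrier (assumed)

def mixed_modulation(i):
    t = i / FS
    # Amplitude envelope at FMOD...
    env = (1.0 + AM_DEPTH * math.sin(2 * math.pi * FMOD * t)) / 2
    # ...and instantaneous frequency swinging CARRIER * (1 +/- FM_DEPTH)
    # at the same rate (the cosine term integrates the frequency deviation).
    phase = 2 * math.pi * CARRIER * t \
        - (FM_DEPTH * CARRIER / FMOD) * math.cos(2 * math.pi * FMOD * t)
    return env * math.sin(phase)

stimulus = [mixed_modulation(i) for i in range(FS)]   # one second of signal
```

In the multiple-stimulus paradigm, eight such signals (four carriers per ear, each with its own modulation rate in the 80-95 Hz range) are summed and presented together, and each response is read out at its own modulation frequency.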

Multiple auditory steady-state responses.

Ann Otol Rhinol Laryngol Suppl 2002 May;189:16-21

Rotman Research Institute, Baycrest Centre for Geriatric Care, University of Toronto, Canada.

Steady-state responses are evoked potentials that maintain a stable frequency content over time. In the frequency domain, responses to rapidly presented stimuli show a spectrum with peaks at the rate of stimulation and its harmonics. Auditory steady-state responses can be reliably evoked by tones that have been amplitude-modulated at rates between 75 and 110 Hz. These responses show great promise for objective audiometry, because they can be readily recorded in infants and are unaffected by sleep. Responses to multiple tones presented simultaneously can be independently assessed if each tone is modulated at a different modulation frequency. This ability makes it possible to estimate thresholds at several audiometric frequencies in both ears at the same time. Because amplitude-modulated tones are not significantly distorted by free-field speakers or microphones, they can also be used to evaluate the performance of hearing aids. Responses to amplitude and frequency modulation may also become helpful in assessing suprathreshold auditory processes, such as those necessary for speech perception.
Source: http://dx.doi.org/10.1177/00034894021110s504
May 2002