Publications by authors named "Hugo Fastl"

8 Publications


Speech Perception With Combined Electric-Acoustic Stimulation: A Simulation and Model Comparison.

Ear Hear 2015 Nov-Dec;36(6):e314-25

Department of Audiological Acoustics, ENT Department, University Hospital Frankfurt, Frankfurt, Germany; and Arbeitsgruppe Technische Akustik, Lehrstuhl für Mensch-Maschine-Kommunikation, Technische Universität München, Munich, Germany.

Objective: The aims of this study were to simulate speech perception with combined electric-acoustic stimulation (EAS), to verify the advantage of combined stimulation in normal-hearing (NH) subjects, and to compare the results with cochlear implant (CI) and EAS user data from the authors' previous study. Furthermore, an automatic speech recognition (ASR) system was built to examine the impact of low-frequency information and is proposed as an applied model for studying different hypotheses of the combined-stimulation advantage. Signal-detection-theory (SDT) models were applied to test whether subject performance could be predicted without assuming any synergistic effects.

Design: Speech perception was tested using a closed-set matrix test (Oldenburg sentence test) whose speech material was processed to simulate CI and EAS hearing. A total of 43 NH subjects and a customized ASR system were tested. CI hearing was simulated by an aurally adequate signal spectrum analysis and representation, the part-tone-time-pattern, which was vocoded at 12 center frequencies according to the MED-EL DUET speech processor. Residual acoustic hearing was simulated by low-pass (LP)-filtered speech with cutoff frequencies of 200 and 500 Hz for NH subjects and in the range from 100 to 500 Hz for the ASR system. Speech reception thresholds were determined in amplitude-modulated noise and in pseudocontinuous noise. Finally, previously proposed SDT models were applied to predict NH subject performance with EAS simulations.
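The simulation chain described above can be sketched as a generic channel vocoder plus a low-pass branch. This is a minimal illustration under assumed parameters (band edges, filter orders, noise carriers), not the authors' part-tone-time-pattern analysis or the actual MED-EL DUET frequency map; all function names are hypothetical.

```python
import numpy as np
from scipy import signal

def lowpass_residual(x, fs, cutoff):
    """Simulate residual acoustic hearing by low-pass filtering
    (cutoffs of 200 and 500 Hz were used for NH subjects)."""
    b, a = signal.butter(4, cutoff / (fs / 2), btype="low")
    return signal.lfilter(b, a, x)

def noise_vocoder(x, fs, n_bands=12, f_lo=300.0, f_hi=7000.0):
    """Generic 12-channel noise vocoder: split x into log-spaced bands,
    extract each band's envelope, and re-impose it on band-limited noise."""
    edges = np.geomspace(f_lo, f_hi, n_bands + 1)
    rng = np.random.default_rng(0)
    out = np.zeros_like(x)
    for lo, hi in zip(edges[:-1], edges[1:]):
        b, a = signal.butter(3, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        band = signal.lfilter(b, a, x)
        env = np.abs(signal.hilbert(band))               # band envelope
        carrier = signal.lfilter(b, a, rng.standard_normal(len(x)))
        out += env * carrier
    return out

def eas_simulation(x, fs, cutoff=500.0):
    """EAS simulation = vocoded 'electric' part + low-pass 'acoustic' part."""
    return noise_vocoder(x, fs) + lowpass_residual(x, fs, cutoff)
```

Raising the cutoff from 200 to 500 Hz passes more voicing and fundamental-frequency information, which is the manipulation behind the reported threshold improvement.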

Results: NH subjects tested with EAS simulations demonstrated the combined-stimulation advantage. Increasing the LP cutoff frequency from 200 to 500 Hz significantly improved speech reception thresholds in both noise conditions. In continuous noise, CI and EAS users generally performed better than NH subjects tested with simulations. In modulated noise, performance was comparable except for EAS at the 500 Hz cutoff frequency, where NH subjects performed better. The ASR system behaved similarly to NH subjects, apart from a positive signal-to-noise ratio shift in both noise conditions, and demonstrated the synergistic effect for cutoff frequencies ≥300 Hz. One SDT model largely predicted the combined-stimulation results in continuous noise but fell short of predicting the performance observed in modulated noise.

Conclusions: The presented simulation demonstrated the combined-stimulation advantage for NH subjects, as observed in EAS users. Only NH subjects tested with EAS simulations were able to take advantage of the gap listening effect, whereas CI and EAS user performance was consistently degraded in modulated noise compared with continuous noise. Applying ASR systems to assess the impact of different signal processing strategies on speech perception with CI and EAS simulations appears feasible. In continuous noise, SDT models largely predicted the performance gain without assuming any synergistic effects, but model amendments are required to explain the gap listening effect in modulated noise.
Source: http://dx.doi.org/10.1097/AUD.0000000000000178

Speech perception with combined electric-acoustic stimulation and bilateral cochlear implants in a multisource noise field.

Ear Hear 2013 May-Jun;34(3):324-32

Department of Audiological Acoustics, ENT Department, Goethe-University of Frankfurt, Frankfurt, Germany.

Objective: The aim of the study was to measure and compare speech perception in users of electric-acoustic stimulation (EAS) supported by a hearing aid in the unimplanted ear and in bilateral cochlear implant (CI) users under different noise and sound field conditions. Gap listening was assessed by comparing performance in unmodulated and modulated Comité Consultatif International Téléphonique et Télégraphique (CCITT) noise conditions, and binaural interaction was investigated by comparing single source and multisource sound fields.

Methods: Speech perception in noise was measured using a closed-set sentence test (Oldenburg Sentence Test, OLSA) in a multisource noise field (MSNF) consisting of a four-loudspeaker array with independent noise sources, and with a single source in the frontal position (S0N0). Speech-simulating noise (Fastl noise), CCITT noise (continuous), and OLSA noise (pseudocontinuous) served as noise sources with different temporal patterns. Speech tests were performed in two groups of subjects who were using either EAS (n = 12) or bilateral CIs (n = 10). All subjects in the EAS group were fitted with a high-power hearing aid in the opposite ear (bimodal EAS). The average group score for monosyllables in quiet was 68.8% (EAS) and 80.5% (bilateral CI). A group of 22 listeners with normal hearing served as controls to compare and evaluate potential gap listening effects in implanted patients.

Results: Average speech reception thresholds in the EAS group were significantly lower than those in the bilateral CI group in all test conditions (CCITT noise: 6.1 dB, p = 0.001; Fastl noise: 5.4 dB, p < 0.01; Oldenburg (OL) noise: 1.6 dB, p < 0.05). The bilateral CI and EAS groups showed significant improvements of 4.3 dB (p = 0.004) and 5.4 dB (p = 0.002), respectively, between the S0N0 and MSNF sound field conditions, indicating an advantage from binaural interaction in both groups. The control group showed a significant gap listening effect, with a difference of 6.5 dB between modulated and unmodulated noise in S0N0 and a difference of 3.0 dB in the MSNF. The ability to "glimpse" into short temporal masker gaps was absent in both groups of implanted subjects.

Conclusions: Combined EAS in one ear, supported by a hearing aid on the contralateral ear, provided significantly better speech perception than bilateral cochlear implantation. Although scores for monosyllabic words in quiet were higher in the bilateral CI group, the EAS group performed better across the different noise and sound field conditions. Furthermore, the results indicated that binaural interaction between EAS in one ear and residual acoustic hearing in the opposite ear enhances speech perception in complex noise situations. Neither bilateral CI nor bimodal EAS users benefited from short temporal masker gaps; the better performance of the EAS group in modulated noise could therefore be explained by the improved transmission of fundamental frequency cues in the low-frequency region of acoustic hearing, which might foster the grouping of auditory objects.
Source: http://dx.doi.org/10.1097/AUD.0b013e318272f189

Algorithmic modeling of the irrelevant sound effect (ISE) by the hearing sensation fluctuation strength.

Atten Percept Psychophys 2012 Jan;74(1):194-203

Catholic University of Eichstaett-Ingolstadt, Eichstaett, Germany.

Background sounds, such as narration, music with prominent staccato passages, and office noise, impair verbal short-term memory even when these sounds are irrelevant. This irrelevant sound effect (ISE) is evoked by so-called changing-state sounds, which are characterized by a distinct temporal structure with varying successive auditory-perceptive tokens. However, because of the absence of an appropriate psychoacoustically based instrumental measure, the disturbing impact of a given speech or nonspeech sound could not be predicted until now but required behavioral testing. Our database for parametric modeling of the ISE included approximately 40 background sounds (e.g., speech, music, tone sequences, office noise, traffic noise) and corresponding performance data collected from 70 behavioral measurements of verbal short-term memory. The hearing sensation fluctuation strength, which describes the percept of fluctuations when listening to slowly modulated sounds (f_mod < 20 Hz), was chosen to model the ISE. On the basis of the fluctuation strength of the background sounds, the algorithm estimated behavioral performance data in 63 of 70 cases within the interquartile ranges. In particular, all real-world sounds were modeled adequately, whereas the algorithm overestimated the (non-)disturbance impact of synthetic steady-state sounds constituted by a repeated vowel or tone. Implications of the algorithm's strengths and prediction errors are discussed.
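The modeling idea, that slow envelope fluctuations below roughly 20 Hz drive the disturbance, can be illustrated with a crude envelope-based proxy. Note this is only a hypothetical sketch: the actual fluctuation-strength model weights modulation frequency (peaking near 4 Hz) and operates on specific-loudness patterns, neither of which this toy function does.

```python
import numpy as np
from scipy import signal

def fluctuation_proxy(x, fs, f_lo=0.5, f_hi=20.0):
    """Fraction of envelope-modulation energy below 20 Hz, a crude
    stand-in for fluctuation strength (illustrative only)."""
    env = np.abs(signal.hilbert(x))          # broadband Hilbert envelope
    env = env - np.mean(env)                 # remove DC
    f, pxx = signal.periodogram(env, fs)     # modulation spectrum
    slow = pxx[(f >= f_lo) & (f <= f_hi)].sum()
    total = pxx[f >= f_lo].sum()
    return slow / total if total > 0 else 0.0
```

A 4 Hz amplitude-modulated tone (a changing-state-like signal) scores high on this proxy, while a 50 Hz modulation, too fast to be heard as fluctuation, scores low.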
Source: http://dx.doi.org/10.3758/s13414-011-0230-7

Influence of vehicle color on loudness judgments.

J Acoust Soc Am 2008 May;123(5):2477-9

Arbeitsgruppe Technische Akustik, MMK, Technische Universität München, München, Germany.

This experiment investigates the effect of images of differently colored sports cars on the loudness of a simultaneously perceived car sound. Still images of a sports car, colored in red, light green, blue, and dark green, were displayed to subjects during a magnitude estimation task. The sound of an accelerating sports car was used as a stimulus. Statistical analysis suggests that the color of the visual stimulus may have a small influence on loudness judgments. The observed loudness differences are generally equivalent to a change in sound level of about 1 dB, with maximum individual differences of up to 3 dB.
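To put the reported effect size in perspective, a common psychoacoustic rule of thumb (valid at moderate levels, and not taken from this paper) is that loudness in sone roughly doubles per 10 dB level increase:

```python
def loudness_ratio(delta_db):
    """Approximate loudness ratio for a level change of delta_db dB,
    using the rule of thumb 'loudness doubles per 10 dB'."""
    return 2 ** (delta_db / 10)
```

Under this rule, the observed 1 dB shift corresponds to roughly a 7% change in loudness, and the maximum individual 3 dB difference to roughly 23%.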
Source: http://dx.doi.org/10.1121/1.2890747

Localization cues with bilateral cochlear implants.

J Acoust Soc Am 2008 Feb;123(2):1030-42

Institute for Human-Machine-Communication, Technische Universität München Arcisstr. 21, 80333 München, Germany.

Selected subjects with bilateral cochlear implants (CIs) showed excellent horizontal localization of wide-band sounds in previous studies. The current study investigated the localization cues used by two bilateral CI subjects with outstanding localization ability. The first experiment studied localization of sounds of different spectral and temporal composition in the free field. Localization of wide-band noise was unaffected by envelope pulsation, suggesting that envelope interaural time difference (ITD) cues contributed little. Low-pass noise was not localizable for one subject, and localization depended on the cutoff frequency for the other, which suggests that ITDs played only a limited role. High-pass noise with slow envelope changes could be localized, consistent with a contribution of interaural level differences (ILDs). In experiment 2, the processors of one subject were raised above the head to eliminate the head shadow. When they were spaced at ear distance, ITDs allowed discrimination of left from right for a pulsed wide-band noise. Good localization was observed with a head-sized cardboard inserted between the processors, showing the reliance on ILDs. Experiment 3 investigated localization in virtual space with manipulated ILDs and ITDs. Localization shifted predominantly for offsets in ILDs, even for pulsed high-pass noise. This confirms that envelope ITDs contributed little and that localization with bilateral CIs was dominated by ILDs.
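The two cue types compared in these experiments, interaural level differences (ILDs) and interaural time differences (ITDs), can be estimated from a two-channel signal with a short sketch. The cross-correlation approach and function names here are illustrative assumptions, not the study's analysis method.

```python
import numpy as np

def estimate_ild(left, right):
    """Interaural level difference in dB (positive = right ear louder)."""
    rms = lambda s: np.sqrt(np.mean(s ** 2))
    return 20 * np.log10(rms(right) / rms(left))

def estimate_itd(left, right, fs, max_lag_s=1e-3):
    """Interaural time difference via cross-correlation; a positive
    result means the right channel lags the left."""
    max_lag = int(max_lag_s * fs)
    def corr(l):
        # overlap of left[n] and right[n + l] for lag l
        a = left[max(0, -l):len(left) - max(0, l)]
        b = right[max(0, l):len(right) - max(0, -l)]
        return float(np.dot(a, b))
    best = max(range(-max_lag, max_lag + 1), key=corr)
    return best / fs
```

Attenuating one channel changes the ILD estimate, and delaying it changes the ITD estimate, mirroring the virtual-space manipulations in experiment 3.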
Source: http://dx.doi.org/10.1121/1.2821965

Localization ability with bimodal hearing aids and bilateral cochlear implants.

J Acoust Soc Am 2004 Sep;116(3):1698-709

AG Technische Akustik, MMK, Technische Universität München, Arcisstr 21, D-80333 München, Germany.

After successful cochlear implantation in one ear, some patients continue to use a hearing aid at the contralateral ear. They report improved reception of speech, especially in noise, as well as better perception of music when the hearing aid and cochlear implant are used in this bimodal combination. Some individuals in this bimodal patient group also report the impression of an improved localization ability. Similar experiences are reported by the group of bilateral cochlear implantees. In this study, a survey of 11 bimodally and 4 bilaterally equipped cochlear implant users was carried out to assess localization ability. Individuals in the bimodal group were all provided with the same type of hearing aid in the opposite ear, and subjects in the bilateral group used cochlear implants of the same manufacturer on each ear. Subjects adjusted the spot of a computer-controlled laser pointer to the perceived direction of sound incidence in the frontal horizontal plane by rotating a trackball. Two subjects of the bimodal group who had substantial residual hearing showed localization ability in the bimodal configuration, whereas with each single device only the subject with better residual hearing was able to discriminate the side of sound origin. Five other subjects with more pronounced hearing loss displayed an ability for side discrimination with bimodal aids, and four of them were already able to discriminate the side with a single device. Of the bilateral cochlear implant group, one subject showed localization accuracy close to that of normal-hearing subjects. This subject was also able to discriminate the side of sound origin using the first implanted device alone. The other three bilaterally equipped subjects showed limited localization ability using both devices. Among them, one subject demonstrated side-discrimination ability using only the first implanted device.
Source: http://dx.doi.org/10.1121/1.1776192

Zwicker tone illusion and noise reduction in the auditory system.

Phys Rev Lett 2003 May 1;90(17):178103. Epub 2003 May 1.

Physik Department, TU München, 85747 Garching bei München, Germany.

The Zwicker tone is an auditory aftereffect: for instance, after a broadband noise with a spectral gap is switched off, one perceives a lingering pure tone with a pitch in the gap. It is a unique illusion in that it cannot be explained by known properties of the auditory periphery alone. Here we introduce a neuronal model explaining the Zwicker tone. We show that a neuronal noise-reduction mechanism in conjunction with dominantly unilateral inhibition explains the effect. A pure tone's "hole burning" in noisy surroundings is given as an illustration.
Source: http://dx.doi.org/10.1103/PhysRevLett.90.178103

Microsecond temporal resolution in monaural hearing without spectral cues?

J Acoust Soc Am 2003 May;113(5):2790-800

Centre for the Neural Basis of Hearing, Department of Physiology, University of Cambridge, Downing Street, Cambridge CB2 3EG, United Kingdom.

The auditory system encodes the timing of peaks in basilar-membrane motion with exquisite precision, and perceptual models of binaural processing indicate that the limit of temporal resolution in humans is as little as 10-20 microseconds. In these binaural studies, pairs of continuous sounds with microsecond differences are presented simultaneously, one sound to each ear. In this paper, a monaural masking experiment is described in which pairs of continuous sounds with microsecond time differences were combined and presented to both ears. The stimuli were matched in terms of the excitation patterns they produced, and a perceptual model of monaural processing indicates that the limit of temporal resolution in this case is similar to that in the binaural system.
Source: http://dx.doi.org/10.1121/1.1547438