Publications by authors named "Gavin M Bidelman"

106 Publications

Dichotic listening deficits in amblyaudia are characterized by aberrant neural oscillations in auditory cortex.

Clin Neurophysiol 2021 Jun 4;132(9):2152-2162. Epub 2021 Jun 4.

School of Communication Sciences & Disorders, University of Memphis, Memphis, TN, USA; Institute for Intelligent Systems, University of Memphis, Memphis, TN, USA; University of Tennessee Health Sciences Center, Department of Anatomy and Neurobiology, Memphis, TN, USA.

Objective: Children diagnosed with auditory processing disorder (APD) show deficits in processing complex sounds that are associated with difficulties in higher-order language, learning, cognitive, and communicative functions. Amblyaudia (AMB) is a subcategory of APD characterized by abnormally large ear asymmetries in dichotic listening tasks.

Methods: Here, we examined frequency-specific neural oscillations and functional connectivity via high-density electroencephalography (EEG) in children with and without AMB during passive listening to nonspeech stimuli.

Results: Time-frequency maps of these "brain rhythms" revealed stronger phase-locked beta-gamma (~35 Hz) oscillations in AMB participants within bilateral auditory cortex for sounds presented to the right ear, suggesting hypersynchronization and an imbalance of auditory neural activity. Brain-behavior correlations revealed that neural asymmetries in cortical responses predicted the larger-than-normal right-ear advantage seen in participants with AMB. Additionally, we found weaker functional connectivity in the AMB group from right to left auditory cortex, despite their stronger neural responses overall.

Conclusion: Our results reveal abnormally large auditory sensory encoding and an imbalance in communication between cerebral hemispheres (ipsi- to contralateral signaling) in AMB.

Significance: These neurophysiological changes might lead to the functionally poorer behavioral capacity to integrate information between the two ears in children with AMB.
Source
http://dx.doi.org/10.1016/j.clinph.2021.04.022
June 2021

Attention reinforces human corticofugal system to aid speech perception in noise.

Neuroimage 2021 07 29;235:118014. Epub 2021 Mar 29.

Institute for Intelligent Systems, University of Memphis, Memphis, TN, USA; School of Communication Sciences and Disorders, University of Memphis, 4055 North Park Loop, Memphis, TN 38152, USA; Department of Anatomy and Neurobiology, University of Tennessee Health Sciences Center, Memphis, TN, USA. Electronic address:

Perceiving speech-in-noise (SIN) demands precise neural coding between brainstem and cortical levels of the hearing system. Attentional processes can then select and prioritize task-relevant cues over competing background noise for successful speech perception. In animal models, brainstem-cortical interplay is achieved via descending corticofugal projections from cortex that shape midbrain responses to behaviorally-relevant sounds. Attentional engagement of corticofugal feedback may assist SIN understanding but has never been confirmed and remains highly controversial in humans. To resolve these issues, we recorded source-level, anatomically constrained brainstem frequency-following responses (FFRs) and cortical event-related potentials (ERPs) to speech via high-density EEG while listeners performed rapid SIN identification tasks. We varied attention with active vs. passive listening scenarios whereas task difficulty was manipulated with additive noise interference. Active listening (but not arousal-control tasks) exaggerated both ERPs and FFRs, confirming attentional gain extends to lower subcortical levels of speech processing. We used functional connectivity to measure the directed strength of coupling between levels and characterize "bottom-up" vs. "top-down" (corticofugal) signaling within the auditory brainstem-cortical pathway. While attention strengthened connectivity bidirectionally, corticofugal transmission disengaged under passive (but not active) SIN listening. Our findings (i) show attention enhances the brain's transcription of speech even prior to cortex and (ii) establish a direct role of the human corticofugal feedback system as an aid to cocktail party speech perception.
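
The directed ("bottom-up" vs. "top-down") coupling analysis described above can be illustrated with a small sketch. The Python example below uses Granger causality purely as a generic stand-in for a directed functional-connectivity measure; the study's actual metric, source waveforms, sampling rate, and lag settings are not reproduced here, so every signal and parameter in the sketch is an assumption for illustration only.

```python
# Hedged sketch: directed coupling between a brainstem-like and a cortex-like
# source waveform, using Granger causality as a stand-in metric. All signals
# and parameter values are simulated, not the study's data or method.
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
fs = 500                                      # sampling rate (Hz), assumed
n = 2000                                      # samples per source waveform

brainstem = rng.standard_normal(n)            # stand-in FFR source waveform
cortex = np.zeros(n)                          # stand-in ERP source waveform
cortex[5:] = 0.6 * brainstem[:-5]             # cortex lags brainstem by 10 ms
cortex += 0.5 * rng.standard_normal(n)

# "Bottom-up": does brainstem activity predict cortical activity?
bu = grangercausalitytests(np.column_stack([cortex, brainstem]), maxlag=8)
# "Top-down" (corticofugal): does cortical activity predict brainstem activity?
td = grangercausalitytests(np.column_stack([brainstem, cortex]), maxlag=8)

p_bottom_up = bu[8][0]["ssr_ftest"][1]
p_top_down = td[8][0]["ssr_ftest"][1]
print(f"bottom-up p = {p_bottom_up:.3g}, top-down p = {p_top_down:.3g}")
```
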
Source
http://dx.doi.org/10.1016/j.neuroimage.2021.118014
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8274701
July 2021

Speech categorization is better described by induced rather than evoked neural activity.

J Acoust Soc Am 2021 03;149(3):1644

School of Communication Sciences and Disorders, University of Memphis, 4055 North Park Loop, Memphis, Tennessee 38152, USA.

Categorical perception (CP) describes how the human brain categorizes speech despite inherent acoustic variability. We examined neural correlates of CP in both evoked and induced electroencephalogram (EEG) activity to evaluate which mode best describes the process of speech categorization. Listeners labeled sounds from a vowel gradient while we recorded their EEGs. From source-reconstructed EEG, we used band-specific evoked and induced neural activity to build parameter-optimized support vector machine models to assess how well listeners' speech categorization could be decoded via whole-brain and hemisphere-specific responses. Whole-brain evoked β-band activity decoded prototypical from ambiguous speech sounds with ∼70% accuracy, whereas induced γ-band oscillations decoded speech categories more accurately (∼95%). Induced high-frequency (γ-band) oscillations dominated CP decoding in the left hemisphere, whereas lower frequencies (θ-band) dominated decoding in the right hemisphere. Moreover, feature selection identified 14 brain regions carrying induced activity and 22 regions of evoked activity that were most salient in describing category-level speech representations. Among the areas and neural regimes explored, induced γ-band modulations were most strongly associated with listeners' behavioral CP. The data suggest that the category-level organization of speech is dominated by relatively high-frequency induced brain rhythms.
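
As a rough illustration of the decoding approach described above, the following Python sketch band-limits simulated single-trial EEG, removes the trial-averaged (evoked) component to isolate induced activity, and trains a parameter-optimized SVM to separate prototypical from ambiguous tokens. The filter band, feature choice, and simulated data are assumptions, not the study's actual pipeline.

```python
# Hedged sketch: induced gamma-band power features + parameter-optimized SVM.
# Everything here (data, band limits, hyperparameter grid) is illustrative.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(1)
fs = 500
n_trials, n_chans, n_samps = 200, 64, 400          # simulated single trials
X_raw = rng.standard_normal((n_trials, n_chans, n_samps))
y = rng.integers(0, 2, n_trials)                   # 0 = ambiguous, 1 = prototypical

# remove the trial-averaged (evoked, phase-locked) part to isolate induced activity
X_ind = X_raw - X_raw.mean(axis=0, keepdims=True)

b, a = butter(4, [30 / (fs / 2), 60 / (fs / 2)], btype="band")   # gamma band (assumed)
gamma = filtfilt(b, a, X_ind, axis=-1)
induced_power = np.abs(hilbert(gamma, axis=-1)) ** 2
features = induced_power.mean(axis=-1)             # mean gamma power per channel

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
grid = GridSearchCV(clf, {"svc__C": [0.1, 1, 10], "svc__gamma": ["scale", 0.01]}, cv=5)
acc = cross_val_score(grid, features, y, cv=5).mean()
print(f"decoding accuracy: {acc:.2f}")
```
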
Source
http://dx.doi.org/10.1121/10.0003572
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8267855
March 2021

Data-driven machine learning models for decoding speech categorization from evoked brain responses.

J Neural Eng 2021 Mar 9. Epub 2021 Mar 9.

School of Communication Sciences & Disorders, University of Memphis, 4055 North Park Loop, Memphis, Tennessee, 38152, UNITED STATES.

Categorical perception (CP) of audio is critical to understanding how the human brain perceives speech sounds despite widespread variability in acoustic properties. Here, we investigated the spatiotemporal characteristics of auditory neural activity that reflects CP for speech (i.e., differentiates phonetic prototypes from ambiguous speech sounds). We recorded high-density EEGs as listeners rapidly classified vowel sounds along an acoustic-phonetic continuum. We used support vector machine (SVM) classifiers and stability selection to determine when and where in the brain CP was best decoded via source-level analysis of the event-related potentials (ERPs). We found that early (120 ms) whole-brain data decoded speech categories (i.e., prototypical vs. ambiguous speech tokens) with 95.16% accuracy [area under the curve (AUC) 95.14%; F1-score 95.00%]. Separate analyses on left hemisphere (LH) and right hemisphere (RH) responses showed that LH decoding was more robust and earlier than RH (89.03% vs. 86.45% accuracy; 140 ms vs. 200 ms). Stability (feature) selection identified 13 regions of interest (ROIs) out of 68 brain regions (including auditory cortex, supramarginal gyrus, and Broca's area) that showed categorical representation during stimulus encoding (0-260 ms). In contrast, 15 ROIs (including fronto-parietal regions, Broca's area, and motor cortex) were necessary to describe later decision stages (>300 ms) of categorization, but these areas were highly associated with the strength of listeners' categorical hearing (i.e., slope of behavioral identification functions). Our data-driven multivariate models demonstrate that abstract categories emerge surprisingly early (~120 ms) in the time course of speech processing and are dominated by engagement of a relatively compact fronto-temporal-parietal brain network.
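
The stability-selection step mentioned above can be sketched in a few lines: repeatedly subsample trials, fit a sparse (L1-penalized) classifier, and retain features (ROIs) chosen in a high fraction of resamples. The estimator, regularization strength, selection threshold, and data below are illustrative assumptions rather than the study's actual settings.

```python
# Hedged sketch of stability selection over ROI features; data are simulated.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n_trials, n_rois = 300, 68                     # e.g., 68 Desikan-Killiany ROIs
X = rng.standard_normal((n_trials, n_rois))
y = rng.integers(0, 2, n_trials)               # prototypical vs. ambiguous (simulated)
X[y == 1, :5] += 0.8                           # make the first 5 ROIs informative

n_resamples, threshold = 100, 0.7
select_counts = np.zeros(n_rois)
for _ in range(n_resamples):
    idx = rng.choice(n_trials, size=n_trials // 2, replace=False)
    clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.2)
    clf.fit(X[idx], y[idx])
    select_counts += (np.abs(clf.coef_[0]) > 1e-8)   # count nonzero (selected) features

stable_rois = np.where(select_counts / n_resamples >= threshold)[0]
print("stable ROI indices:", stable_rois)
```
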
Source
http://dx.doi.org/10.1088/1741-2552/abecf0
March 2021

Auditory cortex is susceptible to lexical influence as revealed by informational vs. energetic masking of speech categorization.

Brain Res 2021 May 23;1759:147385. Epub 2021 Feb 23.

Institute for Intelligent Systems, University of Memphis, Memphis, TN, USA; School of Communication Sciences & Disorders, University of Memphis, Memphis, TN, USA; University of Tennessee Health Sciences Center, Department of Anatomy and Neurobiology, Memphis, TN, USA. Electronic address:

Speech perception requires the grouping of acoustic information into meaningful phonetic units via the process of categorical perception (CP). Environmental masking influences speech perception and CP. However, it remains unclear at which stage of processing (encoding, decision, or both) masking affects listeners' categorization of speech signals. The purpose of this study was to determine whether linguistic interference influences the early acoustic-phonetic conversion process inherent to CP. To this end, we measured source-level event-related brain potentials (ERPs) from auditory cortex (AC) and inferior frontal gyrus (IFG) as listeners rapidly categorized speech sounds along a /da/ to /ga/ continuum presented in three listening conditions: quiet, and in the presence of forward (informational masker) and time-reversed (energetic masker) 2-talker babble noise. Maskers were matched in overall SNR and spectral content and thus varied only in their degree of linguistic interference (i.e., informational masking). We hypothesized a differential effect of informational versus energetic masking on behavioral and neural categorization responses, where we predicted increased activation of frontal regions when disambiguating speech from noise, especially during lexical-informational maskers. We found (1) informational masking weakens behavioral speech phoneme identification above and beyond energetic masking; (2) low-level AC activity not only codes speech categories but is susceptible to higher-order lexical interference; (3) identifying speech amidst noise recruits a cross-hemispheric circuit (AC → IFG) whose engagement varies according to task difficulty. These findings provide corroborating evidence for top-down influences on the early acoustic-phonetic analysis of speech through a coordinated interplay between frontotemporal brain areas.
Source
http://dx.doi.org/10.1016/j.brainres.2021.147385
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8049334
May 2021

Subcortical rather than cortical sources of the frequency-following response (FFR) relate to speech-in-noise perception in normal-hearing listeners.

Neurosci Lett 2021 02 23;746:135664. Epub 2021 Jan 23.

School of Communication Sciences & Disorders, University of Memphis, Memphis, TN, USA.

Scalp-recorded frequency-following responses (FFRs) reflect a mixture of phase-locked activity across the auditory pathway. FFRs have been widely used as a neural barometer of complex listening skills, especially speech-in-noise (SIN) perception. Applying individually optimized source reconstruction to speech-FFRs recorded via EEG, we assessed the relative contributions of subcortical [auditory nerve (AN), brainstem/midbrain (BS)] and cortical [bilateral primary auditory cortex, PAC] source generators with the aim of identifying which source(s) drive the brain-behavior relation between FFRs and SIN listening skills. We found FFR strength declined precipitously from AN to PAC, consistent with diminishing phase-locking along the ascending auditory neuroaxis. FFRs to the speech fundamental (F0) were robust to noise across sources, but were largest in subcortical sources (BS > AN > PAC). PAC FFRs were only weakly observed above the noise floor and only at the low pitch of speech (F0 ≈ 100 Hz). Brain-behavior regressions revealed (i) AN and BS FFRs were sufficient to describe listeners' QuickSIN scores and (ii) contrary to neuromagnetic (MEG) FFRs, neither left nor right PAC FFR related to SIN performance. Our findings suggest subcortical sources not only dominate the electrical FFR but also the link between speech-FFRs and SIN processing in normal-hearing adults as observed in previous EEG studies.
Source
http://dx.doi.org/10.1016/j.neulet.2021.135664
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7897268
February 2021

Lexical Influences on Categorical Speech Perception Are Driven by a Temporoparietal Circuit.

J Cogn Neurosci 2021 Jan 19:1-13. Epub 2021 Jan 19.

University of Memphis, TN.

Categorical judgments of otherwise identical phonemes are biased toward hearing words (i.e., the "Ganong effect"), suggesting lexical context influences perception of even basic speech primitives. Lexical biasing could manifest via late-stage postperceptual mechanisms related to decision or, alternatively, top-down linguistic inference that acts on early perceptual coding. Here, we exploited the temporal sensitivity of EEG to resolve the spatiotemporal dynamics of these context-related influences on speech categorization. Listeners rapidly classified sounds from a /gɪ/-/kɪ/ gradient presented in opposing word-nonword contexts designed to bias perception toward lexical items. Phonetic perception shifted toward the direction of words, establishing a robust Ganong effect behaviorally. ERPs revealed a neural analog of lexical biasing emerging within ~200 msec. Source analyses uncovered a distributed neural network supporting the Ganong effect, including middle temporal gyrus, inferior parietal lobe, and middle frontal cortex. Yet, among Ganong-sensitive regions, only left middle temporal gyrus and inferior parietal lobe predicted behavioral susceptibility to lexical influence. Our findings confirm lexical status rapidly constrains sublexical categorical representations for speech within several hundred milliseconds but likely does so outside the purview of canonical auditory-sensory brain areas.
Source
http://dx.doi.org/10.1162/jocn_a_01678
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8286983
January 2021

Auditory cortex supports verbal working memory capacity.

Neuroreport 2021 01;32(2):163-168

Department of Physiology, McGill University.

Working memory (WM) is a fundamental construct of human cognition. The neural basis of auditory WM is thought to reflect a distributed brain network consisting of canonical memory and central executive brain regions including frontal lobe and hippocampus. Yet, the role of auditory (sensory) cortex in supporting active memory representations remains controversial. Here, we recorded neuroelectric activity via electroencephalogram as listeners actively performed an auditory version of the Sternberg memory task. Memory load was taxed by parametrically manipulating the number of auditory tokens (letter sounds) held in memory. Source analysis of scalp potentials showed that sustained neural activity maintained in auditory cortex (AC) prior to memory retrieval closely scaled with behavioral performance. Brain-behavior correlations revealed that lateralized modulations in left (but not right) AC were predictive of individual differences in auditory WM capacity. Our findings confirm a prominent role of AC, traditionally viewed as a sensory-perceptual processor, in actively maintaining memory traces and dictating individual differences in behavioral WM limits.
Source
http://dx.doi.org/10.1097/WNR.0000000000001570
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7790888
January 2021

Musicians Show Improved Speech Segregation in Competitive, Multi-Talker Cocktail Party Scenarios.

Front Psychol 2020 18;11:1927. Epub 2020 Aug 18.

School of Communication Sciences and Disorders, University of Memphis, Memphis, TN, United States.

Studies suggest that long-term music experience enhances the brain's ability to segregate speech from noise. Evidence for musicians' "speech-in-noise (SIN) benefit" is based largely on simple figure-ground tasks rather than competitive, multi-talker scenarios that offer realistic spatial cues for segregation and engage binaural processing. We aimed to investigate whether musicians show perceptual advantages in cocktail party speech segregation in a competitive, multi-talker environment. We used the coordinate response measure (CRM) paradigm to measure speech recognition and localization performance in musicians vs. non-musicians in a simulated 3D cocktail party environment conducted in an anechoic chamber. Speech was delivered through a 16-channel speaker array distributed around the horizontal soundfield surrounding the listener. Participants recalled the color, number, and perceived location of target callsign sentences. We manipulated task difficulty by varying the number of additional maskers presented at other spatial locations in the horizontal soundfield (0-1-2-3-4-6-8 multi-talkers). Musicians obtained faster and better speech recognition with up to eight simultaneous talkers and showed less noise-related decline in performance with increasing interferers than their non-musician peers. Correlations revealed that listeners' years of musical training were associated with both CRM recognition and working memory. However, better working memory correlated with better speech streaming. Basic (QuickSIN) but not more complex (speech streaming) SIN processing was still predicted by music training after controlling for working memory. Our findings confirm a relationship between musicianship and naturalistic cocktail party speech streaming but also suggest that cognitive factors at least partially drive musicians' SIN advantage.
Source
http://dx.doi.org/10.3389/fpsyg.2020.01927
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7461890
August 2020

Decoding Hearing-Related Changes in Older Adults' Spatiotemporal Neural Processing of Speech Using Machine Learning.

Front Neurosci 2020 16;14:748. Epub 2020 Jul 16.

Institute for Intelligent Systems, University of Memphis, Memphis, TN, United States.

Speech perception in noisy environments depends on complex interactions between sensory and cognitive systems. In older adults, such interactions may be affected, especially in those individuals who have more severe age-related hearing loss. Using a data-driven approach, we assessed the temporal (when in time) and spatial (where in the brain) characteristics of cortical speech-evoked responses that distinguish older adults with or without mild hearing loss. We performed source analyses to estimate cortical surface signals from the EEG recordings during a phoneme discrimination task conducted under clear and noise-degraded conditions. We computed source-level ERPs (i.e., mean activation within each ROI) from each of the 68 ROIs of the Desikan-Killiany (DK) atlas, averaged over 100 randomly chosen trials (without replacement) to form feature vectors. We adopted a multivariate feature selection method called stability selection and control to choose features that are consistent over a range of model parameters. We used parameter-optimized support vector machine (SVM) classifiers to investigate the time points and brain regions that segregate groups and speech clarity. For clear speech perception, whole-brain data revealed a classification accuracy of 81.50% [area under the curve (AUC) 80.73%; F1-score 82.00%], distinguishing groups within ∼60 ms after speech onset (i.e., as early as the P1 wave). We observed lower accuracy of 78.12% [AUC 77.64%; F1-score 78.00%] and delayed classification performance when speech was embedded in noise, with group segregation at 80 ms. Separate analysis using left (LH) and right hemisphere (RH) regions showed that LH speech activity was better at distinguishing hearing groups than activity measured in the RH. Moreover, stability selection analysis identified 12 brain regions (among 1428 total spatiotemporal features from 68 regions) where source activity segregated groups with >80% accuracy (clear speech); whereas 16 regions were critical for noise-degraded speech to achieve a comparable level of group segregation (78.7% accuracy). Our results identify critical time-courses and brain regions that distinguish mild hearing loss from normal hearing in older adults and confirm a larger number of active areas, particularly in RH, when processing noise-degraded speech information.
Source
http://dx.doi.org/10.3389/fnins.2020.00748
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7378401
July 2020

Seizure localization using EEG analytical signals.

Clin Neurophysiol 2020 09 25;131(9):2131-2139. Epub 2020 Jun 25.

Department of Neurology, University of Tennessee Health Science Center, Memphis, TN, USA.

Objective: Localization of epileptic seizures, usually characterized by abnormal hypersynchronous wave patterns from the cortex, remains elusive. We present a novel, robust method for automatic localization of seizures on the scalp from clinical electroencephalogram (EEG) data.

Methods: Seizure-patient EEG data were decomposed via the Hilbert transform and processed as follows: sorting the analytic amplitude (AA) at each time instant, locating the maximum amplitude within the vector of channels, and cross-correlating amplitude values across time with the channel vector. The channel with the highest AA value over time was thereby located.
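
A minimal sketch of the analytic-amplitude idea in the Methods is shown below: compute the Hilbert envelope of each channel and find the channel that most often carries the maximum analytic amplitude. The cross-correlation step and clinical preprocessing are omitted, and the simulated recording, channel count, and "seizure" burst are assumptions for illustration.

```python
# Hedged sketch: analytic amplitude (Hilbert envelope) per channel, then pick the
# channel that dominates the maximum amplitude across the record. Simulated data.
import numpy as np
from scipy.signal import hilbert

rng = np.random.default_rng(3)
fs, n_chans, n_samps = 256, 19, 256 * 20              # 20 s of simulated scalp EEG
eeg = rng.standard_normal((n_chans, n_samps))
t = np.arange(n_samps) / fs
eeg[7] += 3.0 * np.sin(2 * np.pi * 4 * t) * (t > 10)  # "seizure-like" burst on channel 7

analytic_amp = np.abs(hilbert(eeg, axis=-1))           # analytic amplitude (AA) per channel
peak_chan_per_sample = analytic_amp.argmax(axis=0)     # channel with max AA at each time point
# channel most often carrying the maximum AA across the record
candidate = np.bincount(peak_chan_per_sample, minlength=n_chans).argmax()
print(f"candidate seizure channel: {candidate}")
```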

Results: Our approach provides an automated way to isolate the epi-genesis of seizure events with 93.3% precision and 100% sensitivity. The method differentiates seizure-related neural activity from other common EEG noise artifacts (e.g., blinks, myogenic noise).

Conclusions: We evaluated performance characteristics of our source location methodology utilizing both phase and energy of EEG signals from patients who exhibited seizure events. Feasibility of the new algorithm is demonstrated and confirmed.

Significance: The proposed method contributes to high-performance scalp localization for seizure events that is more straightforward and less computationally intensive than other methods (e.g., inverse source modeling). Ultimately, it may aid clinicians in providing improved patient diagnosis.
Source
http://dx.doi.org/10.1016/j.clinph.2020.05.034
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7437564
September 2020

Brainstem correlates of cochlear nonlinearity measured via the scalp-recorded frequency-following response.

Neuroreport 2020 07;31(10):702-707

Department of Communicative Disorders and Sciences.

The frequency-following response (FFR) is an EEG-based potential used to characterize the brainstem encoding of complex sounds. Adopting techniques from auditory signal processing, we assessed the degree to which FFRs encode important properties of cochlear processing (e.g. nonlinearities) and their relation to speech-in-noise (SIN) listening skills. Based on the premise that normal cochlear transduction is characterized by rectification and compression, we reasoned these nonlinearities would create measurable harmonic distortion in FFRs in response to even pure tone input. We recorded FFRs to nonspeech (pure- and amplitude-modulated-tones) stimuli in normal-hearing individuals. We then compared conventional indices of cochlear nonlinearity, via distortion product otoacoustic emission (DPOAE) I/O functions, to total harmonic distortion measured from neural FFRs (FFRTHD). Analysis of DPOAE growth and the FFRTHD revealed listeners with higher cochlear compression thresholds had lower neural FFRTHD distortion (i.e. more linear FFRs), thus linking cochlear and brainstem correlates of auditory nonlinearity. Importantly, FFRTHD was also negatively correlated with SIN perception whereby listeners with higher FFRTHD (i.e. more nonlinear responses) showed better performance on the QuickSIN. We infer individual differences in SIN perception and FFR nonlinearity even in normal-hearing individuals may reflect subtle differences in auditory health and suprathreshold hearing skills not captured by normal audiometric evaluation. Future studies in hearing-impaired individuals and animal models are necessary to confirm the diagnostic utility of FFRTHD and its relation to cochlear hearing loss or peripheral neurodegeneration in humans.
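
A total harmonic distortion measure like the FFRTHD described above can be sketched as the ratio of harmonic to fundamental energy in the response spectrum. The simulated response, number of harmonics, and analysis bandwidth below are assumptions; the study's exact FFRTHD definition may differ.

```python
# Hedged sketch: THD from a simulated "FFR" to a pure tone, defined as
# sqrt(sum of harmonic amplitudes^2) / fundamental amplitude.
import numpy as np

fs, dur, f0 = 10000, 0.2, 500                  # sampling rate, duration, tone frequency (assumed)
t = np.arange(0, dur, 1 / fs)
# simulated response with mild 2nd/3rd-harmonic distortion
resp = np.sin(2 * np.pi * f0 * t) + 0.10 * np.sin(2 * np.pi * 2 * f0 * t) \
       + 0.05 * np.sin(2 * np.pi * 3 * f0 * t)

spec = np.abs(np.fft.rfft(resp)) / len(resp)
freqs = np.fft.rfftfreq(len(resp), 1 / fs)

def amp_at(f, bw=5.0):
    """Peak spectral amplitude within +/- bw Hz of frequency f."""
    band = (freqs >= f - bw) & (freqs <= f + bw)
    return spec[band].max()

fund = amp_at(f0)
harmonics = [amp_at(k * f0) for k in range(2, 6)]      # 2nd-5th harmonics (assumed range)
thd = np.sqrt(np.sum(np.square(harmonics))) / fund
print(f"THD = {100 * thd:.1f}%")
```
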
Source
http://dx.doi.org/10.1097/WNR.0000000000001452
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7275900
July 2020

Effects of Noise on the Behavioral and Neural Categorization of Speech.

Front Neurosci 2020 27;14:153. Epub 2020 Feb 27.

School of Communication Sciences and Disorders, University of Memphis, Memphis, TN, United States.

We investigated whether the categorical perception (CP) of speech might also provide a mechanism that aids its perception in noise. We varied signal-to-noise ratio (SNR) [clear, 0 dB, -5 dB] while listeners classified an acoustic-phonetic continuum (/u/ to /a/). Noise-related changes in behavioral categorization were only observed at the lowest SNR. Event-related brain potentials (ERPs) differentiated category vs. category-ambiguous speech by the P2 wave (~180-320 ms). Paralleling behavior, neural responses to speech with clear phonetic status (i.e., continuum endpoints) were robust to noise down to -5 dB SNR, whereas responses to ambiguous tokens declined with decreasing SNR. Results demonstrate that phonetic speech representations are more resistant to degradation than corresponding acoustic representations. Findings suggest the mere process of binning speech sounds into categories provides a robust mechanism to aid figure-ground speech perception by fortifying abstract categories from the acoustic signal and making the speech code more resistant to external interferences.
Source
http://dx.doi.org/10.3389/fnins.2020.00153
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7057933
February 2020

Corrigendum: Autonomic Nervous System Correlates of Speech Categorization Revealed Through Pupillometry.

Front Neurosci 2020;14:132. Epub 2020 Feb 19.

Institute for Intelligent Systems, The University of Memphis, Memphis, TN, United States.

[This corrects the article DOI: 10.3389/fnins.2019.01418.].
Source
http://dx.doi.org/10.3389/fnins.2020.00132
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7042635
February 2020

Autonomic Nervous System Correlates of Speech Categorization Revealed Through Pupillometry.

Front Neurosci 2019 10;13:1418. Epub 2020 Jan 10.

Institute for Intelligent Systems, The University of Memphis, Memphis, TN, United States.

Human perception requires the many-to-one mapping between continuous sensory elements and discrete categorical representations. This grouping operation underlies the phenomenon of categorical perception (CP)-the experience of perceiving discrete categories rather than gradual variations in signal input. Speech perception requires CP because acoustic cues do not share constant relations with perceptual-phonetic representations. Beyond facilitating perception of unmasked speech, we reasoned CP might also aid the extraction of target speech percepts from interfering sound sources (i.e., noise) by generating additional perceptual constancy and reducing listening effort. Specifically, we investigated how noise interference impacts cognitive load and perceptual identification of unambiguous (i.e., categorical) vs. ambiguous stimuli. Listeners classified a speech vowel continuum (/u/-/a/) at various signal-to-noise ratios (SNRs [unmasked, 0 and -5 dB]). Continuous recordings of pupil dilation measured processing effort, with larger, later dilations reflecting increased listening demand. Critical comparisons were between time-locked changes in eye data in response to unambiguous (i.e., continuum endpoints) tokens vs. ambiguous tokens (i.e., continuum midpoint). Unmasked speech elicited faster responses and sharper psychometric functions, which steadily declined in noise. Noise increased pupil dilation across stimulus conditions, but not straightforwardly. Noise-masked speech modulated peak pupil size (i.e., [0 and -5 dB] > unmasked). In contrast, peak dilation latency varied with both token and SNR. Interestingly, categorical tokens elicited earlier pupil dilation relative to ambiguous tokens. Our pupillary data suggest CP reconstructs auditory percepts under challenging listening conditions through interactions between stimulus salience and listeners' internalized effort and/or arousal.
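
A minimal sketch of the two pupillometric measures discussed above, peak dilation and peak dilation latency, extracted from a baseline-corrected, stimulus-locked pupil trace follows. The sampling rate, analysis window, and the trace itself are invented for illustration.

```python
# Hedged sketch: peak pupil dilation and its latency from a simulated trace.
import numpy as np

fs = 60                                         # eye-tracker sampling rate (Hz), assumed
t = np.arange(-0.5, 3.0, 1 / fs)                # time re: stimulus onset (s)
pupil = 0.3 * np.exp(-((t - 1.2) ** 2) / 0.4) \
        + 0.01 * np.random.default_rng(4).standard_normal(t.size)

baseline = pupil[t < 0].mean()                  # pre-stimulus baseline
dilation = pupil - baseline

win = (t >= 0.2) & (t <= 2.5)                   # post-onset analysis window (assumed)
peak_amp = dilation[win].max()                  # peak pupil dilation (a.u.)
peak_lat = t[win][dilation[win].argmax()]       # peak dilation latency (s)
print(f"peak dilation = {peak_amp:.3f} a.u. at {peak_lat * 1000:.0f} ms")
```
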
Source
http://dx.doi.org/10.3389/fnins.2019.01418
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6967406
January 2020

Auditory categorical processing for speech is modulated by inherent musical listening skills.

Neuroreport 2020 01;31(2):162-166

School of Communication Sciences and Disorders.

During successful auditory perception, the human brain classifies diverse acoustic information into meaningful groupings, a process known as categorical perception (CP). Intense auditory experiences (e.g., musical training and language expertise) shape categorical representations necessary for speech identification and novel sound-to-meaning learning, but little is known concerning the role of innate auditory function in CP. Here, we tested whether listeners vary in their intrinsic abilities to categorize complex sounds and individual differences in the underlying auditory brain mechanisms. To this end, we recorded EEGs in individuals without formal music training but who differed in their inherent auditory perceptual abilities (i.e., musicality) as they rapidly categorized sounds along a speech vowel continuum. Behaviorally, individuals with naturally more adept listening skills ("musical sleepers") showed enhanced speech categorization in the form of faster identification. At the neural level, inverse modeling parsed EEG data into different sources to evaluate the contribution of region-specific activity [i.e., auditory cortex (AC)] to categorical neural coding. We found stronger categorical processing in musical sleepers around the timeframe of P2 (~180 ms) in the right AC compared to those with poorer musical listening abilities. Our data show that listeners with naturally more adept auditory skills map sound to meaning more efficiently than their peers, which may aid novel sound learning related to language and music acquisition.
Source
http://dx.doi.org/10.1097/WNR.0000000000001369
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6957750
January 2020

Decoding of single-trial EEG reveals unique states of functional brain connectivity that drive rapid speech categorization decisions.

J Neural Eng 2020 02 5;17(1):016045. Epub 2020 Feb 5.

Department of Electrical and Computer Engineering, University of Memphis, Memphis, TN, United States of America.

Objective: Categorical perception (CP) is an inherent property of speech perception. The response time (RT) of listeners' perceptual speech identification is highly sensitive to individual differences. While the neural correlates of CP have been well studied in terms of the regional contributions of the brain to behavior, functional connectivity patterns that signify individual differences in listeners' speed (RT) for speech categorization are less clear. In this study, we introduce a novel approach to address these questions.

Approach: We applied several computational approaches to the EEG, including graph mining, machine learning (i.e., support vector machine), and stability selection to investigate the unique brain states (functional neural connectivity) that predict the speed of listeners' behavioral decisions.

Main Results: We infer that (i) the listeners' perceptual speed is directly related to dynamic variations in their brain connectomics, (ii) global network assortativity and efficiency distinguished fast, medium, and slow RTs, (iii) the functional network underlying speeded decisions increases in negative assortativity (i.e., became disassortative) for slower RTs, (iv) slower categorical speech decisions cause excessive use of neural resources and more aberrant information flow within the CP circuitry, (v) slower responders tended to utilize functional brain networks excessively (or inappropriately) whereas fast responders (with lower global efficiency) utilized the same neural pathways but with more restricted organization.
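
The two network metrics named in (ii), assortativity and global efficiency, can be computed with standard graph tools once a functional-connectivity matrix is thresholded into a graph. The sketch below uses networkx on a simulated matrix; the threshold and graph construction are assumptions, and the study's graph-mining pipeline is more involved.

```python
# Hedged sketch: degree assortativity and global efficiency of a thresholded
# functional-connectivity graph. The connectivity matrix here is random.
import numpy as np
import networkx as nx

rng = np.random.default_rng(5)
n_rois = 68
conn = np.abs(rng.standard_normal((n_rois, n_rois)))
conn = (conn + conn.T) / 2                      # symmetrize
np.fill_diagonal(conn, 0)

threshold = np.percentile(conn, 90)             # keep the strongest ~10% of edges (assumed)
adj = (conn >= threshold).astype(int)
G = nx.from_numpy_array(adj)

assortativity = nx.degree_assortativity_coefficient(G)
efficiency = nx.global_efficiency(G)
print(f"assortativity = {assortativity:.3f}, global efficiency = {efficiency:.3f}")
```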

Significance: Findings show that neural classifiers (SVM) coupled with stability selection correctly classify behavioral RTs from functional connectivity alone with over 92% accuracy (AUC = 0.9). Our results corroborate previous studies by supporting the engagement of similar temporal (STG), parietal, motor, and prefrontal regions in CP using an entirely data-driven approach.
Source
http://dx.doi.org/10.1088/1741-2552/ab6040
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7004853
February 2020

Auditory-frontal Channeling in α and β Bands is Altered by Age-related Hearing Loss and Relates to Speech Perception in Noise.

Neuroscience 2019 12 6;423:18-28. Epub 2019 Nov 6.

School of Communication Sciences & Disorders, University of Memphis, Memphis, TN, USA; Institute for Intelligent Systems, University of Memphis, Memphis, TN, USA; University of Tennessee Health Sciences Center, Department of Anatomy and Neurobiology, Memphis, TN, USA.

Difficulty understanding speech-in-noise (SIN) is a pervasive problem faced by older adults, particularly those with hearing loss. Previous studies have identified structural and functional changes in the brain that contribute to older adults' speech perception difficulties. Yet, many of these studies use neuroimaging techniques that evaluate only gross activation in isolated brain regions. Neural oscillations may provide further insight into the processes underlying SIN perception as well as the interaction between auditory cortex and prefrontal linguistic brain regions that mediate complex behaviors. We examined frequency-specific neural oscillations and functional connectivity of the EEG in older adults with and without hearing loss during an active SIN perception task. Brain-behavior correlations revealed listeners who were more resistant to the detrimental effects of noise also demonstrated greater modulation of α phase coherence between clean and noise-degraded speech, suggesting α desynchronization reflects release from inhibition and more flexible allocation of neural resources. Additionally, we found top-down β connectivity between prefrontal and auditory cortices strengthened with poorer hearing thresholds despite minimal behavioral differences. This is consistent with the proposal that linguistic brain areas may be recruited to compensate for impoverished auditory inputs through increased top-down predictions to assist SIN perception. Overall, these results emphasize the importance of top-down signaling in low-frequency brain rhythms that help compensate for hearing-related declines and facilitate efficient SIN processing.
Source
http://dx.doi.org/10.1016/j.neuroscience.2019.10.044
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6900454
December 2019

Frontal cortex selectively overrides auditory processing to bias perception for looming sonic motion.

Brain Res 2020 01 10;1726:146507. Epub 2019 Oct 10.

University of Tennessee Health Sciences Center, Department of Anatomy and Neurobiology, Memphis, TN, USA.

Rising intensity sounds signal approaching objects traveling toward an observer. A variety of species preferentially respond to looming over receding auditory motion, reflecting an evolutionary perceptual bias for recognizing approaching threats. We probed the neural origins of this stark perceptual anisotropy to reveal how the brain creates privilege for auditory looming events. While recording neural activity via electroencephalography (EEG), human listeners rapidly judged whether dynamic (intensity-varying) tones were looming or receding in percept. Behaviorally, listeners responded faster to auditory looms confirming a perceptual bias for approaching signals. EEG source analysis revealed sensory activation localized to primary auditory cortex (PAC) and decision-related activity in prefrontal cortex (PFC) within 200 ms after sound onset followed by additional expansive PFC activation by 500 ms. Notably, early PFC (but not PAC) activity rapidly differentiated looming and receding stimuli and this effect roughly co-occurred with sound arrival in auditory cortex. Brain-behavior correlations revealed an association between PFC neural latencies and listeners' speed of sonic motion judgments. Directed functional connectivity revealed stronger information flow from PFC → PAC during looming vs. receding sounds. Our electrophysiological data reveal a critical, previously undocumented role of prefrontal cortex in judging dynamic sonic motion. Both faster neural bias and a functional override of obligatory sensory processing via selective, directional PFC signaling toward the auditory system establish the perceptual privilege for looming sounds.
Source
http://dx.doi.org/10.1016/j.brainres.2019.146507
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6898789
January 2020

Afferent-efferent connectivity between auditory brainstem and cortex accounts for poorer speech-in-noise comprehension in older adults.

Hear Res 2019 10 27;382:107795. Epub 2019 Aug 27.

Rotman Research Institute-Baycrest Centre for Geriatric Care, Toronto, Ontario, Canada; University of Toronto, Department of Psychology, Toronto, Ontario, Canada; University of Toronto, Institute of Medical Sciences, Toronto, Ontario, Canada.

Speech-in-noise (SIN) comprehension deficits in older adults have been linked to changes in both subcortical and cortical auditory evoked responses. However, older adults' difficulty understanding SIN may also be related to an imbalance in signal transmission (i.e., functional connectivity) between brainstem and auditory cortices. By modeling high-density scalp recordings of speech-evoked responses with sources in brainstem (BS) and bilateral primary auditory cortices (PAC), we show that beyond attenuating neural activity, hearing loss in older adults compromises the transmission of speech information between subcortical and early cortical hubs of the speech network. We found that the strength of afferent BS→PAC neural signaling (but not the reverse efferent flow; PAC→BS) varied with mild declines in hearing acuity and this "bottom-up" functional connectivity robustly predicted older adults' performance in a SIN identification task. Connectivity was also a better predictor of SIN processing than unitary subcortical or cortical responses alone. Our neuroimaging findings suggest that in older adults (i) mild hearing loss differentially reduces neural output at several stages of auditory processing (PAC > BS), (ii) subcortical-cortical connectivity is more sensitive to peripheral hearing loss than top-down (cortical-subcortical) control, and (iii) reduced functional connectivity in afferent auditory pathways plays a significant role in SIN comprehension problems.
Source
http://dx.doi.org/10.1016/j.heares.2019.107795
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6778515
October 2019

Acoustic noise and vision differentially warp the auditory categorization of speech.

J Acoust Soc Am 2019 07;146(1):60

School of Communication Sciences & Disorders, University of Memphis, 4055 North Park Loop, Memphis, Tennessee 38152, USA.

Speech perception requires grouping acoustic information into meaningful linguistic-phonetic units via categorical perception (CP). Beyond shrinking observers' perceptual space, CP might aid degraded speech perception if categories are more resistant to noise than surface acoustic features. Combining audiovisual (AV) cues also enhances speech recognition, particularly in noisy environments. This study investigated the degree to which visual cues from a talker (i.e., mouth movements) aid speech categorization amidst noise interference by measuring participants' identification of clear and noisy speech (0 dB signal-to-noise ratio) presented in auditory-only or combined AV modalities (i.e., A, A+noise, AV, AV+noise conditions). Auditory noise expectedly weakened (i.e., shallower identification slopes) and slowed speech categorization. Interestingly, additional viseme cues largely counteracted noise-related decrements in performance and stabilized classification speeds in both clear and noise conditions suggesting more precise acoustic-phonetic representations with multisensory information. Results are parsimoniously described under a signal detection theory framework and by a reduction (visual cues) and increase (noise) in the precision of perceptual object representation, which were not due to lapses of attention or guessing. Collectively, findings show that (i) mapping sounds to categories aids speech perception in "cocktail party" environments; (ii) visual cues help lattice formation of auditory-phonetic categories to enhance and refine speech identification.
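
The identification-slope measure referenced above ("shallower identification slopes") is typically obtained by fitting a logistic psychometric function to percent-identification along the continuum. The sketch below fits such a function to invented clear and noisy data points; the continuum steps and response proportions are assumptions, not the study's data.

```python
# Hedged sketch: fit a 2-parameter logistic psychometric function and compare
# slopes for clear vs. noise-masked identification. Data points are invented.
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, x0, k):
    """Two-parameter logistic: x0 = category boundary, k = slope."""
    return 1.0 / (1.0 + np.exp(-k * (x - x0)))

steps = np.arange(1, 8)                           # 7-step continuum (assumed)
p_clear = np.array([0.02, 0.05, 0.15, 0.50, 0.85, 0.95, 0.98])
p_noise = np.array([0.10, 0.18, 0.30, 0.50, 0.70, 0.82, 0.90])

(b_clear, k_clear), _ = curve_fit(logistic, steps, p_clear, p0=[4, 1])
(b_noise, k_noise), _ = curve_fit(logistic, steps, p_noise, p0=[4, 1])
print(f"slope clear = {k_clear:.2f}, slope in noise = {k_noise:.2f}")  # noise -> shallower slope
```
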
Source
http://dx.doi.org/10.1121/1.5114822
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6786888
July 2019

Age-related hearing loss increases full-brain connectivity while reversing directed signaling within the dorsal-ventral pathway for speech.

Brain Struct Funct 2019 Nov 25;224(8):2661-2676. Epub 2019 Jul 25.

Rotman Research Institute-Baycrest Centre for Geriatric Care, Toronto, ON, Canada.

Speech comprehension difficulties are ubiquitous to aging and hearing loss, particularly in noisy environments. Older adults' poorer speech-in-noise (SIN) comprehension has been related to abnormal neural representations within various nodes (regions) of the speech network, but how senescent changes in hearing alter the transmission of brain signals remains unspecified. We measured electroencephalograms in older adults with and without mild hearing loss during a SIN identification task. Using functional connectivity and graph-theoretic analyses, we show that hearing-impaired (HI) listeners have more extended (less integrated) communication pathways and less efficient information exchange among widespread brain regions (larger network eccentricity) than their normal-hearing (NH) peers. Parameter optimized support vector machine classifiers applied to EEG connectivity data showed hearing status could be decoded (> 85% accuracy) solely using network-level descriptions of brain activity, but classification was particularly robust using left hemisphere connections. Notably, we found a reversal in directed neural signaling in left hemisphere dependent on hearing status among specific connections within the dorsal-ventral speech pathways. NH listeners showed an overall net "bottom-up" signaling directed from auditory cortex (A1) to inferior frontal gyrus (IFG; Broca's area), whereas the HI group showed the reverse signal (i.e., "top-down" Broca's → A1). A similar flow reversal was noted between left IFG and motor cortex. Our full-brain connectivity results demonstrate that even mild forms of hearing loss alter how the brain routes information within the auditory-linguistic-motor loop.
Source
http://dx.doi.org/10.1007/s00429-019-01922-9
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6778722
November 2019

Plasticity in auditory categorization is supported by differential engagement of the auditory-linguistic network.

Neuroimage 2019 11 13;201:116022. Epub 2019 Jul 13.

Institute for Intelligent Systems, University of Memphis, Memphis, TN, USA; Department of Psychology, University of Memphis, Memphis, TN, USA; Department of Mathematical Sciences, University of Memphis, Memphis, TN, USA.

To construct our perceptual world, the brain categorizes variable sensory cues into behaviorally-relevant groupings. Categorical representations are apparent within a distributed fronto-temporo-parietal brain network but how this neural circuitry is shaped by experience remains undefined. Here, we asked whether speech and music categories might be formed within different auditory-linguistic brain regions depending on listeners' auditory expertise. We recorded EEG in highly skilled (musicians) vs. less experienced (nonmusicians) perceivers as they rapidly categorized speech and musical sounds. Musicians showed perceptual enhancements across domains, yet source EEG data revealed a double dissociation in the neurobiological mechanisms supporting categorization between groups. Whereas musicians coded categories in primary auditory cortex (PAC), nonmusicians recruited non-auditory regions (e.g., inferior frontal gyrus, IFG) to generate category-level information. Functional connectivity confirmed nonmusicians' increased left IFG involvement reflects stronger routing of signal from PAC directed to IFG, presumably because sensory coding is insufficient to construct categories in less experienced listeners. Our findings establish auditory experience modulates specific engagement and inter-regional communication in the auditory-linguistic network supporting categorical perception. Whereas early canonical PAC representations are sufficient to generate categories in highly trained ears, less experienced perceivers broadcast information downstream to higher-order linguistic brain areas (IFG) to construct abstract sound labels.
Source
http://dx.doi.org/10.1016/j.neuroimage.2019.116022
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6765438
November 2019

Psychobiological Responses Reveal Audiovisual Noise Differentially Challenges Speech Recognition.

Ear Hear 2020 Mar/Apr;41(2):268-277

School of Communication Sciences and Disorders, University of Memphis, Memphis, Tennessee, USA.

Objectives: In noisy environments, listeners benefit from both hearing and seeing a talker, demonstrating audiovisual (AV) cues enhance speech-in-noise (SIN) recognition. Here, we examined the relative contribution of auditory and visual cues to SIN perception and the strategies used by listeners to decipher speech in noise interference(s).

Design: Normal-hearing listeners (n = 22) performed an open-set speech recognition task while viewing audiovisual TIMIT sentences presented under different combinations of signal degradation including visual (AVn), audio (AnV), or multimodal (AnVn) noise. Acoustic and visual noises were matched in physical signal-to-noise ratio. Eyetracking monitored participants' gaze to different parts of a talker's face during SIN perception.

Results: As expected, behavioral performance for clean sentence recognition was better for A-only and AV compared to V-only speech. Similarly, with noise in the auditory channel (AnV and AnVn speech), performance was aided by the addition of visual cues of the talker regardless of whether the visual channel contained noise, confirming a multimodal benefit to SIN recognition. The addition of visual noise (AVn) obscuring the talker's face had little effect on speech recognition by itself. Listeners' eye gaze fixations were biased toward the eyes (decreased at the mouth) whenever the auditory channel was compromised. Fixating on the eyes was negatively associated with SIN recognition performance. Eye gazes on the mouth versus eyes of the face also depended on the gender of the talker.

Conclusions: Collectively, results suggest listeners (1) depend heavily on the auditory over visual channel when seeing and hearing speech and (2) alter their visual strategy from viewing the mouth to viewing the eyes of a talker with signal degradations, which negatively affects speech perception.
Source
http://dx.doi.org/10.1097/AUD.0000000000000755
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6939137
July 2019

Acoustic Correlates and Adult Perceptions of Distress in Infant Speech-Like Vocalizations and Cries.

Front Psychol 2019 29;10:1154. Epub 2019 May 29.

School of Communication Sciences and Disorders, The University of Memphis, Memphis, TN, United States.

Prior research has not evaluated acoustic features contributing to perception of human infant vocal distress or lack thereof on a continuum. The present research evaluates perception of infant vocalizations along a continuum ranging from the most prototypical intensely distressful cry sounds ("wails") to the most prototypical of infant sounds that typically express no distress (non-distress "vocants"). Wails are deemed little if at all related to speech while vocants are taken to be clear precursors to speech. We selected prototypical exemplars of utterances representing the whole continuum from 0 and 1 month-olds. In this initial study of the continuum, our goals are to determine (1) listener agreement on level of vocal distress across the continuum, (2) acoustic parameters predicting ratings of distress, (3) the extent to which individual listeners maintain or change their acoustic criteria for distress judgments across the study, (4) the extent to which different listeners use similar or different acoustic criteria to make judgments, and (5) the role of short-term experience among the listeners in judgments of infant vocalization distress. Results indicated that (1) both inter-rater and intra-rater listener agreement on degree of vocal distress was high, (2) the best predictors of vocal distress were number of vibratory regimes within utterances, utterance duration, spectral ratio (spectral concentration) in vibratory regimes within utterances, and mean pitch, (3) individual listeners significantly modified their acoustic criteria for distress judgments across the 10 trial blocks, (4) different listeners, while showing overall similarities in ratings of the 42 stimuli, also showed significant differences in acoustic criteria used in assigning the ratings of vocal distress, and (5) listeners who were both experienced and inexperienced in infant vocalizations coding showed high agreement in rating level of distress, but differed in the extent to which they relied on the different acoustic cues in making the ratings. The study provides clearer characterization of vocal distress expression in infants based on acoustic parameters and a new perspective on active adult perception of infant vocalizations. The results also highlight the importance of vibratory regime segmentation and analysis in acoustically based research on infant vocalizations and their perception.
Source
http://dx.doi.org/10.3389/fpsyg.2019.01154
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6548812
May 2019

Predicting Speech Recognition Using the Speech Intelligibility Index and Other Variables for Cochlear Implant Users.

J Speech Lang Hear Res 2019 05;62(5):1517-1531

School of Communication Sciences and Disorders, University of Memphis, TN.

Purpose: Although the speech intelligibility index (SII) has been widely applied in the field of audiology and other related areas, application of this metric to cochlear implants (CIs) has yet to be investigated. In this study, SIIs for CI users were calculated to investigate whether the SII could be an effective tool for predicting speech perception performance in a population with CI.

Method: Fifteen pre- and postlingually deafened adults with CI participated. Speech recognition scores were measured using the AzBio sentence lists. CI users also completed questionnaires and performed psychoacoustic (spectral and temporal resolution) and cognitive function (digit span) tests. Obtained SIIs were compared with predicted SIIs using a transfer function curve. Correlation and regression analyses were conducted on perceptual and demographic predictor variables to investigate the association between these factors and speech perception performance.

Result: Because of the considerably poor hearing and large individual variability in performance, the SII did not predict speech performance for this CI group using the traditional calculation. However, new SII models were developed incorporating predictive factors, which improved the accuracy of SII predictions in listeners with CI.

Conclusion: Conventional SII models are not appropriate for predicting speech perception scores for CI users. Demographic variables (aided audibility and duration of deafness) and perceptual-cognitive skills (gap detection and auditory digit span outcomes) are needed to improve the use of the SII for listeners with CI. Future studies are needed to improve our CI-corrected SII model by considering additional predictive factors. Supplemental Material: https://doi.org/10.23641/asha.8057003.
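
For readers unfamiliar with the SII, the core computation the abstract builds on is a sum of band-importance weights multiplied by band audibility. The sketch below follows the simplified ANSI S3.5-style form with made-up octave-band weights and levels; it is not the CI-corrected model developed in the study.

```python
# Hedged sketch: simplified SII = sum over bands of (importance x audibility),
# with audibility taken as the clipped (SNR + 15)/30 band-audibility function.
import numpy as np

# assumed octave-band centre frequencies (Hz) and made-up importance weights summing to 1
bands = np.array([250, 500, 1000, 2000, 4000, 8000])
importance = np.array([0.10, 0.15, 0.25, 0.25, 0.15, 0.10])

speech_spectrum = np.array([55, 55, 50, 45, 40, 35])   # speech band levels (dB SPL), assumed
noise_spectrum = np.array([45, 45, 45, 45, 45, 45])    # masking noise levels (dB SPL), assumed

snr = speech_spectrum - noise_spectrum
audibility = np.clip((snr + 15) / 30, 0, 1)            # proportion of speech dynamic range audible

sii = float(np.sum(importance * audibility))
print(f"SII = {sii:.2f}")
```
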
Source
http://dx.doi.org/10.1044/2018_JSLHR-H-18-0303
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6808321
May 2019

A Single-Channel EEG-Based Approach to Detect Mild Cognitive Impairment via Speech-Evoked Brain Responses.

IEEE Trans Neural Syst Rehabil Eng 2019 05 18;27(5):1063-1070. Epub 2019 Apr 18.

Mild cognitive impairment (MCI) is the preliminary stage of dementia, which may lead to Alzheimer's disease (AD) in elderly people. Therefore, early detection of MCI has the potential to minimize the risk of AD by ensuring the proper mental health care before it is too late. In this paper, we demonstrate a single-channel EEG-based MCI detection method, which is cost-effective and portable, and thus suitable for regular home-based patient monitoring. We collected the scalp EEG data from 23 subjects, while they were stimulated with five auditory speech signals. The cognitive state of the subjects was evaluated by the Montreal Cognitive Assessment (MoCA). We extracted 590 features from the event-related potential (ERP) of the collected EEG signals, which included time and spectral domain characteristics of the response. The top 25 features, ranked by the random forest method, were used for classification models to identify subjects with MCI. Robustness of our model was tested using leave-one-out cross-validation while training the classifiers. Best results (leave-one-out cross-validation accuracy 87.9%, sensitivity 84.8%, specificity 95%, and F-score 85%) were obtained using the support vector machine (SVM) method with a radial basis function (RBF) kernel (sigma = 10/cost = 10). Similar performances were also observed with logistic regression (LR), further validating the results. Our results suggest that single-channel EEG could provide a robust biomarker for early detection of MCI.
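
The classification pipeline outlined above (random-forest feature ranking, top-25 features, RBF-kernel SVM, leave-one-out cross-validation) can be sketched as follows. Subject count and feature dimensionality mirror the abstract, but the data, labels, and hyperparameters are simulated stand-ins.

```python
# Hedged sketch: rank ERP features with a random forest, keep the top 25, then
# evaluate an RBF-kernel SVM with leave-one-out cross-validation. Simulated data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(6)
n_subjects, n_features = 23, 590               # 23 subjects, 590 ERP features (per the abstract)
X = rng.standard_normal((n_subjects, n_features))
y = rng.integers(0, 2, n_subjects)             # 0 = control, 1 = MCI (simulated labels)

rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, y)
top25 = np.argsort(rf.feature_importances_)[::-1][:25]   # top 25 ranked features

svm = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10, gamma="scale"))
acc = cross_val_score(svm, X[:, top25], y, cv=LeaveOneOut()).mean()
print(f"leave-one-out accuracy: {acc:.2f}")
```
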
Source
http://dx.doi.org/10.1109/TNSRE.2019.2911970
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6554026
May 2019

Linguistic, perceptual, and cognitive factors underlying musicians' benefits in noise-degraded speech perception.

Hear Res 2019 06 29;377:189-195. Epub 2019 Mar 29.

School of Communication Sciences & Disorders, University of Memphis, Memphis, TN, USA; Institute for Intelligent Systems, University of Memphis, Memphis, TN, USA; University of Tennessee Health Sciences Center, Department of Anatomy and Neurobiology, Memphis, TN, USA. Electronic address:

Previous studies have reported better speech-in-noise (SIN) recognition in musicians relative to nonmusicians while others have failed to observe this "musician SIN advantage." Here, we aimed to clarify equivocal findings and determine the most relevant perceptual and cognitive factors that do and do not account for musicians' benefits in SIN processing. We measured behavioral performance in musicians and nonmusicians on a battery of SIN recognition, auditory backward masking (a marker of attention), fluid intelligence (IQ), and working memory tasks. We found that musicians outperformed nonmusicians in SIN recognition but also demonstrated better performance in IQ, working memory, and attention. SIN advantages were restricted to more complex speech tasks featuring sentence-level recognition with speech-on-speech masking (i.e., QuickSIN) whereas no group differences were observed in non-speech simultaneous (noise-on-tone) masking. This suggests musicians' advantage is limited to cases where the noise interference is linguistic in nature. Correlations showed SIN scores were associated with working memory, reinforcing the importance of general cognition to degraded speech perception. Lastly, listeners' years of music training predicted auditory attention scores, working memory skills, general fluid intelligence, and SIN perception (i.e., QuickSIN scores), implying that extensive musical training enhances perceptual and cognitive skills. Overall, our results suggest (i) enhanced SIN recognition in musicians is due to improved parsing of competing linguistic signals rather than signal-in-noise extraction, per se, and (ii) cognitive factors (working memory, attention, IQ) at least partially drive musicians' SIN advantages.
Source
http://dx.doi.org/10.1016/j.heares.2019.03.021
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6511496
June 2019

Music and Visual Art Training Modulate Brain Activity in Older Adults.

Front Neurosci 2019 8;13:182. Epub 2019 Mar 8.

Digital Health Hub, School of Engineering Science, Simon Fraser University, Surrey, BC, Canada.

Cognitive decline is an unavoidable aspect of aging that impacts important behavioral and cognitive skills. Training programs can improve cognition, yet precise characterization of the psychological and neural underpinnings supporting different training programs is lacking. Here, we assessed the effect and maintenance (3-month follow-up) of 3-month music and visual art training programs on neuroelectric brain activity in older adults using a partially randomized intervention design. During the pre-, post-, and follow-up test sessions, participants completed a brief neuropsychological assessment. High-density EEG was measured while participants were presented with auditory oddball paradigms (piano tones, vowels) and during a visual Go/NoGo task. Neither training program significantly impacted psychometric measures, compared to a non-active control group. However, participants enrolled in the music and visual art training programs showed enhancement of auditory evoked responses to piano tones that persisted for up to 3 months after training ended, suggesting robust and long-lasting neuroplastic effects. Both music and visual art training also modulated visual processing during the Go/NoGo task, although these training effects were relatively short-lived and disappeared by the 3-month follow-up. Notably, participants enrolled in the visual art training showed greater changes in visual evoked response (i.e., N1 wave) amplitude distribution than those from the music or control group. Conversely, those enrolled in music showed greater responses associated with inhibitory control over the right frontal scalp areas than those in the visual art group. Our findings reveal a causal relationship between art training (music and visual art) and neuroplastic changes in sensory systems, with some of the neuroplastic changes being specific to the training regimen.
Source
http://dx.doi.org/10.3389/fnins.2019.00182
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6418041
March 2019

Brainstem correlates of concurrent speech identification in adverse listening conditions.

Brain Res 2019 07 20;1714:182-192. Epub 2019 Feb 20.

School of Communication Sciences & Disorders, University of Memphis, Memphis, TN, USA; Institute for Intelligent Systems, University of Memphis, Memphis, TN, USA; University of Tennessee Health Sciences Center, Department of Anatomy and Neurobiology, Memphis, TN, USA. Electronic address:

When two voices compete, listeners can segregate and identify concurrent speech sounds using pitch (fundamental frequency, F0) and timbre (harmonic) cues. Speech perception is also hindered by the signal-to-noise ratio (SNR). How clear and degraded concurrent speech sounds are represented at early, pre-attentive stages of the auditory system is not well understood. To this end, we measured scalp-recorded frequency-following responses (FFR) from the EEG while human listeners heard two concurrently presented, steady-state (time-invariant) vowels whose F0 differed by zero or four semitones (ST) presented diotically in either clean (no noise) or noise-degraded (+5 dB SNR) conditions. Listeners also performed a speeded double vowel identification task in which they were required to identify both vowels correctly. Behavioral results showed that speech identification accuracy increased with F0 differences between vowels, and this perceptual F0 benefit was larger for clean compared to noise-degraded (+5 dB SNR) stimuli. Neurophysiological data demonstrated more robust FFR F0 amplitudes for single compared to double vowels and considerably weaker responses in noise. F0 amplitudes showed speech-on-speech masking effects, along with a non-linear constructive interference at 0 ST, and suppression effects at 4 ST. Correlations showed that FFR F0 amplitudes failed to predict listeners' identification accuracy. In contrast, FFR F1 amplitudes were associated with faster reaction times, although this correlation was limited to noise conditions. The limited number of brain-behavior associations suggests subcortical activity mainly reflects exogenous processing rather than perceptual correlates of concurrent speech perception. Collectively, our results demonstrate that FFRs reflect pre-attentive coding of concurrent auditory stimuli that only weakly predict the success of identifying concurrent speech.
Source
http://dx.doi.org/10.1016/j.brainres.2019.02.025
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6727209
July 2019