Publications by authors named "Isabelle Peretz"

177 Publications

Influence of Background Musical Emotions on Attention in Congenital Amusia.

Front Hum Neurosci 2020 25;14:566841. Epub 2021 Jan 25.

International Laboratory for Brain, Music and Sound Research, University of Montreal, Montreal, QC, Canada.

Congenital amusia in its most common form is a disorder characterized by a deficit in musical pitch processing. Although pitch contributes to conveying emotion in music, the implications of pitch deficits for musical emotion judgements are still under debate. Relatedly, both limited and spared musical emotion recognition has been reported in amusia in conditions where emotion cues were not determined by musical mode or dissonance. Additionally, assumed links between musical abilities and visuo-spatial attention processes need further investigation in congenital amusia. Here, we therefore tested to what extent musical emotions can influence attentional performance. Fifteen congenital amusic adults and fifteen healthy controls matched for age and education were assessed in three attentional conditions: executive control (distractor inhibition), alerting, and orienting (spatial shift), while music expressing either joy, tenderness, sadness, or tension was presented. Visual target detection in amusic participants was within the normal range, for both accuracy and response times, relative to controls. Moreover, in both groups, music exposure produced facilitating effects on selective attention that appeared to be driven by the arousal dimension of the music's emotional content, with faster correct target detection during joyful than during sad music. These findings corroborate the idea that the pitch processing deficits of congenital amusia do not impede other cognitive domains, particularly visual attention. Furthermore, our study uncovers an intact influence of music and its emotional content on the attentional abilities of amusic individuals. The results highlight the domain selectivity of the pitch disorder in congenital amusia, which largely spares the development of visual attention and affective systems.
Source
http://dx.doi.org/10.3389/fnhum.2020.566841
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7868440
January 2021

What Makes Musical Prodigies?

Front Psychol 2020 11;11:566373. Epub 2020 Dec 11.

Department of Psychology, International Laboratory for Brain, Music, and Sound Research, University of Montreal, Montreal, QC, Canada.

Musical prodigies reach exceptionally high levels of achievement before adolescence. Despite longstanding interest in and fascination with musical prodigies, little is known about their psychological profile. Here we assess to what extent practice, intelligence, and personality make musical prodigies a distinct category of musician. Nineteen former or current musical prodigies (aged 12-34) were compared to 35 musicians (aged 14-37) with either an early (mean age 6) or late (mean age 10) start but a similar amount of musical training, and to 16 non-musicians (aged 14-34). All completed a Wechsler IQ test, the Big Five Inventory, the Autism Spectrum Quotient, the Barcelona Music Reward Questionnaire, the Dispositional Flow Scale, and a detailed history of their lifetime music practice. None of the psychological traits distinguished musical prodigies from control musicians or non-musicians, except their propensity to report flow during practice. The other aspects that differentiated musical prodigies from their peers were the intensity of their practice before adolescence and the source of their motivation when they began to play. Thus, practice by itself does not make a prodigy. The results are compatible with multifactorial models of expertise, with prodigies lying at the high end of the continuum. In summary, prodigies are expected to present brain predispositions that facilitate their success in learning an instrument and that could be amplified by early and intense practice occurring at a time when brain plasticity is heightened.
Source
http://dx.doi.org/10.3389/fpsyg.2020.566373
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7759486
December 2020

The singing voice is special: Persistence of superior memory for vocal melodies despite vocal-motor distractions.

Cognition 2020 Nov 24:104514. Epub 2020 Nov 24.

International Laboratory for Brain, Music, and Sound Research (BRAMS), University of Montreal, Montreal, Quebec, Canada.

Vocal melodies sung without lyrics (la la) are remembered better than instrumental melodies. What causes the advantage? One possibility is that vocal music elicits subvocal imitation, which could promote enhanced motor representations of a melody. If this motor interpretation is correct, distracting the motor system during encoding should reduce the memory advantage for vocal over piano melodies. In Experiment 1, participants carried out movements of the mouth (i.e., chew gum) or hand (i.e., squeeze a beanbag) while listening to 24 unfamiliar folk melodies (half vocal, half piano). In a subsequent memory test, they rated the same melodies and 24 timbre-matched foils from '1-Definitely New' to '7-Definitely Old'. There was a memory advantage for vocal over piano melodies with no effect of group and no interaction. In Experiment 2, participants carried out motor activities during encoding more closely related to singing, either silently articulating (la la) or vocalizing without articulating (humming continuously). Once again, there was a significant advantage for vocal melodies with no effect or interaction of group. In Experiment 3, participants audibly whispered (la la) repeatedly during encoding. Again, the voice advantage was present and did not differ appreciably from prior research with no motor task during encoding. However, we observed that the spontaneous phase-locking of whisper rate and musical beat tended to predict enhanced memory for vocal melodies. Altogether the results challenge the notion that subvocal rehearsal of the melody drives enhanced memory for vocal melodies. Instead, the voice may enhance engagement.
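
The recognition test above yields 1-7 old/new ratings for studied melodies and timbre-matched foils. As a rough, hypothetical illustration of how such ratings can be summarized per timbre (this is not the authors' analysis code, and the data below are simulated), one can compute the area under the ROC curve separately for vocal and piano melodies:

```python
# Sketch: estimate recognition sensitivity from 1-7 old/new ratings by
# computing the area under the ROC curve (AUC) separately for vocal and
# piano melodies. All data below are made up for illustration.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def simulate_ratings(n_old=24, n_foils=24, old_shift=1.0):
    """Return (labels, ratings): 1 = old melody, 0 = timbre-matched foil."""
    old = np.clip(np.round(rng.normal(4 + old_shift, 1.2, n_old)), 1, 7)
    new = np.clip(np.round(rng.normal(4 - old_shift, 1.2, n_foils)), 1, 7)
    labels = np.r_[np.ones(n_old), np.zeros(n_foils)]
    return labels, np.r_[old, new]

# Hypothetical advantage: vocal melodies better separated than piano ones.
for timbre, shift in [("vocal", 1.0), ("piano", 0.5)]:
    labels, ratings = simulate_ratings(old_shift=shift)
    print(timbre, "AUC =", round(roc_auc_score(labels, ratings), 2))
```
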
Source
http://dx.doi.org/10.1016/j.cognition.2020.104514
November 2020

Basic timekeeping deficit in the Beat-based Form of Congenital Amusia.

Sci Rep 2020 05 20;10(1):8325. Epub 2020 May 20.

International Laboratory for Brain, Music, and Sound Research, Montréal, Québec, H3C 3J7, Canada.

Humans have the capacity to match the timing of their movements to the beat of music. Yet some individuals show marked difficulties, and the causes of these difficulties remain to be determined. Here, we investigate to what extent a beat synchronization deficit can be traced to basic timekeeping abilities. Eight beat-impaired individuals who were unable to synchronize successfully to the beat of music were compared to matched controls in their ability to tap a self-paced regular beat, to tap to a metronome spanning a large range of tempi (225-1709 ms inter-tone onsets), and to maintain the tempi after the sounds had ceased. Whether paced by a metronome or not, beat-impaired individuals showed poorer regularity (higher variability) in tapping, with an inability to synchronize at a fast tempo (225 ms between beats) or to sustain tapping at slow tempi (above 1 s). Yet, they showed evidence of predictive and flexible processing. We suggest that the beat impairment is due to an imprecise internal timekeeping mechanism.
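
For readers unfamiliar with how tapping regularity is typically quantified, the sketch below illustrates two standard summary measures, the coefficient of variation of inter-tap intervals and the mean asynchrony to a metronome, on simulated tap times; it is an assumption-laden illustration, not the study's analysis code:

```python
# Sketch: quantify tapping regularity from tap times (seconds).
# A higher CV of inter-tap intervals indicates less regular tapping.
import numpy as np

def tapping_stats(tap_times, metronome_ioi=None):
    itis = np.diff(tap_times)                      # inter-tap intervals
    cv = itis.std(ddof=1) / itis.mean()            # variability (regularity)
    out = {"mean_iti": itis.mean(), "cv": cv}
    if metronome_ioi is not None:
        # Signed asynchrony of each tap to the nearest metronome onset.
        onsets = np.arange(0, tap_times.max() + metronome_ioi, metronome_ioi)
        async_ = np.array([t - onsets[np.abs(onsets - t).argmin()]
                           for t in tap_times])
        out["mean_asynchrony"] = async_.mean()
    return out

# Illustrative data: tapping paced by a 600-ms metronome with some jitter.
rng = np.random.default_rng(1)
taps = np.cumsum(rng.normal(0.600, 0.030, 40))
print(tapping_stats(taps, metronome_ioi=0.600))
```
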
Source
http://dx.doi.org/10.1038/s41598-020-65034-9
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7239916
May 2020

Corrigendum to "Ability to process musical pitch is unrelated to the memory advantage for vocal music" [Brain Cogn. 129 (2019) 35-39].

Brain Cogn 2020 07 12;142:105567. Epub 2020 May 12.

International Laboratory for Brain, Music, and Sound Research, Canada; Department of Psychology, Université de Montréal, Montréal, Canada.

Source
http://dx.doi.org/10.1016/j.bandc.2020.105567
July 2020

Decoding Task-Related Functional Brain Imaging Data to Identify Developmental Disorders: The Case of Congenital Amusia.

Front Neurosci 2019 30;13:1165. Epub 2019 Oct 30.

Cognitive Neuroscience Unit, Montreal Neurological Institute, McGill University, Montreal, QC, Canada.

Machine learning classification techniques are frequently applied to structural and resting-state fMRI data to identify brain-based biomarkers for developmental disorders. However, task-related fMRI has rarely been used as a diagnostic tool. Here, we used structural MRI, resting-state connectivity and task-based fMRI data to detect congenital amusia, a pitch-specific developmental disorder. All approaches discriminated amusics from controls in meaningful brain networks at similar levels of accuracy. Interestingly, the classifier outcome was specific to deficit-related neural circuits, as the group classification failed for fMRI data acquired during a verbal task for which amusics were unimpaired. Most importantly, classifier outputs of task-related fMRI data predicted individual behavioral performance on an independent pitch-based task, while this relationship was not observed for structural or resting-state data. These results suggest that task-related imaging data can potentially be used as a powerful diagnostic tool to identify developmental disorders as they allow for the prediction of symptom severity.
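
As a loose illustration of the decoding approach described above (leave-one-subject-out classification plus a brain-behaviour correlation), the following Python sketch uses simulated feature matrices in place of the real fMRI data; the classifier choice, feature counts, and behavioural scores are placeholders, not the study's pipeline:

```python
# Sketch: classify amusics vs. controls from (simulated) task-fMRI features
# with leave-one-subject-out cross-validation, then relate the classifier's
# decision values to an independent behavioural score.
import numpy as np
from scipy.stats import pearsonr
from sklearn.svm import LinearSVC
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(2)
n_per_group, n_features = 15, 200
X = np.vstack([rng.normal(0.0, 1, (n_per_group, n_features)),   # controls
               rng.normal(0.3, 1, (n_per_group, n_features))])  # amusics
y = np.r_[np.zeros(n_per_group), np.ones(n_per_group)]
behav = 1.0 - 0.5 * y + rng.normal(0, 0.2, y.size)  # pitch-task performance

clf = LinearSVC(C=1.0, max_iter=10000)
decision = cross_val_predict(clf, X, y, cv=LeaveOneOut(),
                             method="decision_function")
accuracy = ((decision > 0) == y).mean()
r, p = pearsonr(decision, behav)
print(f"LOSO accuracy = {accuracy:.2f}, r(decision, behaviour) = {r:.2f}")
```
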
Source
http://dx.doi.org/10.3389/fnins.2019.01165
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6831619
October 2019

The effects of short-term musical training on the neural processing of speech-in-noise in older adults.

Brain Cogn 2019 11 9;136:103592. Epub 2019 Aug 9.

Faculty of Medicine, Memorial University of Newfoundland, St. John's, Newfoundland and Labrador A1B 3V6, Canada; International Laboratory for Brain, Music, and Sound Research, Montréal, Québec H3C 3J7, Canada; Centre de Recherche, Institut Universitaire de Gériatrie de Montréal (CRIUGM), Montréal, Québec H3W1W4, Canada; Aging Research Centre - Newfoundland and Labrador, Memorial University of Newfoundland, Corner Brook, Newfoundland and Labrador A2H5G4, Canada. Electronic address:

Experienced musicians outperform non-musicians in understanding speech-in-noise (SPIN). The benefits of lifelong musicianship endure into older age, where musicians experience smaller declines in their ability to understand speech in noisy environments. However, it is presently unknown whether commencing musical training in old age can also counteract age-related decline in speech perception, and whether such training induces changes in the neural processing of speech. Here, we recruited older adult non-musicians and assigned them to receive a short course of piano or videogame training, or no training. Participants completed two sessions of functional magnetic resonance imaging in which they performed a SPIN task prior to and following training. While we found no direct benefit of musical training on SPIN perception, an exploratory region-of-interest analysis revealed increased cortical responses to speech in the left middle frontal and supramarginal gyri that correlated with changes in SPIN task performance in the group that received music training. These results suggest that short-term musical training in older adults may enhance the neural encoding of speech, with the potential to reduce age-related decline in speech perception.
Source
http://dx.doi.org/10.1016/j.bandc.2019.103592
November 2019

Musical training improves the ability to understand speech-in-noise in older adults.

Neurobiol Aging 2019 09 29;81:102-115. Epub 2019 May 29.

International Laboratory for Brain, Music, and Sound Research, Montréal, Québec, Canada; Département de Psychologie, Université de Montréal, Québec, Canada.

It is well known that hearing abilities decline with age, and one of the most common difficulties reported by older adults is a reduced ability to understand speech in noisy environments. Older adult musicians have an enhanced ability to understand speech in noise, which has been associated with enhanced brain responses related to both speech processing and the deployment of attention; however, the causal impact of music lessons in older adults has not yet been demonstrated. To investigate whether a causal relationship exists between short-term musical training and performance on auditory tests in older adults, and to determine whether musical training can be used to improve hearing in older adult non-musicians, we conducted a longitudinal training study with random assignment. A sample of older adults was randomly assigned to learn to play piano (Music), to learn to play a visuospatially demanding video game (Video), or to serve as a no-contact control (No-contact). After 6 months, the Music group improved their ability to understand a word presented in loud background noise, whereas the other two groups did not. This improvement was related to an increase in positive-going electrical brain activity at fronto-left electrodes 200-1000 ms after the presentation of a word in noise. Source analyses suggest that this activity arose from the left inferior frontal gyrus and other regions involved in the speech-motor system. These findings support the idea that musical training provides a causal benefit to hearing abilities, and suggest that it could serve as a foundation for developing auditory rehabilitation programs for older adults.
Source
http://dx.doi.org/10.1016/j.neurobiolaging.2019.05.015
September 2019

Poor Synchronization to Musical Beat Generalizes to Speech.

Brain Sci 2019 Jul 4;9(7). Epub 2019 Jul 4.

International Laboratory for Brain, Music, and Sound Research, Montreal, QC H3C 3J7, Canada.

The rhythmic nature of speech may recruit entrainment mechanisms in a manner similar to music. In the current study, we tested the hypothesis that individuals who display a severe deficit in synchronizing their taps to a musical beat (called beat-deaf here) would also experience difficulties entraining to speech. The beat-deaf participants and their matched controls were required to align taps with the perceived regularity in the rhythm of naturally spoken, regularly spoken, and sung sentences. The results showed that beat-deaf individuals synchronized their taps less accurately than the control group across conditions. In addition, participants from both groups exhibited more inter-tap variability to natural speech than to regularly spoken and sung sentences. The findings support the idea that acoustic periodicity is a major factor in domain-general entrainment to both music and speech. Therefore, a beat-finding deficit may affect periodic auditory rhythms in general, not just those for music.
Source
http://dx.doi.org/10.3390/brainsci9070157
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6680836
July 2019

Electrophysiological Responses to Emotional Facial Expressions Following a Mild Traumatic Brain Injury.

Brain Sci 2019 Jun 18;9(6). Epub 2019 Jun 18.

Centre for Interdisciplinary Research in Rehabilitation (CRIR), IURDPM, CIUSSS du Centre-Sud-de-l'Île-de-Montréal, Montreal, QC H3S 2J4, Canada.

The present study aimed to measure the neural information processing underlying emotion recognition from facial expressions in adults who had sustained a mild traumatic brain injury (mTBI), as compared to healthy individuals. We measured early (N1, N170) and later (N2) event-related potential (ERP) components during the presentation of fearful, neutral, and happy facial expressions in 10 adults with mTBI and 11 control participants. Findings indicated significant differences between groups, irrespective of emotional expression, at the early attentional stage (N1), which was altered in mTBI. The two groups showed similar perceptual integration of facial features (N170), with greater amplitude for fearful facial expressions in the right hemisphere. At the higher-level emotional discrimination stage (N2), both groups demonstrated preferential processing of fear as compared to happiness and neutrality. These findings suggest reduced early selective attentional processing following mTBI, but no impact on the perceptual and higher-level cognitive processing stages. This study contributes to improving our understanding of attentional versus emotional recognition following a mild TBI.
Source
http://dx.doi.org/10.3390/brainsci9060142
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6627801
June 2019

Decreased risk of falls in patients attending music sessions on an acute geriatric ward: results from a retrospective cohort study.

BMC Complement Altern Med 2019 Mar 28;19(1):76. Epub 2019 Mar 28.

International Laboratory for Brain, Music and Sound Research (BRAMS), Montreal, Quebec, H3C 3J7, Canada.

Background: Music has been shown to improve health and quality of life, and it has been suggested that music may also have an impact on gait stability and fall risk. Yet few studies have exploited music in the hospital setting, and even fewer in the geriatric population. Our objective was to examine the influence of music listening on the risk of falls by comparing the Morse Fall Scale score of patients admitted to a Geriatric Assessment Unit (GAU) who attended music listening sessions with that of patients who did not.

Methods: This was a retrospective cohort study (mean follow-up 13.3 ± 6.8 days) conducted in a GAU at St. Mary's Hospital Center, Montreal. A total of 152 patient charts (mean age 85.7 ± 6.4 years; 88.2% female) were reviewed and included: 61 participants in the group exposed to music listening sessions and 91 in the non-exposed group, matched for age, sex, cause and season of admission, and living situation. One-hour music sessions were provided to the patients by volunteer musicians. The Morse Fall Scale score upon admission and discharge, as well as its variation (change from before to after exposure), were used as outcomes. Age, sex, living situation, reason for admission, season of admission, Mini Mental State Examination score, number of therapeutic classes taken daily upon admission, use of psychoactive drugs upon admission, and length of stay were used as covariates.

Results: The Morse Fall Scale score decreased significantly in the exposed group compared to the non-exposed group (p = 0.025) and represented a small to medium-sized effect, d = 0.395. The multiple linear regression model showed a significant association between the decrease of the Morse Fall Scale score and music exposure (B = - 17.1, p = 0.043).

Conclusion: Participating in music listening sessions was associated with a decreased risk of falls in patients admitted to a GAU. Further controlled research is necessary to confirm these findings and to determine the mechanisms by which music listening impacts fall risk.

Trial Registration: Clinical trial registry: ClinicalTrials.gov . Registration number: NCT03348657 (November 17th, 2017). Retrospectively registered.
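
The Results above describe a multiple linear regression of the change in Morse Fall Scale score on music exposure with covariate adjustment. A minimal sketch of that kind of model is shown below, using made-up data and placeholder variable names (not the study's dataset or exact covariate set):

```python
# Sketch: adjusted linear model of the change in Morse Fall Scale score
# (discharge minus admission) on music-session exposure plus covariates.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 152
df = pd.DataFrame({
    "music": rng.integers(0, 2, n),               # 1 = attended sessions
    "age": rng.normal(85.7, 6.4, n),
    "mmse": rng.normal(22, 4, n),
    "length_of_stay": rng.normal(13.3, 6.8, n),
})
df["delta_morse"] = -15 * df["music"] + rng.normal(0, 20, n)  # toy outcome

model = smf.ols("delta_morse ~ music + age + mmse + length_of_stay",
                data=df).fit()
print(model.params["music"], model.pvalues["music"])
```
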
Source
http://dx.doi.org/10.1186/s12906-019-2484-x
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6437846
March 2019

Comorbidity and cognitive overlap between developmental dyslexia and congenital amusia.

Cogn Neuropsychol 2019 Feb - Mar;36(1-2):1-17. Epub 2019 Feb 20.

Laboratoire de Sciences Cognitives et Psycholinguistique, Département d'Etudes Cognitives, Ecole Normale Supérieure, PSL Research University, EHESS, CNRS, Paris, France.

This study investigated whether there is a co-occurrence between developmental dyslexia and congenital amusia in adults. First, a database of online musical tests on 18,000 participants was analysed. Self-reported dyslexic participants performed significantly lower on melodic skills than matched controls, suggesting a possible link between reading and musical disorders. In order to test this relationship more directly, we evaluated 20 participants diagnosed with dyslexia, 16 participants diagnosed with amusia, and their matched controls, with a whole battery of literacy (reading, fluency, spelling), phonological (verbal working memory, phonological awareness) and musical tests (melody, rhythm and metre perception, incidental memory). Amusia was diagnosed in six (30%) dyslexic participants and reading difficulties were found in four (25%) amusic participants. Thus, the results point to a moderate comorbidity between amusia and dyslexia. Further research will be needed to determine what factors at the neural and/or cognitive levels are responsible for this co-occurrence.
Source
http://dx.doi.org/10.1080/02643294.2019.1578205
December 2019

The co-occurrence of pitch and rhythm disorders in congenital amusia.

Cortex 2019 04 30;113:229-238. Epub 2018 Dec 30.

International Laboratory for Brain, Music, and Sound Research (BRAMS), Montréal, Québec, Canada; Department of Psychology, University of Montreal, Quebec, Canada. Electronic address:

The most studied form of congenital amusia is characterized by a difficulty with detecting pitch anomalies in melodies, also referred to as pitch deafness. Here, we tested for the presence of associated deficits in rhythm processing, beat in particular, in pitch deafness. In Experiment 1, participants performed beat perception and production tasks with musical excerpts of various genres. The results show a beat finding disorder in six of the ten assessed pitch-deaf participants. In order to remove a putative interference of pitch variations with beat extraction, the same participants were tested with percussive rhythms in Experiment 2 and showed a similar impairment. Furthermore, musical pitch and beat processing abilities were correlated. These new results highlight the tight connection between melody and rhythm in music processing that can nevertheless dissociate in some individuals.
Source
http://dx.doi.org/10.1016/j.cortex.2018.11.036
April 2019

Playing Super Mario increases oculomotor inhibition and frontal eye field grey matter in older adults.

Exp Brain Res 2019 Mar 15;237(3):723-733. Epub 2018 Dec 15.

Centre de Recherche en Neuropsychologie et Cognition, University of Montreal, Pavillon Marie-Victorin, 90 Avenue Vincent d'Indy, Montreal, QC, H2V 2S9, Canada.

Aging is associated with cognitive decline and a decreased capacity to inhibit distracting information. Video game training holds promise for increasing inhibitory mechanisms in older adults. In the current study, we tested the impact of 3D-platform video game training on performance in an antisaccade task and on related changes in grey matter within the frontal eye fields (FEFs) of older adults. An experimental group (VID group) engaged in 3D-platform video game training over a period of 6 months, while an active control group was trained with piano lessons (MUS group), and a no-contact control group did not participate in any intervention (CON group). Increased performance in oculomotor inhibition, as measured by the antisaccade task, and increased grey matter in the right FEF were observed only in the VID group. These results demonstrate that 3D-platform video game training can improve inhibitory control, which is known to decline with age.
Source
http://dx.doi.org/10.1007/s00221-018-5453-6
March 2019

Ability to process musical pitch is unrelated to the memory advantage for vocal music.

Brain Cogn 2019 02 3;129:35-39. Epub 2018 Dec 3.

International Laboratory for Brain, Music, and Sound Research, Canada; Department of Psychology, Université de Montréal, Montréal, Canada.

Listeners remember vocal melodies better than instrumental melodies, but the origins of the effect are unclear. One explanation for the 'voice advantage' is that general perceptual mechanisms enhance processing of conspecific signals. An alternative possibility is that the voice, by virtue of its expressiveness in pitch, simply provides more musical information to the listener. Individuals with congenital amusia provide a unique opportunity to disentangle the effects of conspecific status and vocal expressiveness because they cannot readily process subtleties in musical pitch. Forty-one participants whose musical pitch discrimination ability ranged from congenitally amusic to typical were tested. Participants heard vocal and instrumental melodies during an exposure phase, and heard the same melodies intermixed with timbre-matched foils in a recognition phase. Memory was better for vocal than instrumental melodies, but the magnitude of the advantage was unrelated to musical pitch discrimination or memory overall. The voice enhances melodic memory regardless of music perception ability, ruling out the role of pitch expressiveness in the voice advantage. More importantly, listeners across a wide range of musical ability can benefit from the privileged status of the voice.
Source
http://dx.doi.org/10.1016/j.bandc.2018.11.011
February 2019

Specialized neural dynamics for verbal and tonal memory: fMRI evidence in congenital amusia.

Hum Brain Mapp 2019 02 1;40(3):855-867. Epub 2018 Nov 1.

Lyon Neuroscience Research Center, Brain Dynamics and Cognition Team, CNRS, UMR5292, INSERM, U1028, Lyon, France.

Behavioral and neuropsychological studies have suggested that tonal and verbal short-term memory are supported by specialized neural networks. To date, however, neuroimaging investigations have failed to confirm this hypothesis. In this study, we investigated the hypothesis of distinct neural resources for tonal and verbal memory by comparing typical non-musician listeners to individuals with congenital amusia, who exhibit pitch memory impairments with preserved verbal memory. During fMRI, amusics and matched controls performed delayed-match-to-sample tasks with tones and words, and perceptual control tasks with the same stimuli. For tonal maintenance, amusics showed decreased activity in the right auditory cortex, inferior frontal gyrus (IFG), and dorsolateral prefrontal cortex (DLPFC). Moreover, they exhibited reduced right-lateralized functional connectivity between the auditory cortex and the IFG during tonal encoding, and between the IFG and the DLPFC during tonal maintenance. In contrast, amusics showed no difference compared with controls for verbal memory, with activation in the left IFG and left fronto-temporal connectivity. Critically, we observed a group-by-material interaction in right fronto-temporal regions: while amusics recruited these regions less strongly for tonal than for verbal memory, control participants showed the reverse pattern (tonal > verbal). By taking advantage of the rare condition of amusia, our findings suggest specialized cortical systems for tonal and verbal short-term memory in the human brain.
Source
http://dx.doi.org/10.1002/hbm.24416
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6916746
February 2019

Cross-classification of musical and vocal emotions in the auditory cortex.

Ann N Y Acad Sci 2018 May 9. Epub 2018 May 9.

Department of Psychology, International Laboratory for Brain Music and Sound Research, Université de Montréal, Montreal, Canada.

Whether emotions carried by voice and music are processed by the brain using similar mechanisms has long been investigated. Yet neuroimaging studies do not provide a clear picture, mainly due to lack of control over stimuli. Here, we report a functional magnetic resonance imaging (fMRI) study using comparable stimulus material in the voice and music domains-the Montreal Affective Voices and the Musical Emotional Bursts-which include nonverbal short bursts of happiness, fear, sadness, and neutral expressions. We use a multivariate emotion-classification fMRI analysis involving cross-timbre classification as a means of comparing the neural mechanisms involved in processing emotional information in the two domains. We find, for affective stimuli in the violin, clarinet, or voice timbres, that local fMRI patterns in the bilateral auditory cortex and upper premotor regions support above-chance emotion classification when training and testing sets are performed within the same timbre category. More importantly, classifier performance generalized well across timbre in cross-classifying schemes, albeit with a slight accuracy drop when crossing the voice-music boundary, providing evidence for a shared neural code for processing musical and vocal emotions, with possibly a cost for the voice due to its evolutionary significance.
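
The cross-timbre classification scheme described above can be illustrated with a toy example: train an emotion classifier on patterns from one timbre and test it on another. The sketch below uses simulated "voxel" patterns and a generic linear classifier; it is an assumption-laden illustration, not the study's fMRI analysis:

```python
# Sketch: cross-timbre emotion classification -- train on one timbre's
# activity patterns, test on another's. Patterns here are simulated.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
emotions = {"happy": 0, "fear": 1, "sad": 2, "neutral": 3}
n_trials, n_voxels = 40, 150

def simulate_patterns(shared_signal, noise=1.0):
    """One pattern per trial; emotion information is shared across timbres."""
    y = rng.integers(0, 4, n_trials)
    X = shared_signal[y] + rng.normal(0, noise, (n_trials, n_voxels))
    return X, y

signal = rng.normal(0, 0.8, (4, n_voxels))      # emotion-specific code
X_violin, y_violin = simulate_patterns(signal)
X_voice, y_voice = simulate_patterns(signal)

clf = LogisticRegression(max_iter=2000).fit(X_violin, y_violin)
print("cross-timbre accuracy:", round(clf.score(X_voice, y_voice), 2))
print("chance level:", 1 / len(emotions))
```
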
Source
http://dx.doi.org/10.1111/nyas.13666
May 2018

Random Feedback Makes Listeners Tone-Deaf.

Sci Rep 2018 05 8;8(1):7283. Epub 2018 May 8.

International Laboratory for Brain, Music, and Sound Research (BRAMS), 1430 boulevard Mont Royal, Montreal, QC, H2V 2J2, Canada.

The mental representation of pitch structure (tonal knowledge) is a core component of musical experience and is learned implicitly through exposure to music. One theory of congenital amusia (tone deafness) posits that conscious access to tonal knowledge is disrupted, leading to a severe deficit of music cognition. We tested this idea by providing random performance feedback to neurotypical listeners while they listened to melodies for tonal incongruities and had their electrical brain activity monitored. The introduction of random feedback was associated with a reduction of accuracy and confidence, and a suppression of the late positive brain response usually elicited by conscious detection of a tonal violation. These effects mirror the behavioural and neurophysiological profile of amusia. In contrast, random feedback was associated with an increase in the amplitude of the early right anterior negativity, possibly due to heightened attention to the experimental task. This successful simulation of amusia in a normal brain highlights the key role of feedback in learning, and thereby provides a new avenue for the rehabilitation of learning disorders.
Source
http://dx.doi.org/10.1038/s41598-018-25518-1
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC5940714
May 2018

Dancing to "groovy" music enhances the experience of flow.

Ann N Y Acad Sci 2018 May 6. Epub 2018 May 6.

Laboratory for Brain, Music and Sound Research (BRAMS), Montreal, Quebec, Canada.

We investigated whether dancing influences the emotional response to music, compared to when music is listened to in the absence of movement. Forty participants without previous dance training listened to "groovy" and "nongroovy" music excerpts while either dancing or refraining from movement. Participants were also tested while imitating their own dance movements, but in the absence of music as a control condition. Emotion ratings and ratings of flow were collected following each condition. Dance movements were recorded using motion capture. We found that the state of flow was increased specifically during spontaneous dance to groovy excerpts, compared with both still listening and motor imitation. Emotions in the realms of vitality (such as joy and power) and sublimity (such as wonder and nostalgia) were evoked by music in general, whether participants moved or not. Significant correlations were found between the emotional and flow responses to music and whole-body acceleration profiles. Thus, the results highlight a distinct state of flow when dancing, which may be of use to promote well-being and to address certain clinical conditions.
Source
http://dx.doi.org/10.1111/nyas.13644
May 2018

Neurophysiological and Behavioral Differences between Older and Younger Adults When Processing Violations of Tonal Structure in Music.

Front Neurosci 2018 13;12:54. Epub 2018 Feb 13.

International Laboratory for Brain, Music, and Sound Research, Montréal, QC, Canada.

Aging is associated with decline in both cognitive and auditory abilities. However, evidence suggests that music perception is relatively spared, despite relying on auditory and cognitive abilities that tend to decline with age. It is therefore likely that older adults engage compensatory mechanisms which should be evident in the underlying functional neurophysiology related to processing music. In other words, the perception of musical structure would be similar or enhanced in older compared to younger adults, while the underlying functional neurophysiology would be different. The present study aimed to compare the electrophysiological brain responses of younger and older adults to melodic incongruities during a passive and active listening task. Older and younger adults had a similar ability to detect an out-of-tune incongruity (i.e., non-chromatic), while the amplitudes of the ERAN and P600 were reduced in older adults compared to younger adults. On the other hand, out-of-key incongruities (i.e., non-diatonic), were better detected by older adults compared to younger adults, while the ERAN and P600 were comparable between the two age groups. This pattern of results indicates that perception of tonal structure is preserved in older adults, despite age-related neurophysiological changes in how melodic violations are processed.
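
As a generic illustration of how ERP component amplitudes (for instance, a P600-like window) are compared between age groups, the sketch below averages simulated single-electrode waveforms within a time window and runs an independent-samples t-test; the window, sampling rate, and data are assumptions, not the study's parameters:

```python
# Sketch: mean ERP amplitude in a post-stimulus window (e.g., 500-800 ms),
# compared between two groups with an independent-samples t-test.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(7)
sfreq, tmin = 250, -0.2                     # Hz, epoch start (s)
times = tmin + np.arange(300) / sfreq       # 300 samples per epoch

def window_mean(epochs, lo, hi):
    """epochs: (n_subjects, n_times) average waveforms at one electrode."""
    mask = (times >= lo) & (times <= hi)
    return epochs[:, mask].mean(axis=1)

young = rng.normal(2.0, 1.0, (20, times.size))   # simulated microvolt data
older = rng.normal(1.2, 1.0, (20, times.size))
t, p = ttest_ind(window_mean(young, 0.5, 0.8), window_mean(older, 0.5, 0.8))
print(f"t = {t:.2f}, p = {p:.3f}")
```
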
Source
http://dx.doi.org/10.3389/fnins.2018.00054
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC5816823
February 2018

Enhancement of Pleasure during Spontaneous Dance.

Front Hum Neurosci 2017 29;11:572. Epub 2017 Nov 29.

International Laboratory for Brain, Music and Sound Research (BRAMS), Montreal, QC, Canada.

Dancing emphasizes the motor expression of emotional experiences. The bodily expression of emotions can modulate the subjective experience of emotions, as when adopting emotion-specific postures and faces. Thus, dancing potentially offers a ground for emotional coping through emotional enhancement and regulation. Here we investigated the emotional responses to music in individuals without any prior dance training while they either freely danced or refrained from movement. Participants were also tested while imitating their own dance movements but in the absence of music as a control condition. Emotional ratings and cardio-respiratory measures were collected following each condition. Dance movements were recorded using motion capture. We found that emotional valence was increased specifically during spontaneous dance of groovy excerpts, compared to both still listening and motor imitation. Furthermore, parasympathetic-related heart rate variability (HRV) increased during dance compared to motor imitation. Nevertheless, subjective and physiological arousal increased during movement production, regardless of whether participants were dancing or imitating. Significant correlations were found between inter-individual differences in the emotions experienced during dance and whole-body acceleration profiles. The combination of movement and music during dance results in a distinct state characterized by acutely heightened pleasure, which is of potential interest for the use of dance in therapeutic settings.
Source
http://dx.doi.org/10.3389/fnhum.2017.00572
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC5712678
November 2017

Playing Super Mario 64 increases hippocampal grey matter in older adults.

PLoS One 2017 6;12(12):e0187779. Epub 2017 Dec 6.

Department of Psychology, University of Montreal and Institut universitaire de gériatrie de Montréal, QC, Canada.

Maintaining grey matter within the hippocampus is important for healthy cognition. Playing 3D-platform video games has previously been shown to promote grey matter in the hippocampus in younger adults. In the current study, we tested the impact of 3D-platform video game training (i.e., Super Mario 64) on grey matter in the hippocampus, cerebellum, and the dorsolateral prefrontal cortex (DLPFC) of older adults. Older adults who were 55 to 75 years of age were randomized into three groups. The video game experimental group (VID; n = 8) engaged in a 3D-platform video game training over a period of 6 months. Additionally, an active control group took a series of self-directed, computerized music (piano) lessons (MUS; n = 12), while a no-contact control group did not engage in any intervention (CON; n = 13). After training, a within-subject increase in grey matter within the hippocampus was significant only in the VID training group, replicating results observed in younger adults. Active control MUS training did, however, lead to a within-subject increase in the DLPFC, while both the VID and MUS training produced growth in the cerebellum. In contrast, the CON group displayed significant grey matter loss in the hippocampus, cerebellum and the DLPFC.
Source
http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0187779
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC5718432
January 2018

Modulation of electric brain responses evoked by pitch deviants through transcranial direct current stimulation.

Neuropsychologia 2018 01 26;109:63-74. Epub 2017 Nov 26.

Département de Psychologie, Université de Montréal, Québec, Canada; International Laboratory for Brain, Music and Sound Research (BRAMS), Université de Montréal, Québec, Canada; Center of Research on Brain Language and Music (CRBLM), McGill University, Québec, Canada. Electronic address:

Congenital amusia is a neurodevelopmental disorder, characterized by a difficulty detecting pitch deviation that is related to abnormal electrical brain responses. Abnormalities found along the right fronto-temporal pathway between the inferior frontal gyrus (IFG) and the auditory cortex (AC) are the likely neural mechanism responsible for amusia. To investigate the causal role of these regions during the detection of pitch deviants, we applied cathodal (inhibitory) transcranial direct current stimulation (tDCS) over right frontal and right temporal regions during separate testing sessions. We recorded participants' electrical brain activity (EEG) before and after tDCS stimulation while they performed a pitch change detection task. Relative to a sham condition, there was a decrease in P3 amplitude after cathodal stimulation over both frontal and temporal regions compared to pre-stimulation baseline. This decrease was associated with small pitch deviations (6.25 cents), but not large pitch deviations (200 cents). Overall, this demonstrates that using tDCS to disrupt regions around the IFG and AC can induce temporary changes in evoked brain activity when processing pitch deviants. These electrophysiological changes are similar to those observed in amusia and provide causal support for the connection between P3 and fronto-temporal brain regions.
Source
http://dx.doi.org/10.1016/j.neuropsychologia.2017.11.028
January 2018

Feeling the Beat: Bouncing Synchronization to Vibrotactile Music in Hearing and Early Deaf People.

Front Neurosci 2017 12;11:507. Epub 2017 Sep 12.

International Laboratory for Brain, Music, and Sound Research, Montreal, QC, Canada.

The ability to dance relies on the ability to synchronize movements to a perceived musical beat. Typically, beat synchronization is studied with auditory stimuli. However, in many typical social dancing situations, music can also be perceived as vibrations when objects that generate sounds also generate vibrations. This vibrotactile musical perception is of particular relevance for deaf people, who rely on non-auditory sensory information for dancing. In the present study, we investigated beat synchronization to vibrotactile electronic dance music in hearing and deaf people. We tested seven deaf and 14 hearing individuals on their ability to bounce in time with the tempo of vibrotactile stimuli (no sound) delivered through a vibrating platform. The corresponding auditory stimuli (no vibrations) were used in an additional condition in the hearing group. We collected movement data using a camera-based motion capture system and subjected it to a phase-locking analysis to assess synchronization quality. The vast majority of participants were able to precisely time their bounces to the vibrations, with no difference in performance between the two groups. In addition, we found higher performance for the auditory condition compared to the vibrotactile condition in the hearing group. Our results thus show that accurate tactile-motor synchronization in a dance-like context occurs regardless of auditory experience, though auditory-motor synchronization is of superior quality.
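
As an illustration of the phase-locking analysis mentioned above, the sketch below converts event (bounce) times into phases relative to the stimulus period and computes the mean resultant vector length; the data and period are invented for demonstration, and this is not the study's analysis code:

```python
# Sketch: phase-locking of bounce times to a periodic (vibrotactile) beat.
# Each bounce time is converted to a phase within the stimulus period;
# the length of the mean resultant vector indexes synchronization quality.
import numpy as np

def phase_locking(event_times, period):
    phases = 2 * np.pi * ((event_times % period) / period)
    return np.abs(np.mean(np.exp(1j * phases)))    # 0 (none) .. 1 (perfect)

rng = np.random.default_rng(5)
period = 0.5                                       # 120-bpm beat, in seconds
synced = np.arange(40) * period + rng.normal(0, 0.02, 40)   # tight sync
unsynced = np.sort(rng.uniform(0, 20, 40))                   # no sync
print("synchronized:", round(phase_locking(synced, period), 2))
print("random:      ", round(phase_locking(unsynced, period), 2))
```
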
Source
http://dx.doi.org/10.3389/fnins.2017.00507
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC5601036
September 2017

Pre-target neural oscillations predict variability in the detection of small pitch changes.

PLoS One 2017 18;12(5):e0177836. Epub 2017 May 18.

McConnell Brain Imaging Center, Montreal Neurological Institute, McGill University, Montreal, Quebec, Canada.

Pitch discrimination is important for language and music processing. Previous studies indicate that auditory perception depends on pre-target neural activity. However, the pre-target electrophysiological conditions that enable the detection of small pitch changes have so far not been well studied, although they might yield important insights into pitch processing. We used magnetoencephalography (MEG) source imaging to reveal the pre-target effects of successful auditory detection of small pitch deviations from a sequence of standard tones. Participants heard a sequence of four pure tones and had to determine whether the last (target) tone was different from or identical to the first three standard sounds. We found that successful pitch change detection could be predicted from the amplitude of theta (4-8 Hz) oscillatory activity in the right inferior frontal gyrus (IFG) as well as beta (12-30 Hz) oscillatory activity in the right auditory cortex. These findings confirm and extend evidence for the involvement of theta- and beta-band activity in auditory perception.
Source
http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0177836
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC5436812
September 2017

Effect of Age on Attentional Control in Dual-Tasking.

Exp Aging Res 2017 Mar-Apr;43(2):161-177

Research Centre, Institut universitaire de gériatrie de Montréal, Montreal, Quebec, Canada.

Background/Study Context: The age-related differences in divided attention and attentional control have been associated with several negative outcomes later in life. However, numerous questions remain unanswered regarding the nature of these age differences and the role of attentional control abilities in dual-tasking. The aim of this study was to evaluate the sources for age differences in dual-tasking and more specifically: (1) whether they occur because of differences in attentional control skills, or (2) whether the age-related decrement in dual-tasking is due to a general resource reduction that would affect the ability to complete any demanding task.

Methods: In two experiments, young and older adults were required to combine an auditory digit span task and a visuospatial tracking task, for which performance was individually adjusted on each task. In Experiment 1, attentional control skills were measured by instructing participants to deliberately vary attentional priority between the two tasks. In Experiment 2, resource availability was measured by varying the level of difficulty of the visuospatial tracking task in a parametric manner by increasing the speed of the target to be tracked.

Results: Both experiments confirmed the presence of a larger dual-task cost in older adults than in young adults. In Experiment 1, older participants were unable to vary their performance according to task instructions compared with younger adults. Experiment 2 showed that the age-related difference in dual-task cost was not amplified by a variation in difficulty.

Conclusion: A marked age-related difference was found in the ability to control attentional focus in response to task instructions. However, increasing resource demand in a parametric manner does not increase the age-related differences in dual-tasking, suggesting that the difficulties experienced by older adults cannot be entirely accounted for by an increased competition for resources. A reduction in attentional control skills is proposed to account for the divided attention deficit reported in aging.
Source
http://dx.doi.org/10.1080/0361073X.2017.1276377
June 2017

Prevalence of congenital amusia.

Eur J Hum Genet 2017 05 22;25(5):625-630. Epub 2017 Feb 22.

BRAMS Laboratory and Department of Psychology, University of Montreal, Montreal, QC, Canada.

Congenital amusia (commonly known as tone deafness) is a lifelong musical disorder that affects 4% of the population according to a single estimate based on a single test from 1980. Here we present the first large-scale measure of prevalence, with a sample of 20,000 participants, that does not rely on self-referral. On the basis of three objective tests and a questionnaire, we show that (a) the prevalence of congenital amusia is only 1.5%, with slightly more females than males, unlike other developmental disorders where males often predominate; (b) self-disclosure is a reliable index of congenital amusia, which appears to be hereditary, with 46% of first-degree relatives similarly affected; (c) the deficit is not attenuated by musical training; and (d) it emerges in relative isolation from other cognitive disorders, except for spatial orientation problems. Hence, we suggest that congenital amusia is likely to result from genetic variations that affect musical abilities specifically.
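
For orientation, the reported 1.5% prevalence in a sample of 20,000 corresponds to roughly 300 affected participants; a simple normal-approximation confidence interval (a back-of-the-envelope calculation, not a figure reported in the paper) illustrates the precision such a sample affords:

```python
# Sketch: a 1.5% prevalence in a sample of 20,000 corresponds to ~300 cases;
# a simple normal-approximation 95% confidence interval for the proportion.
import math

n = 20_000
p = 0.015                       # reported prevalence
cases = p * n                   # ~300 affected participants
se = math.sqrt(p * (1 - p) / n)
ci = (p - 1.96 * se, p + 1.96 * se)
print(f"{cases:.0f} cases; 95% CI = {ci[0]:.3%} to {ci[1]:.3%}")
```
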
Source
http://dx.doi.org/10.1038/ejhg.2017.15
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC5437896
May 2017

Recording the human brainstem frequency-following-response in the free-field.

J Neurosci Methods 2017 03 7;280:47-53. Epub 2017 Feb 7.

International Laboratory for Brain, Music and Sound Research (BRAMS www.brams.org), Outremont, QC, Canada; Center for Research on Brain, Language and Music (CRBLM crblm.ca), Montreal, QC, Canada; University of Montreal, Psychology Department, Montreal, QC, Canada; Department of Otolaryngology Head & Neck Surgery, McGill University, Montreal, QC, Canada.

Background: The human auditory brainstem frequency-following response (FFR) is an objective measure used to investigate the brainstem's ability to encode sounds. Traditionally, FFRs are recorded under close-field conditions (earphones); free-field stimulation (loudspeaker) has yet to be attempted, although it would broaden the applications of FFRs by making the technique accessible to those who cannot wear inserted transducers. Here we test the feasibility and reliability of measuring speech-evoked brainstem responses in free-field and close-field conditions.

New Method: The FFR was evoked by a 40-ms consonant-vowel (cv) /da/ syllable which was presented in the standard close-field conditions with insert earphones, and in a novel free-field condition via a loudspeaker.

Results: A well-defined FFR was observed for each stimulating method (free or close-field). We show that it is possible and reliable to elicit FFRs from a speaker and that these do not systematically differ from those elicited by conventional earphones.

Comparison With Existing Method: Neural responses were subjected to a comparative within-subjects analysis, using standard measures found in the literature in order to quantify and compare the intrinsic (amplitude, noise, consistency), acoustic (latency, spectral amplitude) and reliability properties (intraclass correlation coefficients and Bland and Altman limits of agreement) of the neural signal.

Conclusions: Reliable FFRs can be elicited using free-field presentation, with acoustic, intrinsic, and reliability properties comparable to those elicited by standard close-field presentation.
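
The Bland-Altman limits of agreement mentioned in the comparison above can be illustrated with a short sketch on simulated paired free-field and close-field values (placeholder numbers, not the study's data):

```python
# Sketch: Bland-Altman limits of agreement between a free-field and a
# close-field measurement of the same response (simulated paired values).
import numpy as np

rng = np.random.default_rng(6)
close_field = rng.normal(0.20, 0.05, 25)             # e.g., FFR amplitude (uV)
free_field = close_field + rng.normal(0.00, 0.02, 25)

diff = free_field - close_field
bias = diff.mean()
loa = (bias - 1.96 * diff.std(ddof=1), bias + 1.96 * diff.std(ddof=1))
print(f"bias = {bias:.3f} uV, 95% limits of agreement = "
      f"{loa[0]:.3f} to {loa[1]:.3f} uV")
```
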
Source
http://dx.doi.org/10.1016/j.jneumeth.2017.01.016
March 2017

Benefits of Music Training for Perception of Emotional Speech Prosody in Deaf Children With Cochlear Implants.

Ear Hear 2017 Jul/Aug;38(4):455-464

Department of Psychology, Ryerson University, Toronto, Ontario, Canada; Department of Otolaryngology, Cochlear Implant Program, The Hospital for Sick Children, Toronto, Ontario, Canada; International Laboratory for Brain, Music and Sound Research, University of Montreal, Montreal, Quebec, Canada; and Toronto Rehabilitation Institute, Toronto, Ontario, Canada.

Objectives: Children who use cochlear implants (CIs) have characteristic pitch processing deficits leading to impairments in music perception and in understanding emotional intention in spoken language. Music training for normal-hearing children has previously been shown to benefit perception of emotional prosody. The purpose of the present study was to assess whether deaf children who use CIs obtain similar benefits from music training. We hypothesized that music training would lead to gains in auditory processing and that these gains would transfer to emotional speech prosody perception.

Design: Study participants were 18 child CI users (ages 6 to 15). Participants received either 6 months of music training (i.e., individualized piano lessons) or 6 months of visual art training (i.e., individualized painting lessons). Measures of music perception and emotional speech prosody perception were obtained pre-, mid-, and post-training. The Montreal Battery for Evaluation of Musical Abilities was used to measure five different aspects of music perception (scale, contour, interval, rhythm, and incidental memory). The emotional speech prosody task required participants to identify the emotional intention of a semantically neutral sentence under audio-only and audiovisual conditions.

Results: Music training led to improved performance on tasks requiring the discrimination of melodic contour and rhythm, as well as incidental memory for melodies. These improvements were predominantly found from mid- to post-training. Critically, music training also improved emotional speech prosody perception. Music training was most advantageous in audio-only conditions. Art training did not lead to the same improvements.

Conclusions: Music training can lead to improvements in perception of music and emotional speech prosody, and thus may be an effective supplementary technique for supporting auditory rehabilitation following cochlear implantation.
Source
http://dx.doi.org/10.1097/AUD.0000000000000402
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC5483983
May 2018

Emotional recognition from dynamic facial, vocal and musical expressions following traumatic brain injury.

Brain Inj 2017 24;31(2):221-229. Epub 2016 Oct 24.

Centre de recherche interdisciplinaire en réadaptation (CRIR) - Centre de réadaptation Lucie-Bruneau (CRLB).

Objectives: To assess emotion recognition from dynamic facial, vocal and musical expressions in sub-groups of adults with traumatic brain injuries (TBI) of different severities and identify possible common underlying mechanisms across domains.

Methods: Forty-one adults participated in this study: 10 with moderate-severe TBI, nine with complicated mild TBI, 11 with uncomplicated mild TBI and 11 healthy controls, who were administered experimental (emotional recognition, valence-arousal) and control tasks (emotional and structural discrimination) for each domain.

Results: Recognition of fearful faces was significantly impaired in moderate-severe and in complicated mild TBI sub-groups, as compared to those with uncomplicated mild TBI and controls. Effect sizes were medium-large. Participants with lower GCS scores performed more poorly when recognizing fearful dynamic facial expressions. Emotion recognition from auditory domains was preserved following TBI, irrespective of severity. All groups performed equally on control tasks, indicating no perceptual disorders. Although emotional recognition from vocal and musical expressions was preserved, no correlation was found across auditory domains.

Conclusions: This preliminary study may contribute to improving comprehension of emotional recognition following TBI. Future studies of larger samples could usefully include measures of functional impacts of recognition deficits for fearful facial expressions. These could help refine interventions for emotional recognition following a brain injury.
Source
http://dx.doi.org/10.1080/02699052.2016.1208846
January 2018