Publications by authors named "Francesco Pavani"

69 Publications

Eye-movement patterns to social and non-social cues in early deaf adults.

Q J Exp Psychol (Hove) 2021 Mar 17:1747021821998511. Epub 2021 Mar 17.

Centre for Mind/Brain Sciences (CIMeC), University of Trento, Rovereto, Italy.

Previous research on covert orienting to the periphery suggested that early profound deaf adults are less susceptible to uninformative gaze-cues, though they are equally or more affected by non-social arrow-cues. The aim of this work was to investigate whether spontaneous eye-movement behaviour helps explain the reduced impact of the social cue in deaf adults. We tracked the gaze of 25 early profound deaf and 25 age-matched hearing observers performing a peripheral discrimination task with uninformative central cues (gaze vs arrow), stimulus-onset asynchrony (250 vs 750 ms), and cue validity (valid vs invalid) as within-subject factors. In both groups, the cue effect on reaction time (RT) was comparable for the two cues, although deaf observers responded significantly more slowly than hearing controls. While the eye-movement patterns of deaf and hearing observers looked similar when the cue was presented in isolation, deaf participants made significantly more eye movements than hearing controls once the discrimination target appeared. Notably, further analysis of eye movements in the deaf group revealed that, independent of cue type, cue validity affected saccade landing position, while saccade latency was not modulated by these factors. Saccade landing position was also strongly related to the magnitude of the validity effect on RT: the greater the difference in saccade landing position between invalid and valid trials, the greater the difference in manual RT between invalid and valid trials. This work suggests that the contribution of overt selection in central cueing of attention is more prominent in deaf adults and helps determine manual performance, irrespective of cue type.
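As an illustration, the validity effect described above can be quantified per participant as the invalid-minus-valid difference in mean manual RT, and likewise for saccade landing position; the reported relation is then a correlation between the two differences across participants. A minimal sketch in Python, assuming hypothetical per-trial data (the variable names and simulated numbers are illustrative, not the authors' actual pipeline):

import numpy as np
from scipy.stats import pearsonr

def validity_effect(values, valid):
    """Invalid-minus-valid difference of per-condition means."""
    values = np.asarray(values)
    valid = np.asarray(valid, dtype=bool)
    return values[~valid].mean() - values[valid].mean()

# Hypothetical per-participant trial data: manual RT (ms), saccade
# landing distance from the target (deg), and a validity flag per trial.
rng = np.random.default_rng(0)
rt_effects, landing_effects = [], []
for _ in range(25):  # e.g. 25 deaf observers
    valid = rng.random(200) < 0.5
    rt = 400 + 30 * ~valid + rng.normal(0, 50, 200)
    landing = 1.0 + 0.5 * ~valid + rng.normal(0, 0.8, 200)
    rt_effects.append(validity_effect(rt, valid))
    landing_effects.append(validity_effect(landing, valid))

# Across participants: larger landing-position effects should accompany
# larger manual RT effects if overt selection shapes manual performance.
r, p = pearsonr(landing_effects, rt_effects)
print(f"r = {r:.2f}, p = {p:.3f}")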
Source
http://dx.doi.org/10.1177/1747021821998511
March 2021

Reaching to sounds in virtual reality: A multisensory-motor approach to promote adaptation to altered auditory cues.

Neuropsychologia 2020 12 29;149:107665. Epub 2020 Oct 29.

IMPACT, Centre de Recherche en Neuroscience Lyon (CRNL), France; Centre for Mind/Brain Sciences (CIMeC), University of Trento, Italy; Department of Psychology and Cognitive Sciences (DiPSCo), University of Trento, Italy.

When localising sounds in space, the brain relies on internal models that specify the correspondence between the auditory input reaching the ears, initial head position and coordinates in external space. These models can be updated throughout life, setting the basis for re-learning spatial hearing abilities in adulthood. In addition, strategic behavioural adjustments allow people to quickly adapt to atypical listening situations. Until recently, however, the potential role of dynamic listening, involving head movements or reaching to sounds, has remained largely overlooked. Here, we exploited visual virtual reality (VR) and real-time kinematic tracking to study the role of active multisensory-motor interactions when hearing individuals adapt to altered binaural cues (one ear plugged and muffed). Participants were immersed in a VR scenario showing 17 virtual speakers at ear level. In each trial, they heard a sound delivered from a real speaker aligned with one of the virtual ones and were instructed either to reach-to-touch the perceived sound source (Reaching group) or to read the label associated with the speaker (Naming group). Participants were free to move their heads during the task and received audio-visual feedback on their performance. Most importantly, they performed the task under binaural or monaural listening. Results show that both groups adapted rapidly to monaural listening, improving sound localisation performance across trials and changing their head-movement behaviour. Reaching to the sounds induced faster and larger sound localisation improvements than merely naming their position. This benefit was linked to progressively wider head movements to explore auditory space, selectively in the Reaching group. In conclusion, reaching to sounds in an immersive visual VR context proved most effective for adapting to altered binaural listening. Head movements played an important role in adaptation, pointing to the importance of dynamic listening when implementing training protocols for improving spatial hearing.
Source
http://dx.doi.org/10.1016/j.neuropsychologia.2020.107665
December 2020

Updating spatial hearing abilities through multisensory and motor cues.

Cognition 2020 11 24;204:104409. Epub 2020 Jul 24.

Centro Interdipartimentale Mente/Cervello (CIMeC), University of Trento, Italy; IMPACT, Centre de Recherche en Neurosciences Lyon (CRNL), France; Department of Psychology and Cognitive Science, University of Trento, Italy.

Spatial hearing relies on a series of mechanisms for associating auditory cues with positions in space. When auditory cues are altered, humans, as well as other animals, can update the way they exploit auditory cues and partially compensate for their spatial hearing difficulties. In two experiments, we simulated monaural listening in hearing adults by temporarily plugging and muffing one ear, to assess the effects of active versus passive training conditions. During active training, participants moved an audio-bracelet attached to their wrist, while continuously attending to the position of the sounds it produced. During passive training, participants received identical acoustic stimulation and performed exactly the same task, but the audio-bracelet was moved by the experimenter. Before and after training, we measured adaptation to monaural listening in three auditory tasks: single sound localization, minimum audible angle (MAA), and spatial and temporal bisection. We also performed the tests twice in an untrained group, which completed the same auditory tasks but received no training. Results showed that participants significantly improved in single sound localization across 3 consecutive days, more so in the active than in the passive training group. This reveals that the benefits of kinesthetic cues are additive with respect to those of paying attention to the position of sounds and/or seeing their positions when updating spatial hearing. The observed adaptation did not generalize to the other auditory spatial tasks (space bisection and MAA), suggesting that partial updating of sound-space correspondences does not extend to all aspects of spatial hearing.
Source
http://dx.doi.org/10.1016/j.cognition.2020.104409
November 2020

Certain, but incorrect: on the relation between subjective certainty and accuracy in sound localisation.

Exp Brain Res 2020 Mar 20;238(3):727-739. Epub 2020 Feb 20.

Centre for Mind/Brain Sciences (CIMeC), University of Trento, Via Angelo Bettini 31, 38068, Rovereto, TN, Italy.

When asked to identify the position of a sound, listeners can report its perceived location as well as their subjective certainty about this spatial judgement. Yet research to date has focused primarily on measures of perceived location (e.g., accuracy and precision of pointing responses), neglecting the phenomenological experience of subjective spatial certainty. The present study investigated: (1) changes in subjective certainty about sound position induced by listening with one ear plugged (simulated monaural listening), compared to typical binaural listening; and (2) the relation between subjective certainty about sound position and localisation accuracy. In two experiments (N = 20 each), participants localised single sounds delivered from one of 60 speakers hidden from view in frontal space. In each trial, they also provided a subjective rating of their spatial certainty about sound position. No feedback on responses was provided. Overall, participants were mostly accurate and certain about sound position in binaural listening, whereas their accuracy and subjective certainty decreased in monaural listening. Interestingly, accuracy and certainty dissociated within single trials during monaural listening: in some trials participants were certain but incorrect, in others they were uncertain but correct. Furthermore, unlike accuracy, subjective certainty rapidly increased as a function of time during the monaural listening block. Finally, subjective certainty changed as a function of the perceived location of the sound source. These novel findings reveal that listeners quickly update their subjective confidence about sound position when they experience an altered listening condition, even in the absence of feedback. Furthermore, they document a dissociation between accuracy and subjective certainty when mapping auditory input to space.
Source
http://dx.doi.org/10.1007/s00221-020-05748-4
March 2020

Increased overt attention to objects in early deaf adults: An eye-tracking study of complex naturalistic scenes.

Cognition 2020 01 9;194:104061. Epub 2019 Sep 9.

Center for Mind Brain Sciences, CIMeC, University of Trento, Italy; Dep. of Psychology and Cognitive Science, University of Trento, Italy; Integrative Multisensory Perception Action & Cognition Team, CRNL, France.

The study of selective attention in people with profound deafness has repeatedly documented enhanced attention to the peripheral regions of the visual field compared to hearing controls. This finding emerged from covert attention studies (i.e., without eye movements) involving extremely simplified visual scenes comprising few visual items. In this study, we tested whether this key finding also extends to overt attention, using a more ecologically valid experimental context in which complex naturalistic images were presented for 3 s. In Experiment 1 (N = 35), all images contained a single central object superimposed on a congruent naturalistic background (e.g., a tiger in the woods). At the end of the visual exploration phase, an incidental memory task probed the participants' recollection of the central objects and image backgrounds they had seen. Results showed that hearing controls explored and remembered the image backgrounds more than deaf participants, who lingered on the central object to a greater extent. In Experiment 2, we aimed to disentangle whether this behaviour of deaf participants reflected a bias in overt space-based attention towards the centre of the image or, instead, enhanced object-centred attention. We tested new participants (N = 42) in the visual exploration task, adding images with lateralized objects, as well as images with multiple objects or with no object at all. Results confirmed increased exploration of objects in deaf participants. Taken together, our novel findings show the limits of the well-known peripheral attention bias in deaf people and suggest that visual object-centred attention may also change after prolonged auditory deprivation.
Source
http://dx.doi.org/10.1016/j.cognition.2019.104061
January 2020

Spatial Cues Influence Time Estimations in Deaf Individuals.

iScience 2019 Sep 31;19:369-377. Epub 2019 Jul 31.

U-VIP Unit for Visually Impaired People, Fondazione Istituto Italiano di Tecnologia, Via E. Melen, 83, 16152 Genova, Italy.

Recent studies have reported a strong interaction between spatial and temporal representations when visual experience is missing: blind people use temporal representations of events to represent spatial metrics. Given the superiority of audition in time perception, we hypothesized that when audition is not available complex temporal representations could be impaired, and spatial representations of events could be used to build temporal metrics. To test this hypothesis, deaf and hearing subjects were tested with a visual temporal task in which conflicting and non-conflicting spatiotemporal information was delivered. As predicted, we observed a strong deficit in deaf participants when only temporal cues were useful and space was uninformative with respect to time. However, the deficit disappeared when coherent spatiotemporal cues were presented and increased for conflicting spatiotemporal stimuli. These results highlight that spatial cues influence time estimations in deaf participants, suggesting that deaf individuals use spatial information to infer temporal environmental coordinates.
Source
http://dx.doi.org/10.1016/j.isci.2019.07.042
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6702436
September 2019

Environmental Learning of Social Cues: Evidence From Enhanced Gaze Cueing in Deaf Children.

Child Dev 2019 09 12;90(5):1525-1534. Epub 2019 Jul 12.

University of Trento.

The susceptibility to gaze cueing in deaf children aged 7-14 years (N = 16) was tested using a nonlinguistic task. Participants performed a peripheral shape-discrimination task, while uninformative central gaze cues validly or invalidly cued the location of the target. To assess the role of sign language experience and bilingualism in deaf participants, three groups of age-matched hearing children were recruited: bimodal bilinguals (a vocal and a sign language, N = 19), unimodal bilinguals (two vocal languages, N = 17), and monolinguals (N = 14). Although all groups showed a gaze-cueing effect and were faster to respond to validly than to invalidly cued targets, this effect was twice as large in deaf participants. This result shows that atypical sensory experience can tune the saliency of a fundamental social cue.
Source
http://dx.doi.org/10.1111/cdev.13284
September 2019

Interactions between egocentric and allocentric spatial coding of sounds revealed by a multisensory learning paradigm.

Sci Rep 2019 05 27;9(1):7892. Epub 2019 May 27.

Center for Mind/Brain Sciences (CIMeC), University of Trento, Trento, Italy.

Although sound position is initially coded head-centred (egocentric coordinates), our brain can also represent sounds relative to one another (allocentric coordinates). Whether reference frames for spatial hearing are independent or interact has remained largely unexplored. Here we developed a new allocentric spatial-hearing training procedure and tested whether it can improve egocentric sound-localisation performance in normal-hearing adults listening with one ear plugged. Two groups of participants (N = 15 each) performed an egocentric sound-localisation task (pointing to a syllable), in monaural listening, before and after 4 days of multisensory training on triplets of white-noise bursts paired with occasional visual feedback. Critically, one group performed an allocentric task (auditory bisection task), whereas the other processed the same stimuli to perform an egocentric task (pointing to a designated sound of the triplet). Unlike most previous work, we also tested a no-training group (N = 15). Egocentric sound-localisation abilities in the horizontal plane improved for all groups in the space ipsilateral to the ear plug. This unexpected finding highlights the importance of including a no-training group when studying sound-localisation re-learning. Yet performance changes were qualitatively different in trained compared to untrained participants, providing initial evidence that allocentric and multisensory procedures may prove useful when aiming to promote sound-localisation re-learning.
Source
http://dx.doi.org/10.1038/s41598-019-44267-3
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6536515
May 2019

The role of eye movements in manual responses to social and nonsocial cues.

Atten Percept Psychophys 2019 Jul;81(5):1236-1252

Center for Mind/Brain Sciences (CIMeC), University of Trento, Rovereto, Italy.

Gaze and arrow cues cause covert attention shifts even when they are uninformative. Nonetheless, it is unclear to what extent oculomotor behavior influences manual responses to social and nonsocial stimuli. In two experiments, we tracked the gaze of participants during a cueing task with nonpredictive gaze and arrow cues. In Experiment 1, the discrimination task was easy and eye movements were not necessary, whereas in Experiment 2 they were instrumental in identifying the target. Validity effects on manual response time (RT) were similar for the two cues in both experiments, though in the presence of eye movements observers were overall slower to respond to the arrow cue compared with the gaze cue. Cue direction had an effect on saccadic performance before the discrimination target was presented and throughout the duration of the trial. Furthermore, we found evidence of a distinct impact of the type of cue on different oculomotor components. While saccade latencies were affected by the type of cue, both before and after target onset, saccade landing positions were not. Critically, the manual validity effect was predicted by the landing position of the initial eye movement. This work suggests that the relationship between eye movements and attention is not straightforward. In the presence of overt selection, saccade latency related to the overall speed of the manual response, while the landing position of eye movements was closely related to manual performance in response to the different cues.
Source
http://dx.doi.org/10.3758/s13414-019-01669-9
July 2019

Thinner than yourself: self-serving bias in body size estimation.

Psychol Res 2020 Jun 22;84(4):932-949. Epub 2018 Nov 22.

Center for Mind/Brain Sciences (CIMeC), University of Trento, Corso Bettini 31, 38068, Rovereto (Trento), Italy.

The self-serving bias is the tendency to consider oneself in unrealistically positive terms. This phenomenon has been documented for body attractiveness, but it remains unclear to what extent it can also emerge for the perception of one's own body size. In the present study, we examined this issue in healthy young adults (45 females and 40 males), using two body size estimation (BSE) measures and taking into account inter-individual differences in eating disorder risk. Participants observed pictures of avatars, built from whole-body photos of themselves or of an unknown other matched for gender. Avatars were parametrically distorted along the thinness-heaviness dimension and individualised by adding the head of the self or the other. In the first BSE task, participants indicated in each trial whether the seen avatar was thinner or fatter than themselves (or the other). In the second BSE task, participants chose the best representative body size for self and other from a set of avatars. Greater underestimation of self than other body size emerged in both tasks, comparably for women and men. Thinner bodies were also judged as more attractive, in line with standards of beauty in modern Western society. Notably, this self-serving bias in BSE was stronger in people with low eating disorder risk. In sum, positive attitudes towards the self can extend to body size estimation in young adults, bringing one's own body size closer to the ideal body. We propose that this bias could play an adaptive role in preserving a positive body image.
Source
http://dx.doi.org/10.1007/s00426-018-1119-z
June 2020

Incongruent multisensory stimuli alter bodily self-consciousness: Evidence from a first-person perspective experience.

Acta Psychol (Amst) 2018 Nov 22;191:261-270. Epub 2018 Oct 22.

Center for Mind/Brain Sciences (CIMeC), University of Trento, Italy; Integrative, Multisensory, Perception, Action and Cognition Team (IMPACT), Centre de Recherche en Neuroscience de Lyon (CRNL), France; Department of Psychology and Cognitive Science, University of Trento, Italy.

In our study, we aimed to reduce bodily self-consciousness using a multisensory illusion (MI) and tested whether this manipulation increases self-objectification (the psychological tendency to perceive one's own body as an object). Participants observed their own body from a first-person perspective, through a head-mounted display, while receiving incongruent (or congruent) visuo-tactile stimulation on their abdomen or arms. Results showed stronger feelings of disownership, loss of agency and sensations of being out of one's own body during incongruent compared to congruent stimulation. This reduced bodily self-consciousness did not affect self-objectification. However, self-objectification (as measured by the appearance control beliefs subscale of the Objectified Body Consciousness questionnaire) was positively correlated with MI strength. Moreover, we investigated the impact of the MI and of self-objectification on body size estimation. We found systematic body size underestimation, irrespective of the type of stimulation or the tendency to self-objectification. These results document a simple yet effective approach to altering bodily self-consciousness, which nevertheless spares self-objectification and body size perception.
Source
http://dx.doi.org/10.1016/j.actpsy.2018.09.009
November 2018

Action Planning Modulates Peripersonal Space.

J Cogn Neurosci 2019 08 15;31(8):1141-1154. Epub 2018 Oct 15.

INSERM U1028, CNRS U5292, Lyon, France.

Peripersonal space is a multisensory representation relying on the processing of tactile and visual stimuli presented on and close to different body parts. The most studied peripersonal space representation is perihand space (PHS), a highly plastic representation modulated following tool use and by the rapid approach of visual objects. Given these properties, PHS may serve different sensorimotor functions, including guidance of voluntary actions such as object grasping. Strong support for this hypothesis would derive from evidence that PHS plastic changes occur before the upcoming movement rather than after its initiation, yet to date, such evidence is scant. Here, we tested whether action-dependent modulation of PHS, behaviorally assessed via visuotactile perception, may occur before an overt movement as early as the action planning phase. To do so, we probed tactile and visuotactile perception at different time points before and during the grasping action. Results showed that visuotactile perception was more strongly affected during the planning phase (250 msec after vision of the target) than during a similarly static but earlier phase (50 msec after vision of the target). Visuotactile interaction was also enhanced at the onset of hand movement, and it further increased during subsequent phases of hand movement. Such a visuotactile interaction featured interference effects during all phases from action planning onward as well as a facilitation effect at the movement onset. These findings reveal that planning to grab an object strengthens the multisensory interaction of visual information from the target and somatosensory information from the hand. Such early updating of the visuotactile interaction reflects multisensory processes supporting motor planning of actions.
Source
http://dx.doi.org/10.1162/jocn_a_01349
August 2019

Affective vocalizations influence body ownership as measured in the rubber hand illusion.

PLoS One 2017 5;12(10):e0186009. Epub 2017 Oct 5.

Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, The Netherlands.

Emotional signals, like threatening sounds, automatically ready the perceiver to prepare an appropriate defensive behavior. Conjecturing that this would manifest itself in an extension of the safety zone around the body, we used the rubber hand illusion (RHI) to test this prediction. The RHI is a perceptual illusion in which body ownership is manipulated by synchronously stroking a rubber hand and the real hand, which is occluded from view. Many factors, both internal and external, have been shown to influence the strength of the illusion, yet the effect of emotion perception on body ownership remains unexplored. We predicted that listening to affective vocalizations would influence how strongly participants experience the RHI. In the first experiment, four groups were tested that listened to affective sounds (angry or happy vocalizations), non-vocal sounds, or no sound while undergoing synchronous or asynchronous stroking of the real and rubber hands. In a second experiment, three groups were tested, comparing angry vocalizations, neutral vocalizations, and a no-sound condition. There was a significantly larger drift towards the rubber hand in the emotion versus the no-emotion conditions. We interpret these results within the framework that the spatial increase in the RHI indicates that, under threat, the body has the capacity to extend its safety zone.
Source
http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0186009
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC5628997
November 2017

Multisensory Interference in Early Deaf Adults.

J Deaf Stud Deaf Educ 2017 Oct;22(4):422-433

Center for Mind/Brain Sciences (CIMeC), University of Trento, Corso Bettini, 31, Rovereto TN 38068, Italy.

Multisensory interactions in deaf cognition are largely unexplored. Unisensory studies suggest that behavioral/neural changes may be more prominent for visual compared to tactile processing in early deaf adults. Here we test whether such an asymmetry results in increased saliency of vision over touch during visuo-tactile interactions. Twenty-three early deaf and 25 hearing adults performed two consecutive visuo-tactile spatial interference tasks. Participants responded either to the elevation of the tactile target while ignoring a concurrent visual distractor at central or peripheral locations (respond to touch/ignore vision), or they performed the opposite task (respond to vision/ignore touch). Multisensory spatial interference emerged in both tasks for both groups. Crucially, deaf participants showed increased interference compared to hearing adults when attempting to respond to tactile targets and ignore visual distractors, with particular difficulty with ipsilateral visual distractors. Analyses of task order revealed that in deaf adults, interference of visual distractors on tactile targets was much stronger when this task followed the one in which vision was behaviorally relevant (respond to vision/ignore touch). These novel results suggest that behavioral/neural changes related to early deafness lead to enhanced visual dominance during visuo-tactile multisensory conflict.
Source
http://dx.doi.org/10.1093/deafed/enx025
October 2017

Functional selectivity for face processing in the temporal voice area of early deaf individuals.

Proc Natl Acad Sci U S A 2017 08 26;114(31):E6437-E6446. Epub 2017 Jun 26.

Center for Mind/Brain Sciences, University of Trento, 38123 Trento, Italy.

Brain systems supporting face and voice processing both contribute to the extraction of information important for social interaction (e.g., person identity). How does the brain reorganize when one of these channels is absent? Here, we explore this question by combining behavioral and multimodal neuroimaging measures (magnetoencephalography and functional imaging) in a group of early deaf humans. We show enhanced selective neural responses for faces and for individual face coding in a specific region of the auditory cortex that is typically specialized for voice perception in hearing individuals. In this region, selectivity to face signals emerges early in the visual processing hierarchy, shortly after typical face-selective responses in the ventral visual pathway. Functional and effective connectivity analyses suggest reorganization in long-range connections from early visual areas to the face-selective temporal area in individuals with early and profound deafness. Altogether, these observations demonstrate that regions that typically specialize for voice processing in the hearing brain preferentially reorganize for face processing in born-deaf people. Our results support the idea that cross-modal plasticity in the case of early sensory deprivation relates to the original functional specialization of the reorganized brain regions.
Source
http://dx.doi.org/10.1073/pnas.1618287114
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC5547585
August 2017

Concurrent use of somatotopic and external reference frames in a tactile mislocalization task.

Brain Cogn 2017 02 2;111:25-33. Epub 2016 Nov 2.

MEG-Centre, University of Tübingen, Germany; Centre for Mind/Brain Sciences, University of Trento, Rovereto, Italy; Department of Psychology and Cognitive Sciences, University of Trento, Rovereto, Italy; Centre for Integrative Neuroscience, University of Tübingen, Tübingen, Germany.

Localizing tactile stimuli on our body requires sensory information to be represented in multiple frames of reference along the sensory pathways. These reference frames include the representation of sensory information in skin coordinates, in which the spatial relationship of skin regions is maintained. The organization of the primary somatosensory cortex matches such a somatotopic reference frame. In contrast, higher-order representations are based on external coordinates, in which body posture and gaze direction are taken into account in order to localize touch in other meaningful ways according to task demands. Dominance of one representation or the other, or the use of multiple representations with different weights, is thought to depend on contextual factors of cognitive and/or sensory origin. However, it is unclear in which situations one reference frame takes over from another, or when different reference frames are used jointly at the same time. The study of tactile mislocalizations at the fingers has shown a key role of the somatotopic frame of reference, both when touches are delivered unilaterally to a single hand and when they are delivered bilaterally to both hands. Here, we took advantage of a well-established tactile mislocalization paradigm to investigate whether the reference frame used to integrate bilateral tactile stimuli can change as a function of the spatial relationship between the two hands. Specifically, supra-threshold interference stimuli were applied to the index or little finger of the left hand 200 ms prior to the application of a test stimulus on a finger of the right hand. Crucially, different hand postures were adopted (uncrossed or crossed). Results show that introducing a change in hand posture triggered the concurrent use of somatotopic and external reference frames when processing bilateral touch at the fingers. This demonstrates that both somatotopic and external reference frames can be used concurrently to localize tactile stimuli on the fingers.
Source
http://dx.doi.org/10.1016/j.bandc.2016.10.005
February 2017

Spatial and non-spatial multisensory cueing in unilateral cochlear implant users.

Hear Res 2017 02 31;344:24-37. Epub 2016 Oct 31.

Center for Mind/Brain Sciences (CIMeC), University of Trento, Rovereto, Italy.

In the present study we examined the integrity of spatial and non-spatial multisensory cueing (MSC) mechanisms in unilateral cochlear implant (CI) users. We tested 17 unilateral CI users and 17 age-matched normal hearing (NH) controls in an elevation-discrimination task for visual targets delivered at peripheral locations. Visual targets were presented alone (visual-only condition) or together with abrupt sounds that matched or did not match the location of the visual targets (audio-visual conditions). All participants were also tested in a task involving simple pointing to free-field sounds, to obtain a basic measure of their spatial hearing ability in the naturalistic environment in which the experiment was conducted. Hearing controls were tested both in binaural and monaural conditions. NH controls showed spatial MSC benefits (i.e., faster discrimination for visual targets that matched sound cues) both in the binaural and in the monaural hearing conditions. In addition, they showed non-spatial MSC benefits (i.e., faster discrimination responses in audio-visual conditions compared to visual-only conditions, regardless of sound cue location) in the monaural condition. Monaural CI users showed no spatial MSC benefits, but retained non-spatial MSC benefits comparable to those observed in NH controls tested monaurally. The absence of spatial MSC in CI users likely reflects the poor spatial hearing ability measured in these participants. These findings reveal the importance of studying the impact of CI re-afferentation beyond auditory processing alone, addressing in particular the fundamental mechanisms that serve the orienting of multisensory attention in the environment.
Source
http://dx.doi.org/10.1016/j.heares.2016.10.025
February 2017

The oculomotor salience of flicker, apparent motion and continuous motion in saccade trajectories.

Exp Brain Res 2017 01 28;235(1):181-191. Epub 2016 Sep 28.

Center for Mind/Brain Sciences (CIMeC), University of Trento, Palazzo Fedrigotti, Corso Bettini 31, 38068, Rovereto, TN, Italy.

The aim of the present study was to investigate the impact of dynamic distractors on the time course of oculomotor selection using saccade trajectory deviations. Participants were instructed to make a speeded eye movement (pro-saccade) to a target presented above or below the fixation point while an irrelevant distractor was presented. Four types of distractors were varied within participants: (1) static, (2) flicker, (3) rotating apparent motion and (4) continuous motion. The eccentricity of the distractor was varied between participants. The results showed that saccadic trajectories curved towards distractors presented near the vertical midline; no reliable deviation was found for distractors presented further away from the vertical midline. Differences between the flickering and rotating distractors were found when distractor eccentricity was small, and these effects developed over time: saccadic deviation differentiated on the basis of apparent motion for long-latency saccades, but not for short-latency saccades. The present results suggest that the influence of apparent motion stimuli on performance is relatively delayed and acts in a more sustained manner compared with the influence of salient static, flickering and continuously moving stimuli.
Source
http://dx.doi.org/10.1007/s00221-016-4779-1
January 2017

Causal Dynamics of Scalp Electroencephalography Oscillation During the Rubber Hand Illusion.

Brain Topogr 2017 01 12;30(1):122-135. Epub 2016 Sep 12.

Department of Psychology and Cognitive Science, University of Trento, Rovereto, Italy.

The rubber hand illusion (RHI) is an important phenomenon for the investigation of body ownership and self/other distinction. The illusion is promoted by the spatial and temporal contingencies between visual inputs near a fake hand and physical touches to the real hand. The neural basis of this phenomenon is not fully understood. We hypothesized that the RHI is associated with a fronto-parietal circuit, and the goal of this study was to determine the dynamics of the neural oscillations associated with this phenomenon. We measured electroencephalography while delivering spatially congruent/incongruent visuo-tactile stimulations to fake and real hands. We applied time-frequency analyses and calculated renormalized partial directed coherence (rPDC) to examine cortical dynamics during the bodily illusion. When visuo-tactile stimulation was spatially congruent, and the fake and real hands were aligned, we observed a reduced causal relationship from the medial frontal to the parietal regions with respect to baseline, around 200 ms post-stimulus. This change in rPDC was negatively correlated with the subjectively reported intensity of the RHI. Moreover, we observed a link between proprioceptive drift and an increased causal relationship from the parietal cortex to the right somatosensory cortex during a relatively late period (550-750 ms post-stimulus). These findings suggest a two-stage process in which (1) reduced influence from the medial frontal regions over the parietal areas unlocks the mechanisms that preserve body integrity, allowing the RHI to emerge; and (2) information processed at the parietal cortex is back-projected to the somatosensory cortex contralateral to the real hand, inducing proprioceptive drift.
Source
http://dx.doi.org/10.1007/s10548-016-0519-x
January 2017

Bilateral representations of touch in the primary somatosensory cortex.

Cogn Neuropsychol 2016 Feb-Mar;33(1-2):48-66. Epub 2016 Jun 17.

Center for Mind/Brain Sciences, University of Trento, Rovereto, Italy.

According to current textbook knowledge, the primary somatosensory cortex (SI) supports unilateral tactile representations, whereas structures beyond SI, in particular the secondary somatosensory cortex (SII), support bilateral tactile representations. However, dexterous and well-coordinated bimanual motor tasks require early integration of bilateral tactile information. Sequential processing, first of unilateral and subsequently of bilateral sensory information, might not be sufficient to accomplish these tasks. This view of sequential processing in the somatosensory system might therefore be questioned, at least for demanding bimanual tasks. Evidence from the last 15 years is forcing a revision of this textbook notion. Studies in animals and humans indicate that SI is more than a simple relay for unilateral sensory information and, together with SII, contributes to the integration of somatosensory inputs from both sides of the body. Here, we review a series of recent works from our own and other laboratories in favour of interactions between tactile stimuli on the two sides of the body at early stages of processing. We focus on tactile processing, although a similar logic may also apply to other aspects of somatosensation. We begin by describing the basic anatomy and physiology of interhemispheric transfer, drawing on neurophysiological studies in animals and behavioural studies in humans that showed tactile interactions between body sides, both in healthy and in brain-damaged individuals. Then we describe the neural substrates of bilateral interactions in somatosensation as revealed by neurophysiological work in animals and neuroimaging studies in humans (i.e., functional magnetic resonance imaging, magnetoencephalography, and transcranial magnetic stimulation). Finally, we conclude with considerations on the dilemma of how efficiently integrating bilateral sensory information at early processing stages can coexist with more lateralized representations of somatosensory input, in the context of motor control.
Source
http://dx.doi.org/10.1080/02643294.2016.1159547
April 2017

Attentional orienting to social and nonsocial cues in early deaf adults.

J Exp Psychol Hum Percept Perform 2015 Dec 17;41(6):1758-71. Epub 2015 Aug 17.

Center for Mind Brain Sciences (CIMeC), University of Trento.

In two experiments we investigated attentional orienting to nonpredictive social and nonsocial cues in deaf observers. In Experiment 1a, 22 early deaf adults and 23 hearing controls performed a peripheral shape-discrimination task, while uninformative central gaze cues validly and invalidly cued the location of the target. As an adaptation to the lack of audition, we expected deaf adults to show a larger impact of gaze cuing on attentional orienting compared with hearing controls. However, contrary to our predictions, deaf participants did not respond faster to cued compared with uncued targets (gaze-cuing effect; GCE), and this behavior partly correlated with early sign language acquisition. Experiment 1b showed a reliable GCE in 13 hearing native signers, thus excluding a key role of early sign language acquisition in explaining the absence of a GCE in the response times of deaf participants. To test whether the resistance to uninformative central cues extends to nonsocial cues, in Experiment 2 nonpredictive arrow cues were presented to 14 deaf and 14 hearing participants. Both groups showed a comparable arrow-cuing effect. Together, our findings suggest that deafness may selectively limit attentional orienting triggered by irrelevant central gaze cues. Possible implications for plasticity related to deafness are discussed.
Source
http://dx.doi.org/10.1037/xhp0000099
December 2015

The multisensory body revealed through its cast shadows.

Front Psychol 2015 19;6:666. Epub 2015 May 19.

Department of Developmental and Social Psychology, University of Padua, Padua, Italy; Center for Cognitive Neuroscience, University of Padua, Padua, Italy.

One key issue when conceiving the body as a multisensory object is how the cognitive system integrates visible instances of the self and other bodies with one's own somatosensory processing, to achieve self-recognition and body ownership. Recent research has strongly suggested that shadows cast by our own body have a special status for cognitive processing, directing attention to the body in a fast and highly specific manner. The aim of the present article is to review the most recent scientific contributions addressing how body shadows affect both sensory/perceptual and attentional processes. The review examines three main points: (1) body shadows as a special window to investigate the construction of multisensory body perception; (2) experimental paradigms and related findings; (3) open questions and future trajectories. The reviewed literature suggests that shadows cast by one's own body promote binding between personal and extrapersonal space and elicit automatic orienting of attention toward the body-part casting the shadow. Future research should address whether the effects exerted by body shadows are similar to those observed when observers are exposed to other visual instances of their body. The results will further clarify the processes underlying the merging of vision and somatosensation when creating body representations.
Source
http://dx.doi.org/10.3389/fpsyg.2015.00666
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4436799
June 2015

Somatotopy and temporal dynamics of sensorimotor interactions: evidence from double afferent inhibition.

Eur J Neurosci 2015 May 16;41(11):1459-65. Epub 2015 Apr 16.

INSERM U1028, CNRS UMR5292, ImpAct Team, Lyon Neuroscience Research Centre, Lyon, France.

Moving and interacting with the world requires that the sensory and motor systems share information, but while some information about tactile events is preserved during sensorimotor transfer, the spatial specificity of this information is unknown. Afferent inhibition (AI) studies, in which corticospinal excitability (CSE) is inhibited when a single tactile stimulus is presented before a transcranial magnetic stimulation pulse over the motor cortex, offer contradictory results regarding the sensory-to-motor transfer of spatial information. Here, we combined the techniques of AI and tactile repetition suppression (the decreased neurophysiological response following double stimulation of the same vs. different fingers) to investigate whether topographic information is preserved in the sensory-to-motor transfer in humans. We developed a double AI paradigm to examine both spatial (same vs. different finger) and temporal (short vs. long delay) aspects of sensorimotor interactions. Two consecutive electrocutaneous stimuli (separated by either 30 or 125 ms) were delivered to either the same or different fingers on the left hand (i.e., the index finger stimulated twice, or the middle finger stimulated before the index finger). Information about which fingers were stimulated was reflected in the size of the motor responses in a time-constrained manner: CSE was modulated differently by same- and different-finger stimulation only when the two stimuli were separated by the short delay (P = 0.004). We demonstrate that the well-known response of the somatosensory cortices following repetitive stimulation is mirrored in the motor cortex and that CSE is modulated as a function of the temporal and spatial relationship between afferent stimuli.
Source
http://dx.doi.org/10.1111/ejn.12890
May 2015

Finding the balance between capture and control: Oculomotor selection in early deaf adults.

Brain Cogn 2015 Jun 29;96:12-27. Epub 2015 Mar 29.

Center for Mind Brain Sciences (CIMeC), University of Trento, Italy; Department of Psychology and Cognitive Sciences, University of Trento, Italy.

Previous work investigating the consequences of bilateral deafness on attentional selection suggests that experience-dependent changes in this population may result in increased automatic processing of stimulus-driven visual information (e.g., saliency). However, adaptive behavior also requires observers to prioritize goal-driven information relevant to the task at hand. To investigate whether auditory deprivation alters the balance between these two components of attentional selection, we assessed the time course of overt visual selection in deaf adults. Twenty early deaf adults and twenty hearing controls performed an oculomotor additional-singleton paradigm. Participants made a speeded eye movement to a unique orientation target, embedded among homogeneous non-targets and one additional unique orientation distractor that was more, equally, or less salient than the target. Saliency was manipulated through color. For deaf participants, proficiency in sign language was assessed. Overall, results showed that fast initiated saccades were saliency-driven, whereas later initiated saccades were goal-driven. However, deaf participants were overall slower than hearing controls at initiating saccades and were also less captured by task-irrelevant salient distractors. The delayed oculomotor behavior of deaf adults was not explained by any of the linguistic measures acquired. Importantly, a multinomial model applied to the data revealed a comparable evolution over time of the underlying saliency- and goal-driven processes in the two groups, confirming the crucial role of saccadic latencies in determining the outcome of visual selection performance. The present findings indicate that prioritization of saliency-driven information is not an unavoidable phenomenon in deafness. Possible neural correlates of the documented behavioral effect are also discussed.
Source
http://dx.doi.org/10.1016/j.bandc.2015.03.001
June 2015

With or without semantic mediation: retrieval of lexical representations in sign production.

J Deaf Stud Deaf Educ 2015 Apr 1;20(2):163-71. Epub 2015 Jan 1.

Università di Padova.

How are lexical representations retrieved during sign production? As in spoken languages, lexical representations in sign language must be accessed through semantics when naming pictures. However, it remains an open issue whether lexical representations in sign language can be accessed via routes that bypass semantics when retrieval is elicited by written words. Here we address this issue by exploring under which circumstances sign retrieval is sensitive to semantic context. To this end, we replicate in sign language production the cumulative semantic cost: the observation that naming latencies increase monotonically with each additional within-category item named in a sequence of pictures. In the experiment reported here, deaf participants signed sequences of pictures or sequences of Italian written words using Italian Sign Language. The results showed a cumulative semantic cost in picture naming but, strikingly, not in word naming. This suggests that only picture naming required access to semantics, whereas deaf signers accessed the sign language lexicon directly (i.e., bypassing semantics) when naming written words. The implications of these findings for the architecture of the sign production system are discussed in the context of current models of lexical access in spoken language production.
Source
http://dx.doi.org/10.1093/deafed/enu045
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4810805
April 2015

Early integration of bilateral touch in the primary somatosensory cortex.

Hum Brain Mapp 2015 Apr 16;36(4):1506-23. Epub 2014 Dec 16.

Center for Mind/Brain Sciences, University of Trento, Rovereto, Italy.

Animal studies, as well as behavioural and neuroimaging studies in humans, have documented integration of bilateral tactile information at the level of the primary somatosensory cortex (SI). However, it is still debated whether integration in SI occurs early or late during tactile processing, and whether it is somatotopically organized. To address both the spatial and temporal aspects of bilateral tactile processing, we used magnetoencephalography in a tactile repetition-suppression paradigm. We examined somatosensory evoked responses produced by probe stimuli preceded by an adaptor, as a function of the relative position of adaptor and probe (probe always at the left index finger; adaptor at the index or middle finger of the left or right hand) and as a function of the delay between adaptor and probe (0, 25, or 125 ms). The percentage of response-amplitude suppression was computed by comparing paired (adaptor + probe) with single stimulations of adaptor and probe. Results show that response suppression varies differentially in SI and SII as a function of both spatial and temporal features of the stimuli. Remarkably, repetition suppression of SI activity emerged early in time, regardless of whether the adaptor stimulus was presented on the same or the opposite body side with respect to the probe. These novel findings support the notion of an early and somatotopically organized inter-hemispheric integration of tactile information in SI.
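One common way to formalise the suppression measure described above is to express the paired response as a percentage drop relative to the sum of the two single-stimulation responses. A minimal sketch in Python, assuming hypothetical evoked-response amplitudes; the abstract does not give the authors' exact formula, so the linear-summation baseline here is an illustrative assumption:

def suppression_percent(paired, adaptor_alone, probe_alone):
    """Percent suppression of the paired response relative to the sum
    of the single-stimulation responses (hypothetical formula:
    0% = pure summation, larger values = stronger suppression)."""
    expected = adaptor_alone + probe_alone  # linear-summation baseline
    return 100.0 * (1.0 - paired / expected)

# Hypothetical amplitudes (arbitrary units) for one condition,
# e.g. adaptor on the right index finger, 25 ms adaptor-probe delay.
print(suppression_percent(paired=1.4, adaptor_alone=1.0, probe_alone=1.0))
# -> 30.0 (paired response is 30% below the summed single responses)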
Source
http://dx.doi.org/10.1002/hbm.22719
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6869154
April 2015

From body shadows to bodily attention: automatic orienting of tactile attention driven by cast shadows.

Conscious Cogn 2014 Oct 10;29:56-67. Epub 2014 Aug 10.

Department of Developmental and Social Psychology, University of Padua, Italy; Center for Cognitive Neuroscience, University of Padua, Italy.

Body shadows orient attention to the body part casting the shadow. We investigated the automaticity of this phenomenon by addressing its time course and its resistance to contextual manipulations. When targets were tactile stimuli at the hands (Exp. 1) or visual stimuli near the body shadow (Exp. 2), cueing effects emerged regardless of the delay between shadow and target onset (100, 600, 1200, 2400 ms). This suggests fast and sustained orienting of attention to body shadows, involving both the space occupied by shadows (extra-personal space) and the space the shadow refers to (the own body). When target type became unpredictable (tactile or visual), shadow-cueing effects remained robust only for tactile targets; visual stimuli showed no overall reliable effects, regardless of whether they occurred near the shadow (Exp. 3) or near the body (Exp. 4). We conclude that mandatory attention shifts triggered by body shadows are limited to tactile targets and are less automatic for visual stimuli.
Source
http://dx.doi.org/10.1016/j.concog.2014.07.006
October 2014

Stimulus- and goal-driven control of eye movements: action videogame players are faster but not better.

Atten Percept Psychophys 2014 Nov;76(8):2398-412

Center for Mind Brain Sciences (CIMeC), University of Trento, Trento, Italy.

Action videogame players (AVGPs) have been shown to outperform non-videogame players (NVGPs) in covert visual attention tasks. These advantages have been attributed to improved top-down control in this population. The time course of visual selection, which permits researchers to pinpoint when top-down strategies start to control performance, has rarely been investigated in AVGPs. Here, we addressed this issue specifically through an oculomotor additional-singleton paradigm. Participants were instructed to make a saccadic eye movement to a unique orientation singleton. The target was presented among homogeneous nontargets and one additional orientation singleton that was more, equally, or less salient than the target. Saliency was manipulated in the color dimension. Our results showed similar patterns of performance for AVGPs and NVGPs: fast-initiated saccades were saliency-driven, whereas later-initiated saccades were more goal-driven. However, although AVGPs were faster than NVGPs, they were also less accurate. Importantly, a multinomial model applied to the data revealed comparable underlying saliency-driven and goal-driven functions for the two groups. Taken together, the observed differences in performance are compatible with a lower decision bound for releasing saccades in AVGPs than in NVGPs, in the context of a comparable temporal interplay between the underlying attentional mechanisms. In sum, the present findings show that in both AVGPs and NVGPs the implementation of top-down control in visual selection takes time to come about, and they argue against the idea of a general enhancement of top-down control in AVGPs.
Source
http://dx.doi.org/10.3758/s13414-014-0736-x
November 2014

Visual change detection recruits auditory cortices in early deafness.

Neuroimage 2014 Jul 14;94:172-184. Epub 2014 Mar 14.

Department of Psychology and Cognitive Sciences, University of Trento, Italy; Center for Mind/Brain Sciences, University of Trento, Italy.

Although cross-modal recruitment of early sensory areas in deafness and blindness is well established, the constraints and limits of these plastic changes remain to be understood. In the case of human deafness, for instance, it is known that visual, tactile or visuo-tactile stimuli can elicit a response within the auditory cortices. Nonetheless, both the timing of these evoked responses and the functional contribution of cross-modally recruited areas remain to be ascertained. In the present study, we examined to what extent the auditory cortices of deaf humans participate in high-order visual processes, such as visual change detection. By measuring visual ERPs, in particular the visual mismatch negativity (vMMN), and performing source localization, we show that individuals with early deafness (N=12) recruit the auditory cortices when a change in motion direction occurs during shape deformation in a continuous visual motion stream. Remarkably, this "auditory" response to visual events emerged with the same timing as the visual MMN in hearing controls (N=12), between 150 and 300 ms after the visual change. Furthermore, the recruitment of auditory cortices for visual change detection in early deaf individuals was paired with a reduction of the response within the visual system, indicating a shift of part of the computational process from visual to auditory cortices. The present study suggests that the deafened auditory cortices participate in extracting and storing visual information and in comparing incoming visual events on-line, indicating that cross-modally recruited auditory cortices can attain this level of computation.
Source
http://dx.doi.org/10.1016/j.neuroimage.2014.02.031
July 2014

Response speed advantage for vision does not extend to touch in early deaf adults.

Exp Brain Res 2014 Apr 31;232(4):1335-41. Epub 2014 Jan 31.

Center for Mind Brain Sciences (CIMeC), University of Trento, Corso Bettini 31, 38068, Rovereto, Italy.

Early deaf adults typically respond faster than hearing controls when performing speeded simple detection of visual targets. Whether this response time advantage generalises to another intact modality (touch) or is instead specific to visual processing had remained unexplored. We tested eight early deaf adults and twelve hearing controls in a simple detection task, with visual or tactile targets delivered on the arms and occupying the same locations in external space. Catch trials were included in the experimental paradigm. Results revealed a response time advantage in deaf adults compared to hearing controls selectively for visual targets. This advantage did not extend to touch. The number of anticipation errors was negligible and comparable in both groups. The present findings strengthen the notion that the response time advantage in deaf adults emerges as a consequence of changes specific to visual processing. They also exclude the involvement of sensory-unspecific cognitive mechanisms in this improvement (e.g., increased impulsivity in response initiation, longer-lasting sustained attention or higher motivation to perform the task). Finally, they provide initial evidence that the intact sensory modalities can reorganise independently of each other following early auditory deprivation.
Source
http://dx.doi.org/10.1007/s00221-014-3852-x
April 2014