Publications by authors named "Benjamin de Haas"

24 Publications


Practice modality of motor sequences impacts the neural signature of motor imagery.

Sci Rep 2020 11 5;10(1):19176. Epub 2020 Nov 5.

Institute of Sport Sciences, Goethe University Frankfurt, Ginnheimer Landstrasse 39, 60487, Frankfurt am Main, Germany.

Motor imagery is conceptualized as an internal simulation that uses motor-related parts of the brain as its substrate. Many studies have investigated this sharing of common neural resources between the two modalities of motor imagery and motor execution. They have shown overlapping but not identical activation patterns that thereby result in a modality-specific neural signature. However, it is not clear how far this neural signature depends on whether the imagined action has previously been practiced physically or only imagined. The present study aims to disentangle whether the neural imprint of an imagined manual pointing sequence within cortical and subcortical motor areas is determined by the nature of this prior practice modality. Each participant practiced two sequences physically, practiced two other sequences mentally, and did a behavioural pre-test without any further practice on a third pair of sequences. After a two-week practice intervention, participants underwent fMRI scans while imagining all six sequences. Behavioural data demonstrated practice-related effects as well as very good compliance with instructions. Functional MRI data confirmed the previously known motor imagery network. Crucially, we found that mental and physical practice left a modality-specific footprint during mental motor imagery. In particular, activation within the right posterior cerebellum was stronger when the imagined sequence had previously been practiced physically. We conclude that cerebellar activity is shaped specifically by the nature of the prior practice modality.
DOI: http://dx.doi.org/10.1038/s41598-020-76214-y
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7645615
November 2020

OSIEshort: A small stimulus set can reliably estimate individual differences in semantic salience.

J Vis 2020 09;20(9):13

Experimental Psychology, Justus Liebig Universität, Giessen, Germany.

Recent findings revealed consistent individual differences in fixation tendencies among observers free-viewing complex scenes. The present study aimed at (1) replicating these differences, and (2) testing whether they can be estimated using a shorter test. In total, 103 participants completed two eye-tracking sessions. The first session was a direct replication of the original study, but the second session used a smaller subset of images, optimized to capture individual differences efficiently. The first session replicated the large and consistent individual differences along five semantic dimensions observed in the original study. The second session showed that these differences can be estimated using about 40 to 100 images (depending on the tested dimension). Additional analyses revealed that only the first 2 seconds of viewing duration seem to be informative regarding these differences. Taken together, our findings suggest that reliable individual differences in semantic salience can be estimated with a test totaling less than 2 minutes of viewing duration.
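The trade-off between test length and reliability that this study measures empirically is also captured by classical psychometrics. As a hedged illustration (my own sketch, not the authors' analysis), the Spearman-Brown prophecy formula predicts how reliability changes when a test is shortened by a factor k:

```python
def spearman_brown(r: float, k: float) -> float:
    """Predicted reliability after changing test length by factor k,
    given reliability r of the original test (Spearman-Brown prophecy)."""
    return k * r / (1 + (k - 1) * r)

# Purely illustrative numbers: if a 700-image test had a reliability of .90,
# a 100-image subset (k = 1/7) would be predicted to retain r = .5625.
short_r = spearman_brown(0.90, 100 / 700)
```

The empirical approach in the article (re-testing with an optimized subset) is stronger than this formula, which assumes all items are interchangeable.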
DOI: http://dx.doi.org/10.1167/jov.20.9.13
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7509791
September 2020

Neural correlates of top-down modulation of haptic shape versus roughness perception.

Hum Brain Mapp 2019 12 20;40(18):5172-5184. Epub 2019 Aug 20.

Department of Experimental Psychology, Justus Liebig University, Giessen, Germany.

Exploring an object's shape by touch also renders information about its surface roughness. It has been suggested that shape and roughness are processed distinctly in the brain, a result based on comparing brain activation when exploring objects that differed in one of these features. To investigate the neural mechanisms of top-down control on haptic perception of shape and roughness, we presented the same multidimensional objects but varied the relevance of each feature. Specifically, participants explored two objects that varied in shape (oblongness of cuboids) and surface roughness. They either had to compare the shape or the roughness in an alternative forced-choice task. Moreover, we examined whether the activation strength of the identified brain regions as measured by functional magnetic resonance imaging (fMRI) can predict the behavioral performance in the haptic discrimination task. We observed a widespread network of activation for shape and roughness perception comprising bilateral precentral and postcentral gyrus, cerebellum, and insula. Task-relevance of the object's shape increased activation in the right supramarginal gyrus (SMG/BA 40) and the right precentral gyrus (PreCG/BA 44), suggesting that activation in these areas does not merely reflect stimulus-driven processes, such as exploring shape, but also entails top-down controlled processes driven by task-relevance. Moreover, the strength of the SMG/PreCG activation predicted individual performance in the shape but not in the roughness discrimination task. No activation was found for the reversed contrast (roughness > shape). We conclude that macrogeometric properties, such as shape, can be modulated by top-down mechanisms, whereas roughness, a microgeometric feature, seems to be processed automatically.
DOI: http://dx.doi.org/10.1002/hbm.24764
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6864886
December 2019

Individual differences in visual salience vary along semantic dimensions.

Proc Natl Acad Sci U S A 2019 06 28;116(24):11687-11692. Epub 2019 May 28.

Department of Psychology, Justus Liebig Universität, 35394 Giessen, Germany.

What determines where we look? Theories of attentional guidance hold that image features and task demands govern fixation behavior, while differences between observers are interpreted as a "noise-ceiling" that strictly limits predictability of fixations. However, recent twin studies suggest a genetic basis of gaze-trace similarity for a given stimulus. This leads to the question of how individuals differ in their gaze behavior and what may explain these differences. Here, we investigated the fixations of >100 human adults freely viewing a large set of complex scenes containing thousands of semantically annotated objects. We found systematic individual differences in fixation frequencies along six semantic stimulus dimensions. These differences were large (>twofold) and highly stable across images and time. Surprisingly, they also held for first fixations directed toward each image, commonly interpreted as "bottom-up" visual salience. Their perceptual relevance was documented by a correlation between individual face salience and face recognition skills. The set of reliable individual salience dimensions and their covariance pattern replicated across samples from three different countries, suggesting they reflect fundamental biological mechanisms of attention. Our findings show stable individual differences in salience along a set of fundamental semantic dimensions and that these differences have meaningful perceptual implications. Visual salience reflects features of the observer as well as the image.
DOI: http://dx.doi.org/10.1073/pnas.1820553116
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6576124
June 2019

Subjective vividness of motor imagery has a neural signature in human premotor and parietal cortex.

Neuroimage 2019 08 30;197:273-283. Epub 2019 Apr 30.

Neuromotor Behavior Laboratory, Institute of Sport Sciences, Justus Liebig University Giessen, Germany; Bender Institute of Neuroimaging, Justus Liebig University Giessen, Germany.

Motor imagery (MI) is the process in which subjects imagine executing a body movement with a strong kinesthetic component from a first-person perspective. The individual capacity to elicit such mental images is not universal but varies within and between subjects. Neuroimaging studies have shown that these inter- as well as intra-individual differences in imagery quality mediate the amplitude of neural activity during MI on a group level. However, these analyses were not sensitive to forms of representation that may not map onto a simple modulation of overall amplitude. Therefore, the present study asked how far the subjective impression of motor imagery vividness is reflected by a spatial neural code, and how patterns of neural activation in different motor regions relate to specific imagery impressions. During fMRI scanning, 20 volunteers imagined three different types of right-hand actions. After each imagery trial, subjects were asked to evaluate the perceived vividness of their imagery. A correlation analysis compared the rating differences and neural dissimilarity values of the rating groups separately for each region of interest. Results showed a significant positive correlation in the left ventral premotor cortex (vPMC) and right inferior parietal lobule (IPL), indicating that these regions particularly reflect perceived imagery vividness, in that similarly rated trials evoke more similar neural patterns. A decoding analysis revealed that the vividness of the motor image related systematically to the action specificity of neural activation patterns in left vPMC and right superior parietal lobule (SPL). Imagined actions accompanied by higher vividness ratings were significantly more distinguishable within these areas. Altogether, results showed that spatial patterns of neural activity within the human motor cortices reflect the individual vividness of imagined actions. Hence, the findings reveal a link between the subjective impression of motor imagery vividness and objective physiological markers.
DOI: http://dx.doi.org/10.1016/j.neuroimage.2019.04.073
August 2019

How to Enhance the Power to Detect Brain-Behavior Correlations With Limited Resources.

Authors:
Benjamin de Haas

Front Hum Neurosci 2018 16;12:421. Epub 2018 Oct 16.

Experimental Psychology, Justus Liebig University Giessen, Giessen, Germany.

Neuroscience has been diagnosed with a pervasive lack of statistical power and, in turn, reliability. One proposed remedy is a massive increase of typical sample sizes. Parts of the neuroimaging community have embraced this recommendation and actively push for a reallocation of resources toward fewer but larger studies. This is especially true for neuroimaging studies focusing on individual differences to test brain-behavior correlations. Here, I argue for a more efficient solution: simulations show that statistical power crucially depends on the choice of behavioral and neural measures, as well as on sampling strategy. Specifically, behavioral prescreening and the selection of extreme groups can ascertain a high degree of robust in-sample variance. Due to the low cost of behavioral testing compared to neuroimaging, this is a more efficient way of increasing power. For example, prescreening can achieve the power boost afforded by an increase of sample sizes from n = 30 to n = 100 at ∼5% of the cost. This perspective article briefly presents simulations yielding these results, discusses the strengths and limitations of prescreening, and addresses some potential counter-arguments. Researchers can use the accompanying online code to simulate the expected power boost of prescreening for their own studies.
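The prescreening argument can be illustrated with a toy simulation. This sketch is my own (the article ships its own online code) and assumes a simple bivariate normal model of brain and behavior: recruit either a random sample or the behavioral extremes of a pre-tested pool, then count how often the brain-behavior correlation reaches significance.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)

def simulate_power(true_r=0.3, n=30, pool=300, n_sims=1000, prescreen=False):
    """Fraction of simulated studies in which the brain-behavior
    correlation reaches p < .05 (two-sided Pearson test)."""
    hits = 0
    for _ in range(n_sims):
        behavior = rng.standard_normal(pool)
        brain = true_r * behavior + np.sqrt(1 - true_r ** 2) * rng.standard_normal(pool)
        if prescreen:  # scan only the behavioral extremes of the pool
            order = np.argsort(behavior)
            idx = np.r_[order[:n // 2], order[-(n // 2):]]
        else:          # scan a random subset of the pool
            idx = rng.choice(pool, size=n, replace=False)
        hits += pearsonr(behavior[idx], brain[idx])[1] < 0.05
    return hits / n_sims

power_random = simulate_power(prescreen=False)
power_extreme = simulate_power(prescreen=True)
```

Selecting extreme groups inflates the behavioral variance in the scanned sample, which inflates the observed correlation and hence power, at the cost of cheap behavioral pre-tests rather than extra scan hours.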
DOI: http://dx.doi.org/10.3389/fnhum.2018.00421
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6198725
October 2018

The optimal experimental design for multiple alternatives perceptual search.

Atten Percept Psychophys 2018 Nov;80(8):1962-1973

Experimental Psychology, University College London, 26 Bedford Way, London, UK.

Perceptual bias is inherent to all our senses, particularly in the form of visual illusions and aftereffects. However, many experiments measuring perceptual biases may be susceptible to nonperceptual factors, such as response bias and decision criteria. Here, we quantify how robust multiple alternative perceptual search (MAPS) is for disentangling estimates of perceptual biases from these confounding factors. First, our results show that while there are considerable response biases in our four-alternative forced-choice design, these are unrelated to perceptual bias estimates, and these response biases are not produced by the response modality (keyboard vs. mouse). We also show that perceptual bias estimates are reduced when feedback is given on each trial, likely due to feedback enabling observers to partially (and actively) correct for perceptual biases. However, this does not impact the reliability with which MAPS detects the presence of perceptual biases. Finally, our results show that MAPS can detect actual perceptual biases and does not merely reflect a decisional bias towards choosing the target in the middle of the candidate stimulus distribution. In summary, researchers conducting a MAPS experiment should use a constant reference stimulus but consider varying the mean of the candidate distribution. Ideally, they should not employ trial-wise feedback if the magnitude of perceptual biases is of interest.
DOI: http://dx.doi.org/10.3758/s13414-018-1568-x
November 2018

Feature-location effects in the Thatcher illusion.

J Vis 2018 04;18(4):16

Experimental Psychology, University College London, London, UK.

Face perception is impaired for inverted images, and a prominent example of this is the Thatcher illusion: "Thatcherized" (i.e., rotated) eyes and mouths make a face look grotesque, but only if the whole face is seen upright rather than inverted. Inversion effects are often interpreted as evidence for configural face processing. However, recent findings have led to the alternative proposal that the Thatcher illusion rests on orientation sensitivity for isolated facial regions. Here, we tested whether the Thatcher effect depends not only on the orientation of facial regions but also on their visual-field location. Using a match-to-sample task with isolated eye and mouth regions we found a significant Feature × Location interaction. Observers were better at discriminating Thatcherized from normal eyes in the upper compared to the lower visual field, and vice versa for mouths. These results show that inversion effects can at least partly be driven by nonconfigural factors and that one of these factors is a match between facial features and their typical visual-field location. This echoes recent results showing feature-location effects in face individuation. We discuss the role of these findings for the hypothesis that spatial and feature tuning in the ventral stream are linked.
DOI: http://dx.doi.org/10.1167/18.4.16
April 2018

Spatially selective responses to Kanizsa and occlusion stimuli in human visual cortex.

Sci Rep 2018 01 12;8(1):611. Epub 2018 Jan 12.

UCL Experimental Psychology, 26 Bedford Way, London, UK.

Early visual cortex responds to illusory contours in which abutting lines or collinear edges imply the presence of an occluding surface, as well as to occluded parts of an object. Here we used functional magnetic resonance imaging (fMRI) and population receptive field (pRF) analysis to map retinotopic responses in early visual cortex using bar stimuli defined by illusory contours, occluded parts of a bar, or subtle luminance contrast. All conditions produced retinotopic responses in early visual field maps even though signal-to-noise ratios were very low. We found that signal-to-noise ratios and coherence with independent high-contrast mapping data increased from V1 to V2 to V3. Moreover, we found no differences of signal-to-noise ratios or pRF sizes between the low-contrast luminance and illusion conditions. We propose that all three conditions mapped spatial attention to the bar location rather than activations specifically related to illusory contours or occlusion.
DOI: http://dx.doi.org/10.1038/s41598-017-19121-z
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC5766606
January 2018

Intersession reliability of population receptive field estimates.

Neuroimage 2016 Dec 9;143:293-303. Epub 2016 Sep 9.

Experimental Psychology, University College London, 26 Bedford Way, London, UK; UCL Institute of Cognitive Neuroscience, 17-19 Queen Square, London, UK.

Population receptive field (pRF) analysis is a popular method to infer spatial selectivity of voxels in visual cortex. However, it remains largely untested how stable pRF estimates are over time. Here we measured the intersession reliability of pRF parameter estimates for the central visual field and near periphery, using a combined wedge and ring stimulus containing natural images. Sixteen healthy human participants completed two scanning sessions separated by 10-114 days. Individual participants showed very similar visual field maps for V1-V4 on both sessions. Intersession reliability for eccentricity and polar angle estimates was close to ceiling for most visual field maps (r>.8 for V1-3). PRF size and cortical magnification (CMF) estimates showed strong but lower overall intersession reliability (r≈.4-.6). Group level results for pRF size and CMF were highly similar between sessions. Additional control experiments confirmed that reliability does not depend on the carrier stimulus used and that reliability for pRF size and CMF is high for sessions acquired on the same day (r>.6). Our results demonstrate that pRF mapping is highly reliable across sessions.
DOI: http://dx.doi.org/10.1016/j.neuroimage.2016.09.013
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC5139984
December 2016

Perception and Processing of Faces in the Human Brain Is Tuned to Typical Feature Locations.

J Neurosci 2016 09;36(36):9289-302

Institute of Cognitive Neuroscience, Wellcome Trust Centre for Neuroimaging.

Faces are salient social stimuli whose features attract a stereotypical pattern of fixations. The implications of this gaze behavior for perception and brain activity are largely unknown. Here, we characterize and quantify a retinotopic bias implied by typical gaze behavior toward faces, which leads to eyes and mouth appearing most often in the upper and lower visual field, respectively. We found that the adult human visual system is tuned to these contingencies. In two recognition experiments, recognition performance for isolated face parts was better when they were presented at typical, rather than reversed, visual field locations. The recognition cost of reversed locations was equal to ∼60% of that for whole face inversion in the same sample. Similarly, an fMRI experiment showed that patterns of activity evoked by eye and mouth stimuli in the right inferior occipital gyrus could be separated with significantly higher accuracy when these features were presented at typical, rather than reversed, visual field locations. Our findings demonstrate that human face perception is determined not only by the local position of features within a face context, but by whether features appear at the typical retinotopic location given normal gaze behavior. Such location sensitivity may reflect fine-tuning of category-specific visual processing to retinal input statistics. Our findings further suggest that retinotopic heterogeneity might play a role for face inversion effects and for the understanding of conditions affecting gaze behavior toward faces, such as autism spectrum disorders and congenital prosopagnosia.

Significance Statement: Faces attract our attention and trigger stereotypical patterns of visual fixations, concentrating on inner features, like eyes and mouth. Here we show that the visual system represents face features better when they are shown at retinal positions where they typically fall during natural vision. When facial features were shown at typical (rather than reversed) visual field locations, they were discriminated better by humans and could be decoded with higher accuracy from brain activity patterns in the right occipital face area. This suggests that brain representations of face features do not cover the visual field uniformly. It may help us understand the well-known face-inversion effect and conditions affecting gaze behavior toward faces, such as prosopagnosia and autism spectrum disorders.
DOI: http://dx.doi.org/10.1523/JNEUROSCI.4131-14.2016
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC5013182
September 2016

Imagined and Executed Actions in the Human Motor System: Testing Neural Similarity Between Execution and Imagery of Actions with a Multivariate Approach.

Cereb Cortex 2017 09;27(9):4523-4536

Institute for Sports Science, Justus Liebig University Giessen, Giessen, 35394, Germany.

Simulation theory proposes motor imagery (MI) to be a simulation based on representations also used for motor execution (ME). Nonetheless, it is unclear how far they use the same neural code. We use multivariate pattern analysis (MVPA) and representational similarity analysis (RSA) to describe the neural representations associated with MI and ME within the frontoparietal motor network. During functional magnetic resonance imaging scanning, 20 volunteers imagined or executed 3 different types of right-hand actions. Results of MVPA showed that these actions as well as their modality (MI or ME) could be decoded significantly above chance from the spatial patterns of BOLD signals in premotor and posterior parietal cortices. This was also true for cross-modal decoding. Furthermore, representational dissimilarity matrices of frontal and parietal areas showed that MI and ME representations formed separate clusters, but that the representational organization of action types within these clusters was identical. For most ROIs, this pattern of results best fits with a model that assumes a low-to-moderate degree of similarity between the neural patterns associated with MI and ME. Thus, neural representations of MI and ME are neither the same nor totally distinct but exhibit a similar structural geometry with respect to different types of action.
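The reported geometry (separate MI/ME clusters, with the same action structure inside each cluster) can be mimicked in a toy representational similarity analysis. All signal assumptions below are hypothetical, chosen only to reproduce that qualitative pattern:

```python
import numpy as np

rng = np.random.default_rng(0)
n_vox = 200

# Hypothetical voxel patterns (not real data): actions 0 and 1 share a
# component, so they should be representationally closer to each other
# than to action 2; each modality (MI, ME) adds its own offset pattern.
shared = rng.standard_normal(n_vox)
actions = [shared + rng.standard_normal(n_vox),
           shared + rng.standard_normal(n_vox),
           np.sqrt(2.0) * rng.standard_normal(n_vox)]
offsets = [rng.standard_normal(n_vox), rng.standard_normal(n_vox)]  # MI, ME

patterns = np.array([a + m for m in offsets for a in actions])  # 6 conditions
rdm = 1.0 - np.corrcoef(patterns)  # correlation-distance RDM (6 x 6)

# Modality clusters: cross-modality distances exceed within-modality ones,
# while the action ordering (0 closer to 1 than to 2) holds in both blocks.
within = (rdm[:3, :3].sum() + rdm[3:, 3:].sum()) / 12.0  # mean off-diagonal
cross = rdm[:3, 3:].mean()
```

Comparing such sub-RDMs across modalities is one way to formalize "separate clusters with identical internal organization".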
DOI: http://dx.doi.org/10.1093/cercor/bhw257
September 2017

Cortical idiosyncrasies predict the perception of object size.

Nat Commun 2016 06 30;7:12110. Epub 2016 Jun 30.

Experimental Psychology, University College London, 26 Bedford Way, London WC1H 0AP, UK.

Perception is subjective. Even basic judgments, like those of visual object size, vary substantially between observers and also across the visual field within the same observer. The way in which the visual system determines the size of objects remains unclear, however. We hypothesize that object size is inferred from neuronal population activity in V1 and predict that idiosyncrasies in cortical functional architecture should therefore explain individual differences in size judgments. Here we show results from novel behavioural methods and functional magnetic resonance imaging (fMRI) demonstrating that biases in size perception are correlated with the spatial tuning of neuronal populations in healthy volunteers. To explain this relationship, we formulate a population read-out model that directly links the spatial distribution of V1 representations to our perceptual experience of visual size. Taken together, our results suggest that the individual perception of simple stimuli is warped by idiosyncrasies in visual cortical organization.
DOI: http://dx.doi.org/10.1038/ncomms12110
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4931347
June 2016

Attention and multisensory modulation argue against total encapsulation.

Behav Brain Sci 2016 Jan;39:e237

Institute of Cognitive Neuroscience, University College London, WC1N 3AR London, United Kingdom.

Firestone & Scholl (F&S) postulate that vision proceeds without any direct interference from cognition. We argue that this view is extreme and not in line with the available evidence. Specifically, we discuss two well-established counterexamples: Attention directly affects core aspects of visual processing, and multisensory modulations of vision originate on multiple levels, some of which are unlikely to fall "within perception."
DOI: http://dx.doi.org/10.1017/S0140525X1500254X
January 2016

Motor imagery of hand actions: Decoding the content of motor imagery from brain activity in frontal and parietal motor areas.

Hum Brain Mapp 2016 Jan 9;37(1):81-93. Epub 2015 Oct 9.

Bender Institute of Neuroimaging, Justus Liebig University Giessen, Germany.

How motor maps are organized while imagining actions is an intensely debated issue. It is particularly unclear whether motor imagery relies on action-specific representations in premotor and posterior parietal cortices. This study tackled this issue by attempting to decode the content of motor imagery from spatial patterns of Blood Oxygen Level Dependent (BOLD) signals recorded in the frontoparietal motor imagery network. During fMRI scanning, 20 right-handed volunteers worked on three experimental conditions and one baseline condition. In the experimental conditions, they had to imagine three different types of right-hand actions: an aiming movement, an extension-flexion movement, and a squeezing movement. The identity of imagined actions was decoded from the spatial patterns of BOLD signals they evoked in premotor and posterior parietal cortices using multivoxel pattern analysis. Results showed that the content of motor imagery (i.e., the action type) could be decoded significantly above chance level from the spatial patterns of BOLD signals in both frontal (PMC, M1) and parietal areas (SPL, IPL, IPS). An exploratory searchlight analysis revealed significant clusters in motor and motor-associated cortices, as well as in visual cortices. Hence, the data provide evidence that patterns of activity within premotor and posterior parietal cortex vary systematically with the specific type of hand action being imagined.
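The decoding logic (classifying the imagined action type from spatial activity patterns, then comparing accuracy to chance) can be sketched on simulated data. scikit-learn and all signal parameters below are my assumptions, not the study's pipeline:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
n_trials, n_vox = 60, 100  # 20 trials per imagined action; hypothetical ROI

# Each action type contributes a weak prototype pattern buried in trial noise.
prototypes = rng.standard_normal((3, n_vox))
labels = np.repeat([0, 1, 2], n_trials // 3)
X = 0.5 * prototypes[labels] + rng.standard_normal((n_trials, n_vox))

# Cross-validated linear classification, as in typical MVPA; chance = 1/3.
acc = cross_val_score(LogisticRegression(max_iter=1000), X, labels, cv=5).mean()
```

In a real analysis, the decisive step is comparing `acc` against chance with a permutation test rather than a fixed threshold.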
DOI: http://dx.doi.org/10.1002/hbm.23015
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4737127
January 2016

Comparing different stimulus configurations for population receptive field mapping in human fMRI.

Front Hum Neurosci 2015 20;9:96. Epub 2015 Feb 20.

Institute of Cognitive Neuroscience, University College London, London, UK; Wellcome Trust Centre for Neuroimaging, University College London, London, UK; Experimental Psychology, University College London, London, UK.

Population receptive field (pRF) mapping is a widely used approach to measuring aggregate human visual receptive field properties by recording non-invasive signals using functional MRI. Despite growing interest, no study to date has systematically investigated the effects of different stimulus configurations on pRF estimates from human visual cortex. Here we compared the effects of three different stimulus configurations on a model-based approach to pRF estimation: size-invariant bars and eccentricity-scaled bars defined in Cartesian coordinates and traveling along the cardinal axes, and a novel simultaneous "wedge and ring" stimulus defined in polar coordinates, systematically covering polar and eccentricity axes. We found that the presence or absence of eccentricity scaling had a significant effect on goodness of fit and pRF size estimates. Further, variability in pRF size estimates was directly influenced by stimulus configuration, particularly for higher visual areas including V5/MT+. Finally, we compared eccentricity estimation between phase-encoded and model-based pRF approaches. We observed a tendency for more peripheral eccentricity estimates using phase-encoded methods, independent of stimulus size. We conclude that both eccentricity scaling and polar rather than Cartesian stimulus configuration are important considerations for optimal experimental design in pRF mapping. While all stimulus configurations produce adequate estimates, simultaneous wedge and ring stimulation produced higher fit reliability, with a significant advantage in reduced acquisition time.
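The model-based pRF approach compared here rests on a simple forward model: a 2D Gaussian receptive field, overlapped with the stimulus aperture at each time point, predicts the voxel's response, and parameters are chosen to maximize fit. A minimal grid-search sketch with made-up apertures (HRF convolution and noise modeling omitted for brevity):

```python
import numpy as np

size = 20                                     # coarse visual-field grid
apertures = np.zeros((size, size, 2 * size))  # (y, x, time)
for t in range(size):
    apertures[:, t, t] = 1.0                  # vertical bar sweeping in x
    apertures[t, :, size + t] = 1.0           # horizontal bar sweeping in y

ys, xs = np.mgrid[0:size, 0:size]

def predict(x0, y0, sigma):
    """Predicted response: overlap of a Gaussian pRF with each aperture."""
    prf = np.exp(-((xs - x0) ** 2 + (ys - y0) ** 2) / (2 * sigma ** 2))
    return np.tensordot(prf / prf.sum(), apertures, axes=2)

# A noisy synthetic "voxel" with a true pRF at (x=5, y=12), sigma=2.
rng = np.random.default_rng(3)
data = predict(5, 12, 2.0) + 0.01 * rng.standard_normal(2 * size)

# Grid search: keep the candidate pRF whose prediction correlates best.
best_r, bx, by, bs = max(
    (np.corrcoef(predict(x, y, s), data)[0, 1], x, y, s)
    for x in range(size) for y in range(size) for s in (1.0, 2.0, 4.0))
```

Real pipelines refine the grid-search winner with nonlinear optimization and convolve predictions with a hemodynamic response function; the stimulus-configuration question of the article amounts to choosing the `apertures` array.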
DOI: http://dx.doi.org/10.3389/fnhum.2015.00096
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4335485
March 2015

Larger extrastriate population receptive fields in autism spectrum disorders.

J Neurosci 2014 Feb;34(7):2713-24

Wellcome Trust Centre for Neuroimaging, Institute of Cognitive Neuroscience, Cognitive Perceptual and Brain Sciences, and Institute of Ophthalmology, University College London, London EC1V 9EL, United Kingdom.

Previous behavioral research suggests enhanced local visual processing in individuals with autism spectrum disorders (ASDs). Here we used functional MRI and population receptive field (pRF) analysis to test whether the response selectivity of human visual cortex is atypical in individuals with high-functioning ASDs compared with neurotypical, demographically matched controls. For each voxel, we fitted a pRF model to fMRI signals measured while participants viewed flickering bar stimuli traversing the visual field. In most extrastriate regions, perifoveal pRFs were larger in the ASD group than in controls. We observed no differences in V1 or V3A. Differences in the hemodynamic response function, eye movements, or increased measurement noise could not account for these results; individuals with ASDs showed stronger, more reliable responses to visual stimulation. Interestingly, pRF sizes also correlated with individual differences in autistic traits but there were no correlations with behavioral measures of visual processing. Our findings thus suggest that visual cortex in ASDs is not characterized by sharper spatial selectivity. Instead, we speculate that visual cortical function in ASDs may be characterized by extrastriate cortical hyperexcitability or differential attentional deployment.
DOI: http://dx.doi.org/10.1523/JNEUROSCI.4416-13.2014
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3921434
February 2014

Perceptual load affects spatial tuning of neuronal populations in human early visual cortex.

Curr Biol 2014 Jan;24(2):R66-R67

Institute of Cognitive Neuroscience, University College London, Alexandra House, 17 Queen Square, London WC1N 3AR, UK; Wellcome Trust Centre for Neuroimaging, University College London, Alexandra House, 17 Queen Square, London WC1N 3AR, UK.

Withdrawal of attention from a visual scene as a result of perceptual load modulates overall levels of activity in human visual cortex [1], but its effects on cortical spatial tuning properties are unknown. Here we show attentional load at fixation affects the spatial tuning of population receptive fields (pRFs) in early visual cortex (V1-3) using functional magnetic resonance imaging (fMRI). We found that, compared to low perceptual load, high perceptual load yielded a 'blurrier' representation of the visual field surrounding the attended location and a centrifugal 'repulsion' of pRFs. Additional data and control analyses confirmed that these effects were neither due to changes in overall activity levels nor to eye movements. These findings suggest neural 'tunnel vision' as a form of distractor suppression under high perceptual load.
http://dx.doi.org/10.1016/j.cub.2013.11.061
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3928995
January 2014

The duration of a co-occurring sound modulates visual detection performance in humans.

PLoS One 2013 Jan 23;8(1):e54789. Epub 2013 Jan 23.

Wellcome Trust Centre for Neuroimaging at UCL, University College London, London, United Kingdom.

Background: The duration of sounds can affect the perceived duration of co-occurring visual stimuli. However, it is unclear whether this is limited to amodal processes of duration perception or affects other non-temporal qualities of visual perception.

Methodology/Principal Findings: Here, we tested the hypothesis that visual sensitivity, rather than only the perceived duration of visual stimuli, can be affected by the duration of co-occurring sounds. We found that visual detection sensitivity (d') for unimodal stimuli was higher for stimuli of longer duration. Crucially, in a cross-modal condition, we replicated previous unimodal findings, observing that visual sensitivity was shaped by the duration of co-occurring sounds. When short visual stimuli (∼24 ms) were accompanied by sounds of matching duration, visual sensitivity was decreased relative to the unimodal visual condition. However, when the same visual stimuli were accompanied by longer auditory stimuli (∼60-96 ms), visual sensitivity was increased relative to the performance for ∼24 ms auditory stimuli. Across participants, this sensitivity enhancement was observed within a critical time window of ∼60-96 ms. Moreover, the amplitude of this effect correlated with the visual sensitivity enhancement found for longer-lasting visual stimuli across participants.

Conclusions/Significance: Our findings show that the duration of co-occurring sounds affects visual perception; it changes visual sensitivity in a similar way as altering the (actual) duration of the visual stimuli does.
http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0054789
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3552845
June 2013
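The detection-sensitivity measure d' used in the study above is the standard signal-detection statistic, z(hit rate) − z(false-alarm rate). A minimal sketch follows; the function name `d_prime` and the log-linear 0.5 correction for extreme rates are common conventions assumed here, not details reported in the paper.

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Signal-detection sensitivity: d' = z(hit rate) - z(false-alarm rate).

    A log-linear correction (add 0.5 to each cell) keeps rates away
    from exactly 0 or 1, where the inverse normal CDF is undefined.
    """
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z = NormalDist().inv_cdf  # inverse standard-normal CDF
    return z(hit_rate) - z(fa_rate)
```

A d' of 0 means hits and false alarms are equally likely (no sensitivity), while larger values index better discrimination of signal from noise independently of response bias.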

Auditory modulation of visual stimulus encoding in human retinotopic cortex.

Neuroimage 2013 Apr 5;70:258-67. Epub 2013 Jan 5.

UCL Institute of Cognitive Neuroscience, 17 Queen Square, London WC1N 3BG, UK.

Sounds can modulate visual perception as well as neural activity in retinotopic cortex. Most studies in this context investigated how sounds change neural amplitude and oscillatory phase reset in visual cortex. However, recent studies in macaque monkeys show that congruence of audio-visual stimuli also modulates the amount of stimulus information carried by spiking activity of primary auditory and visual neurons. Here, we used naturalistic video stimuli and recorded the spatial patterns of functional MRI signals in human retinotopic cortex to test whether the discriminability of such patterns varied with the presence and congruence of co-occurring sounds. We found that incongruent sounds significantly impaired stimulus decoding from area V2 and there was a similar trend for V3. This effect was associated with reduced inter-trial reliability of patterns (i.e. higher levels of noise), but was not accompanied by any detectable modulation of overall signal amplitude. We conclude that sounds modulate naturalistic stimulus encoding in early human retinotopic cortex without affecting overall signal amplitude. Subthreshold modulation, oscillatory phase reset and dynamic attentional modulation are candidate neural and cognitive mechanisms mediating these effects.
http://dx.doi.org/10.1016/j.neuroimage.2012.12.061
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3625122
April 2013
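Pattern discriminability of the kind tested in the study above is often quantified with a correlation-based nearest-template classifier over voxel patterns. The sketch below illustrates that generic technique under assumptions: the function name `decode`, the data layout, and the template-averaging scheme are illustrative, and the paper's actual decoding pipeline may differ.

```python
import numpy as np

def decode(train_patterns, test_patterns):
    """Assign each test pattern to the training class whose mean pattern
    it correlates with most strongly.

    train_patterns : dict mapping class label -> array (trials, voxels)
    test_patterns  : array (trials, voxels)
    returns        : list of predicted class labels
    """
    labels = sorted(train_patterns)
    templates = np.stack([train_patterns[c].mean(axis=0) for c in labels])

    def z(a):
        # z-score each pattern so dot product / n equals Pearson r
        a = a - a.mean(axis=-1, keepdims=True)
        return a / a.std(axis=-1, keepdims=True)

    r = z(test_patterns) @ z(templates).T / test_patterns.shape[-1]
    return [labels[i] for i in r.argmax(axis=1)]
```

Noisier, less reliable trial-to-trial patterns (as reported for the incongruent-sound condition) lower these correlations and therefore decoding accuracy, even when the mean response amplitude is unchanged.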

Grey matter volume in early human visual cortex predicts proneness to the sound-induced flash illusion.

Proc Biol Sci 2012 Dec 24;279(1749):4955-61. Epub 2012 Oct 24.

University College London Institute of Cognitive Neuroscience, 17 Queen Square, London WC1N 3BG, UK.

Visual perception can be modulated by sounds. A drastic example of this is the sound-induced flash illusion: when a single flash is accompanied by two bleeps, it is sometimes perceived in an illusory fashion as two consecutive flashes. However, there are strong individual differences in proneness to this illusion. Some participants experience the illusion on almost every trial, whereas others almost never do. We investigated whether such individual differences in proneness to the sound-induced flash illusion were reflected in structural differences in brain regions whose activity is modulated by the illusion. We found that individual differences in proneness to the illusion were strongly and significantly correlated with local grey matter volume in early retinotopic visual cortex. Participants with smaller early visual cortices were more prone to the illusion. We propose that strength of auditory influences on visual perception is determined by individual differences in recurrent connections, cross-modal attention and/or optimal weighting of sensory channels.
http://dx.doi.org/10.1098/rspb.2012.2132
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3497249
December 2012

Better ways to improve standards in brain-behavior correlation analysis.

Front Hum Neurosci 2012 Jul 16;6:200. Epub 2012 Jul 16.

Wellcome Trust Centre for Neuroimaging, University College London, London, UK.

http://dx.doi.org/10.3389/fnhum.2012.00200
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3397314
October 2012

Auditory Stimulus Timing Influences Perceived Duration of Co-Occurring Visual Stimuli.

Front Psychol 2011 Sep 8;2:215. Epub 2011 Sep 8.

Wellcome Trust Centre for Neuroimaging at UCL, University College London, London, UK.

There is increasing interest in multisensory influences upon sensory-specific judgments, such as when auditory stimuli affect visual perception. Here we studied whether the duration of an auditory event can objectively affect the perceived duration of a co-occurring visual event. On each trial, participants were presented with a pair of successive flashes and had to judge whether the first or second was longer. Two beeps were presented with the flashes. The order of short and long stimuli could be the same across audition and vision (audio-visual congruent) or reversed, so that the longer flash was accompanied by the shorter beep and vice versa (audio-visual incongruent); or the two beeps could have the same duration as each other. Beeps and flashes could onset synchronously or asynchronously. In a further control experiment, the beep durations were much longer (tripled) than the flashes. Results showed that visual duration discrimination sensitivity (d') was significantly higher for congruent (and significantly lower for incongruent) audio-visual synchronous combinations, relative to the visual-only presentation. This effect was abolished when auditory and visual stimuli were presented asynchronously, or when sound durations tripled those of flashes. We conclude that the temporal properties of co-occurring auditory stimuli influence the perceived duration of visual stimuli and that this can reflect genuine changes in visual sensitivity rather than mere response bias.
http://dx.doi.org/10.3389/fpsyg.2011.00215
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3168883
November 2011