Publications by authors named "David Aagten-Murphy"

16 Publications


Transsaccadic integration operates independently in different feature dimensions.

J Vis 2021 07;21(7)

Department of Psychology, University of Cambridge, Cambridge, UK.

Our knowledge about objects in our environment reflects an integration of current visual input with information from preceding gaze fixations. Such a mechanism may reduce uncertainty but requires the visual system to determine which information obtained in different fixations should be combined or kept separate. To investigate the basis of this decision, we conducted three experiments. Participants viewed a stimulus in their peripheral vision and then made a saccade that shifted the object into the opposite hemifield. During the saccade, the object underwent changes of varying magnitude in two feature dimensions (Experiment 1, color and location; Experiments 2 and 3, color and orientation). Participants reported whether they detected any change and estimated one of the postsaccadic features. Integration of presaccadic with postsaccadic input was observed as a bias in estimates toward the presaccadic feature value. In all experiments, presaccadic bias weakened as the magnitude of the transsaccadic change in the estimated feature increased. Changes in the other feature, despite having a similar probability of detection, had no effect on integration. Results were quantitatively captured by an observer model where the decision whether to integrate information from sequential fixations is made independently for each feature and coupled to awareness of a feature change.
DOI: http://dx.doi.org/10.1167/jov.21.7.7
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8288057
July 2021

Transsaccadic integration relies on a limited memory resource.

J Vis 2021 05;21(5):24

Department of Psychology, University of Cambridge, UK.

Saccadic eye movements cause large-scale transformations of the image falling on the retina. Rather than starting visual processing anew after each saccade, the visual system combines post-saccadic information with visual input from before the saccade. Crucially, the relative contribution of each source of information is weighted according to its precision, consistent with principles of optimal integration. We reasoned that, if pre-saccadic input is maintained in a resource-limited store, such as visual working memory, its precision will depend on the number of items stored, as well as their attentional priority. Observers estimated the color of stimuli that changed imperceptibly during a saccade, and we examined where reports fell on the continuum between pre- and post-saccadic values. Bias toward the post-saccadic color increased with the set size of the pre-saccadic display, consistent with an increased weighting of the post-saccadic input as precision of the pre-saccadic representation declined. In a second experiment, we investigated if transsaccadic memory resources are preferentially allocated to attentionally prioritized items. An arrow cue indicated one pre-saccadic item as more likely to be chosen for report. As predicted, valid cues increased response precision and biased responses toward the pre-saccadic color. We conclude that transsaccadic integration relies on a limited memory resource that is flexibly distributed between pre-saccadic stimuli.
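The precision-weighted combination at the heart of this account is the standard maximum-likelihood cue-integration rule. As a rough sketch of the principle (illustrative values only, not the authors' fitted model):

```python
# Precision-weighted ("optimal") integration of a pre-saccadic memory
# trace with post-saccadic visual input. Each cue is weighted by its
# inverse variance, so the noisier the pre-saccadic memory, the more
# the integrated percept shifts toward the post-saccadic input.

def integrate(pre_value, pre_sigma, post_value, post_sigma):
    """Combine two noisy estimates and return the combined value
    and its (smaller) standard deviation."""
    w_pre = 1.0 / pre_sigma ** 2
    w_post = 1.0 / post_sigma ** 2
    value = (w_pre * pre_value + w_post * post_value) / (w_pre + w_post)
    sigma = (1.0 / (w_pre + w_post)) ** 0.5
    return value, sigma

# Equal reliability: the report lands midway between the two values.
print(integrate(10.0, 1.0, 20.0, 1.0))
# Degraded pre-saccadic memory (e.g., larger set size): the report is
# biased toward the post-saccadic value, as observed in the experiment.
print(integrate(10.0, 3.0, 20.0, 1.0))
```

Because the weights are inverse variances, degrading the pre-saccadic representation automatically shifts the integrated report toward the post-saccadic input, which is the qualitative pattern the abstract describes.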
DOI: http://dx.doi.org/10.1167/jov.21.5.24
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8142717
May 2021

Sounds are remapped across saccades.

Sci Rep 2020 12 7;10(1):21332. Epub 2020 Dec 7.

Allgemeine und Experimentelle Psychologie, Ludwig-Maximilians-Universität München, Munich, Germany.

To achieve visual space constancy, our brain remaps eye-centered projections of visual objects across saccades. Here, we measured saccade trajectory curvature following the presentation of visual, auditory, and audiovisual distractors in a double-step saccade task to investigate whether this stability mechanism also accounts for localized sounds. We found that saccade trajectories systematically curved away from the position at which either a light or a sound was presented, suggesting that both modalities are represented in eye-centered oculomotor centers. Importantly, the same effect was observed when the distractor preceded the execution of the first saccade. These results suggest that oculomotor centers keep track of visual, auditory, and audiovisual objects by remapping their eye-centered representations across saccades. Furthermore, they argue for the existence of a supra-modal map that keeps track of multi-sensory object locations across our movements to create an impression of space constancy.
DOI: http://dx.doi.org/10.1038/s41598-020-78163-y
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7721892
December 2020

Independent working memory resources for egocentric and allocentric spatial information.

PLoS Comput Biol 2019 02 21;15(2):e1006563. Epub 2019 Feb 21.

Department of Psychology, University of Cambridge, Cambridge, United Kingdom.

Visuospatial working memory enables us to maintain access to visual information for processing even when a stimulus is no longer present, due to occlusion, our own movements, or transience of the stimulus. Here we show that, when localizing remembered stimuli, the precision of spatial recall does not rely solely on memory for individual stimuli, but additionally depends on the relative distances between stimuli and visual landmarks in the surroundings. Across three separate experiments, we consistently observed a spatially selective improvement in the precision of recall for items located near a persistent landmark. While the results did not require that the landmark be visible throughout the memory delay period, it was essential that it was visible both during encoding and response. We present a simple model that can accurately capture human performance by considering relative (allocentric) spatial information as an independent localization estimate which degrades with distance and is optimally integrated with egocentric spatial information. Critically, allocentric information was encoded without cost to egocentric estimation, demonstrating independent storage of the two sources of information. Finally, when egocentric and allocentric estimates were put in conflict, the model successfully predicted the resulting localization errors. We suggest that the relative distance between stimuli represents an additional, independent spatial cue for memory recall. This cue information is likely to be critical for spatial localization in natural settings which contain an abundance of visual landmarks.
DOI: http://dx.doi.org/10.1371/journal.pcbi.1006563
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6400418
February 2019

Functions of Memory Across Saccadic Eye Movements.

Curr Top Behav Neurosci 2019;41:155-183

University of Cambridge, Cambridge, UK.

Several times per second, humans make rapid eye movements called saccades which redirect their gaze to sample new regions of external space. Saccades present unique challenges to both perceptual and motor systems. During the movement, the visual input is smeared across the retina and severely degraded. Once completed, the projection of the world onto the retina has undergone a large-scale spatial transformation. The vector of this transformation, and the new orientation of the eye in the external world, is uncertain. Memory for the pre-saccadic visual input is thought to play a central role in compensating for the disruption caused by saccades. Here, we review evidence that memory contributes to (1) detecting and identifying changes in the world that occur during a saccade, (2) bridging the gap in input so that visual processing does not have to start anew, and (3) correcting saccade errors and recalibrating the oculomotor system to ensure accuracy of future saccades. We argue that visual working memory (VWM) is the most likely candidate system to underlie these behaviours and assess the consequences of VWM's strict resource limitations for transsaccadic processing. We conclude that a full understanding of these processes will require progress on broader unsolved problems in psychology and neuroscience, in particular how the brain solves the object correspondence problem, to what extent prior beliefs influence visual perception, and how disparate signals arriving with different delays are integrated.
DOI: http://dx.doi.org/10.1007/7854_2018_66
January 2020

Independent selection of eye and hand targets suggests effector-specific attentional mechanisms.

Sci Rep 2018 06 21;8(1):9434. Epub 2018 Jun 21.

Allgemeine und Experimentelle Psychologie, Department Psychologie, Ludwig-Maximilians-Universität München, München, Germany.

Both eye and hand movements bind visual attention to their target locations during movement preparation. However, it remains contentious whether eye and hand targets are selected jointly by a single selection system, or individually by independent systems. To resolve this controversy, we investigated the deployment of visual attention - a proxy of motor target selection - in coordinated eye-hand movements. Results show that attention builds up in parallel both at the eye and the hand target. Importantly, the allocation of attention to one effector's motor target was not affected by the concurrent preparation of the other effector's movement at any time during movement preparation. This demonstrates that eye and hand targets are represented in separate, effector-specific maps of action-relevant locations. The eye-hand synchronisation that is frequently observed at the behavioral level must therefore emerge from mutual influences of the two effector systems at later, post-attentional processing stages.
DOI: http://dx.doi.org/10.1038/s41598-018-27723-4
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6013452
June 2018

Automatic and intentional influences on saccade landing.

J Neurophysiol 2017 08 24;118(2):1105-1122. Epub 2017 May 24.

Department of Psychology, University of Cambridge, Cambridge, United Kingdom.

Saccadic eye movements enable us to rapidly direct our high-resolution fovea onto relevant parts of the visual world. However, while we can intentionally select a location as a saccade target, the wider visual scene also influences our executed movements. In the presence of multiple objects, eye movements may be "captured" to the location of a distractor object, or be biased toward the intermediate position between objects (the "global effect"). Here we examined how the relative strengths of the global effect and visual object capture changed with saccade latency, the separation between visual items, and stimulus contrast. Importantly, while many previous studies have omitted giving observers explicit instructions, we instructed participants to either saccade to a specified target object or to the midpoint between two stimuli. This allowed us to examine how their explicit movement goal influenced the likelihood that their saccades terminated at the target, distractor, or intermediate locations. Using a probabilistic mixture model, we found evidence that both visual object capture and the global effect co-occurred at short latencies and declined as latency increased. As object separation increased, capture came to dominate the landing positions of fast saccades, with a reduced global effect. Using the mixture model fits, we dissociated the proportion of unavoidably captured saccades to each location from those intentionally directed to the task goal. From this we could extract the time course of competition between automatic capture and intentional targeting. We show that task instructions substantially altered the distribution of saccade landing points, even at the shortest latencies.

When making an eye movement to a target location, the presence of a nearby distractor can cause the saccade to unintentionally terminate at the distractor itself or at the average position between the stimuli. With probabilistic mixture models, we quantified how both unavoidable capture and goal-directed targeting were influenced by changing the task and the target-distractor separation. Using this novel technique, we could extract the time course over which automatic and intentional processes compete for control of saccades.
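The probabilistic mixture model described can be caricatured as a three-component Gaussian mixture over landing positions, one component per outcome (goal-directed, captured, global effect). The function names and hand-set weights below are illustrative; in the study, the weights were free parameters estimated from the landing-point distributions at each latency:

```python
import math

def gauss(x, mu, sigma):
    """Gaussian probability density."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def landing_density(x, target, distractor, sigma, p_target, p_capture):
    """Density of saccade landing position x under a three-component
    mixture: goal-directed saccades land near the target, captured
    saccades near the distractor, and 'global effect' saccades near
    the intermediate position. The three weights sum to 1."""
    p_global = 1.0 - p_target - p_capture
    midpoint = 0.5 * (target + distractor)
    return (p_target * gauss(x, target, sigma)
            + p_capture * gauss(x, distractor, sigma)
            + p_global * gauss(x, midpoint, sigma))
```

Fitting the weights separately for each latency bin and task instruction is what would allow the proportions of intentional and captured saccades to be traced over time, as in the study.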
DOI: http://dx.doi.org/10.1152/jn.00141.2017
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC5547269
August 2017

Adaptation to numerosity requires only brief exposures, and is determined by number of events, not exposure duration.

J Vis 2016 08;16(10):22

Exposure to a patch of dots produces a repulsive shift in the perceived numerosity of subsequently viewed dot patches. Although this is a remarkably strong effect, in which perceived numerosity can be shifted by up to 50% of the actual numerosity, very little is known about its temporal dynamics. Here we demonstrate a novel adaptation paradigm that allows numerosity adaptation to be rapidly induced at several distinct locations simultaneously. We show not only that this adaptation to numerosity is spatially specific, with different locations of the visual field adapted to high, low, or neutral stimuli, but also that it can occur after only very brief periods of adaptation. Further investigation revealed that the adaptation effect was primarily driven by the number of unique adapting events and not by either the duration of each event or the total duration of exposure to adapting stimuli. This event-based numerosity adaptation fits well with statistical models of adaptation in which the dynamic adjustment of perceptual experience, based on both previous experience of the stimuli and the current percept, acts to optimize the limited working range of perception. These results implicate a highly plastic mechanism for numerosity perception, dependent on the number of discrete adaptation events, and also demonstrate a quick and efficient paradigm suitable for examining the temporal properties of adaptation.
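The event-based account can be caricatured with a toy update rule in which each discrete adapting event, regardless of its duration, moves the perceptual gain a fixed fraction toward a saturation level (the parameters are made up, not fitted values from the study):

```python
def adapted_gain(n_events, step=0.05, floor=0.5):
    """Ratio of perceived to actual numerosity after n adapting events
    (for adaptation to a higher numerosity, so perceived < actual).
    Each event moves the gain a fixed fraction of the way toward a
    saturation floor; event duration and total exposure time play no
    role, only the count of discrete events."""
    gain = 1.0
    for _ in range(n_events):
        gain -= step * (gain - floor)
    return gain

print(adapted_gain(0))    # no adaptation: veridical (gain 1.0)
print(adapted_gain(20))   # more events -> stronger repulsion
```

The point of the sketch is only that the state variable updates per event, which is the signature property the experiments isolated.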
DOI: http://dx.doi.org/10.1167/16.10.22
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC5053365
August 2016

Central tendency effects in time interval reproduction in autism.

Sci Rep 2016 06 28;6:28570. Epub 2016 Jun 28.

Centre for Research in Autism and Education (CRAE), Department of Psychology and Human Development, UCL Institute of Education, University College London, London, WC1H 0NU, UK.

Central tendency, the tendency of judgements of quantities (lengths, durations, etc.) to gravitate towards their mean, is one of the most robust perceptual effects. A Bayesian account has recently suggested that central tendency reflects the integration of noisy sensory estimates with prior knowledge of the mean stimulus, serving to improve performance. The process is flexible, so prior knowledge is weighted more heavily when sensory estimates are imprecise, as more integration is required to reduce noise. In this study we measured central tendency in autism to evaluate a recent theoretical hypothesis suggesting that autistic perception relies less on prior knowledge than typical perception. If true, autistic children should show less central tendency than theoretically predicted from their temporal resolution. We tested autistic and age- and ability-matched typical children in two child-friendly tasks: (1) a time interval reproduction task, measuring central tendency in the temporal domain; and (2) a time discrimination task, assessing temporal resolution. Central tendency decreased with age in typical development, while temporal resolution improved. Autistic children performed far worse in temporal discrimination than the matched controls. Computational simulations suggested that central tendency was much weaker in autistic children than predicted by theoretical modelling, given their poor temporal resolution.
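The Bayesian account makes this prediction quantitative: reproductions are a precision-weighted average of the noisy sensory measurement and the prior mean. A minimal sketch with illustrative parameters (not those from the study's modelling):

```python
def reproduce(measured_ms, sigma_sensory, prior_mean_ms, sigma_prior):
    """Posterior-mean estimate for a Gaussian prior and Gaussian
    likelihood: the noisier the sensory measurement, the more the
    response regresses toward the prior mean (central tendency)."""
    w = sigma_prior ** 2 / (sigma_prior ** 2 + sigma_sensory ** 2)
    return w * measured_ms + (1.0 - w) * prior_mean_ms

# Reproducing a 900 ms interval when the prior mean is 600 ms:
print(reproduce(900.0, 50.0, 600.0, 150.0))   # good resolution: mild regression
print(reproduce(900.0, 150.0, 600.0, 150.0))  # poor resolution: strong regression
```

On this account, the autistic children's poor temporal resolution (a large sensory sigma) should have produced strong regression to the mean; the finding that their central tendency was weaker than this prediction is what suggests reduced reliance on the prior.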
DOI: http://dx.doi.org/10.1038/srep28570
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4923867
June 2016

Children with autism spectrum disorder show reduced adaptation to number.

Proc Natl Acad Sci U S A 2015 Jun 8;112(25):7868-72. Epub 2015 Jun 8.

School of Psychology, University of Western Australia, Perth, 6009, Australia; Centre for Research in Autism and Education, Department of Psychology and Human Development, University College London Institute of Education, University College London, London WC1H 0NU, United Kingdom

Autism is known to be associated with major perceptual atypicalities. We have recently proposed a general model to account for these atypicalities in Bayesian terms, suggesting that autistic individuals underuse predictive information or priors. We tested this idea by measuring adaptation to numerosity stimuli in children diagnosed with autism spectrum disorder (ASD). After exposure to large numbers of items, stimuli with fewer items appear to be less numerous (and vice versa). We found that children with ASD adapted much less to numerosity than typically developing children, although their precision for numerosity discrimination was similar to that of the typical group. This result reinforces recent findings showing reduced adaptation to facial identity in ASD and goes on to show that reduced adaptation is not unique to faces (social stimuli with special significance in autism), but occurs more generally, for both parietal and temporal functions, probably reflecting inefficiencies in the adaptive interpretation of sensory signals. These results provide strong support for the Bayesian theories of autism.
DOI: http://dx.doi.org/10.1073/pnas.1504099112
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4485114
June 2015

Time, number and attention in very low birth weight children.

Neuropsychologia 2015 Jul 28;73:60-9. Epub 2015 Apr 28.

Department of Developmental Neuroscience, Stella Maris Scientific Institute, Pisa, Italy; Department of Translational Research and New Technologies in Medicine and Surgery, University of Pisa, Italy.

Premature birth has been associated with damage in many regions of the cerebral cortex, although there is a particularly strong susceptibility for damage within the parieto-occipital lobes (Volpe, 2009). As these areas have been shown to be critical for both visual attention and magnitude perception (time, space, and number), it is important to investigate the impact of prematurity on both the magnitude and attentional systems, particularly in children without overt white matter injuries, where the lack of obvious injury may cause their difficulties to remain unnoticed. In this study, we investigated the ability of school-age preterm children (N = 29) to judge time intervals (visual, auditory, and audio-visual temporal bisection), discriminate between numerical quantities (numerosity comparison), map numbers onto space (numberline task), and maintain visuo-spatial attention (multiple object tracking). The results show that various parietal functions may be more or less robust to prematurity-related difficulties, with strong impairments found in time estimation and the attentional task, while performance on the numerical discrimination and mapping tasks remained relatively unimpaired. Thus, while our study generally supports the hypothesis that the dorsal stream is particularly vulnerable in children born preterm relative to other cortical regions, it further suggests that particular cognitive processes, as highlighted by performance on different tasks, are far more susceptible than others.
DOI: http://dx.doi.org/10.1016/j.neuropsychologia.2015.04.016
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC5040499
July 2015

Numerical Estimation in Children With Autism.

Autism Res 2015 Dec 25;8(6):668-81. Epub 2015 Mar 25.

Centre for Research in Autism and Education (CRAE), UCL Institute of Education, University College London, UK.

Number skills are often reported anecdotally and in the mass media as a relative strength for individuals with autism, yet there are remarkably few research studies addressing this issue. This study therefore sought to examine autistic children's number estimation skills and whether variation in these skills can explain, at least in part, strengths and weaknesses in children's mathematical achievement. Thirty-two cognitively able children with autism (range = 8-13 years) and 32 typical children of similar age and ability were administered a standardized test of mathematical achievement and two estimation tasks: one psychophysical nonsymbolic estimation (numerosity discrimination) task and one symbolic estimation (numberline) task. Children with autism performed worse than typical children on the numerosity task, on the numberline task, which required mapping numerical values onto space, and on the test of mathematical achievement. These findings question the widespread belief that mathematical skills are generally enhanced in autism. For both groups of children, variation in performance on the numberline task was also uniquely related to their academic achievement, over and above variation in intellectual ability; better number-to-space mapping skills went hand-in-hand with better arithmetic skills. Future research should further determine the extent and underlying causes of some autistic children's difficulties with regard to number.
DOI: http://dx.doi.org/10.1002/aur.1482
December 2015

Musical training generalises across modalities and reveals efficient and adaptive mechanisms for reproducing temporal intervals.

Acta Psychol (Amst) 2014 Mar 31;147:25-33. Epub 2013 Oct 31.

Department of Neuroscience, University of Florence, Florence 50125, Italy; CNR Institute of Neuroscience, Pisa 56100, Italy.

Expert musicians are able to time their actions accurately and consistently during a musical performance. We investigated how musical expertise influences the ability to reproduce auditory intervals and how this generalises across different techniques and sensory modalities. We first compared various reproduction strategies and interval lengths, to examine the effects in general and to optimise experimental conditions for testing the effect of musical training, and found the effects to be robust and consistent across different paradigms. Focussing on a 'ready-set-go' paradigm, subjects reproduced time intervals drawn from distributions varying in total length (176, 352 or 704 ms) or in the number of discrete intervals within the total length (3, 5, 11 or 21 discrete intervals). Overall, musicians performed more veridically than non-musicians, and all subjects reproduced auditory-defined intervals more accurately than visually-defined intervals. However, non-musicians, particularly with visual stimuli, consistently exhibited a substantial and systematic regression towards the mean interval. When subjects judged intervals from distributions of longer total length they tended to regress more towards the mean, while the ability to discriminate between discrete intervals within the distribution had little influence on subject error. These results are consistent with a Bayesian model that minimises reproduction errors by incorporating a central tendency prior weighted by the subject's own temporal precision relative to the current distribution of intervals. Finally, a strong correlation was observed between duration of formal musical training and total reproduction errors in both modalities (accounting for 30% of the variance). Taken together, these results demonstrate that formal musical training improves temporal reproduction and that this improvement transfers from audition to vision. They further demonstrate the flexibility of sensorimotor mechanisms in adapting to different task conditions to minimise temporal estimation errors.
DOI: http://dx.doi.org/10.1016/j.actpsy.2013.10.007
March 2014

The development of speed discrimination abilities.

Vision Res 2012 Oct 17;70:27-33. Epub 2012 Aug 17.

Centre for Research in Autism and Education (CRAE), Department of Psychology and Human Development, Institute of Education, University of London, London, United Kingdom.

The processing of speed is a critical part of a child's visual development, allowing children to track and interact with moving objects. Despite such importance, no study has investigated the developmental trajectory of speed discrimination abilities or precisely when these abilities become adult-like. Here, we measured speed discrimination thresholds in 5-, 7-, 9-, and 11-year-olds and adults using random dot stimuli with two different reference speeds (slow: 1.5 deg/s; fast: 6 deg/s). Sensitivity for both reference speeds improved exponentially with age and, at all ages, participants were more sensitive to the faster reference speed. However, sensitivity to slow speeds followed a more protracted developmental trajectory than that for faster speeds. Furthermore, sensitivity to the faster reference speed reached adult-like levels by 11 years, whereas sensitivity to the slower reference speed was not yet adult-like by this age. These different developmental trajectories may reflect distinct systems for processing fast and slow speeds. The relatively late development of speed processing abilities may be due to inherent limits on the integration of neuronal responses in motion-sensitive areas in early childhood.
DOI: http://dx.doi.org/10.1016/j.visres.2012.08.004
October 2012

A comparative study of face processing using scrambled faces.

Perception 2012;41(4):460-73

Yerkes National Primate Research Centre, Emory University, Atlanta, GA 30322, USA.

It is a widespread assumption that all primate species process faces in the same way because the species are closely related and they engage in similar social interactions. However, this approach ignores potentially interesting and informative differences that may exist between species. This paper describes a comparative study of holistic face processing. Twelve subjects (six chimpanzees Pan troglodytes and six rhesus monkeys Macaca mulatta) were trained to discriminate whole faces (faces with features in their canonical position) and feature-scrambled faces in two separate conditions. We found that both species tended to match the global configuration of features over local features, providing strong evidence of global precedence. In addition, we show that both species were better able to generalize from a learned configuration to an entirely novel configuration when they were first trained to match feature-scrambled faces than when they were trained with whole faces. This result implies that the subjects were able to access local information more easily when facial features were presented in a scrambled configuration, and is consistent with a holistic processing hypothesis. Interestingly, these data also suggest that, while holistic processing in chimpanzees is tuned to own-species faces, monkeys have a more general approach towards all faces. Thus, while these data confirm that both chimpanzees and rhesus monkeys process faces holistically, they also indicate that there are differences between the species that warrant further investigation.
DOI: http://dx.doi.org/10.1068/p7151
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4467555
September 2012

The role of holistic processing in face perception: evidence from the face inversion effect.

Vision Res 2011 Jun 7;51(11):1273-8. Epub 2011 Apr 7.

School of Psychology, The University of Sydney, Sydney, New South Wales 2006, Australia.

A large body of research supports the hypothesis that the human visual system does not process a face as a collection of separable facial features but as an integrated perceptual whole. One common assumption is that we quickly build holistic representations to extract useful second-order information provided by the variation between the faces of different individuals. An alternative account suggests holistic processing is a fast, early grouping process that first serves to distinguish faces from other competing objects. From this perspective, holistic processing is a quick initial response to the first-order information present in every face. To test this hypothesis we developed a novel paradigm for measuring the face inversion effect, a standard marker of holistic face processing, that measures the minimum exposure time required to discriminate between two stimuli. These new data demonstrate that holistic processing operates on whole upright faces, regardless of whether subjects are required to extract first- or second-level information. In light of this, we argue that holistic processing is a general mechanism that may occur at an earlier stage of face perception than individual discrimination to support the rapid detection of face stimuli in everyday visual scenes.
DOI: http://dx.doi.org/10.1016/j.visres.2011.04.002
June 2011