3,692 results match your criteria Journal of Vision [Journal]


Scotopic contour and shape discrimination using radial frequency patterns.

J Vis 2019 Feb;19(2)

Ophthalmic Genetics and Visual Function Branch, National Eye Institute, National Institutes of Health, Bethesda, MD, USA.

Radial frequency (RF) patterns are valuable tools for investigations of contour integration and shape discrimination. Under photopic conditions, healthy observers can detect deformations from circularity in RF patterns as small as 3 seconds of arc. Such fine discrimination may be facilitated by cortical curvature detectors or global shape-detecting mechanisms that favor a closed contour. Read More
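For readers unfamiliar with the stimulus, a radial frequency pattern is conventionally a circle whose radius is sinusoidally modulated with polar angle (Wilkinson, Wilson, & Habak, 1998). The sketch below is a minimal, illustrative construction of such a contour, not the authors' stimulus code; all parameter names are ours.

```python
import numpy as np

def rf_contour(base_radius=1.0, radial_freq=5, amplitude=0.05, phase=0.0, n=1024):
    """Radial frequency (RF) contour: a circle whose radius is modulated
    sinusoidally as a function of polar angle. amplitude = 0 gives a perfect
    circle; near-threshold amplitudes probe deformation discrimination."""
    theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    r = base_radius * (1.0 + amplitude * np.sin(radial_freq * theta + phase))
    return r * np.cos(theta), r * np.sin(theta)

x, y = rf_contour(amplitude=0.002)  # a barely deformed circle
```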

Source
http://jov.arvojournals.org/article.aspx?doi=10.1167/19.2.7
http://dx.doi.org/10.1167/19.2.7
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6372011
February 2019

Scene categorization in the presence of a distractor.

Authors:
Jirí Lukavský

J Vis 2019 Feb;19(2)

Institute of Psychology, Czech Academy of Sciences, Prague, Czech Republic.

Humans display a very good understanding of the content in briefly presented photographs. To achieve this understanding, humans rely on information from both high-acuity central vision and peripheral vision. Previous studies have investigated the relative contribution of central and peripheral vision. Read More

Source
http://dx.doi.org/10.1167/19.2.6
February 2019

Decoding go/no-go decisions from eye movements.

J Vis 2019 Feb;19(2)

Department of Ophthalmology & Visual Sciences, University of British Columbia, Vancouver, Canada.

Neural activity in brain areas involved in the planning and execution of eye movements predicts the outcome of an upcoming perceptual decision. Many real-world decisions, such as whether to swing at a baseball pitch, are accompanied by characteristic eye-movement behavior. Here we ask whether human eye-movement kinematics can sensitively predict decision outcomes in a go/no-go task requiring rapid interceptive hand movements. Read More

Source
http://dx.doi.org/10.1167/19.2.5
February 2019

Visual communication of how fabrics feel.

J Vis 2019 Feb;19(2)

Department of Psychology, New York University Abu Dhabi, Abu Dhabi, UAE.

Although product photos and movies are abundantly present in online shopping environments, little is known about how much of the real product experience they capture. While previous studies have shown that movies or interactive imagery give users the impression that these communication forms are more effective, there are no studies addressing this issue quantitatively. We used nine different samples of jeans, because fabrics in general represent a large and interesting product category and because jeans specifically can be visually rather similar while being haptically rather different. Read More

Source
http://dx.doi.org/10.1167/19.2.4
February 2019

When predictions fail: Correction for extrapolation in the flash-grab effect.

J Vis 2019 Feb;19(2)

Melbourne School of Psychological Sciences, University of Melbourne, Melbourne, Australia.

Motion-induced position shifts constitute a broad class of visual illusions in which motion and position signals interact in the human visual pathway. In such illusions, the presence of visual motion distorts the perceived positions of objects in nearby space. Predictive mechanisms, which could contribute to compensating for processing delays due to neural transmission, have been given as an explanation. Read More

Source
http://dx.doi.org/10.1167/19.2.3
February 2019

Hole superiority effect with 3D figures formed by binocular disparity.

J Vis 2019 Feb;19(2)

CAS Center for Excellence in Brain Science and Intelligence Technology, Shanghai, China.

The global-first theory of topological perception claims that topological perception is prior to the perception of local features (e.g., Chen, 1982, 2005). Read More

Source
http://dx.doi.org/10.1167/19.2.2
February 2019

The contrast sensitivity function of a small cryptobenthic marine fish.

J Vis 2019 Feb;19(2)

Animal Evolutionary Ecology, Institute of Evolution and Ecology, Department of Biology, Faculty of Science, University of Tübingen, Tübingen, Germany.

Spatial resolution is a key property of eyes when it comes to understanding how animals' visual signals are perceived. This property can be robustly estimated by measuring the contrast sensitivity as a function of different spatial frequencies, defined as the number of achromatic vertical bright and dark stripe pairs within one degree of visual angle. This contrast sensitivity function (CSF) has been estimated for different animal groups, but data on fish are limited to two free-swimming, freshwater species (i. Read More
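As a worked example of the spatial-frequency convention used here (one cycle = one bright/dark stripe pair, counted per degree of visual angle), the sketch below converts a physical stripe-pair width and viewing distance into cycles per degree and defines sensitivity as the reciprocal of the contrast threshold. It is an illustrative calculation under those standard definitions, not code from the study.

```python
import math

def cycles_per_degree(cycle_width, viewing_distance):
    """Spatial frequency of a grating whose full cycle (one bright + one dark
    stripe) has width `cycle_width`, viewed from `viewing_distance` (same units)."""
    cycle_angle_deg = math.degrees(2.0 * math.atan(cycle_width / (2.0 * viewing_distance)))
    return 1.0 / cycle_angle_deg

def contrast_sensitivity(contrast_threshold):
    """Sensitivity is conventionally the inverse of the (Michelson) contrast threshold."""
    return 1.0 / contrast_threshold

print(cycles_per_degree(0.005, 0.5))   # a 5 mm stripe pair at 0.5 m: ~1.75 cyc/deg
print(contrast_sensitivity(0.02))      # a 2% contrast threshold: sensitivity of 50
```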

Source
http://dx.doi.org/10.1167/19.2.1
February 2019

Corrections.

J Vis 2019 Jan;19(1):18

Source
http://dx.doi.org/10.1167/19.1.18
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6357808
January 2019

Recurrence quantification analysis of eye movements during mental imagery.

J Vis 2019 Jan;19(1):17

Department of Psychology, University of Bern, Bern, Switzerland.

Several studies have demonstrated similarities of eye fixations during mental imagery and visual perception, but, to our knowledge, the temporal characteristics of eye movements during imagery have not yet been considered in detail. To fill this gap, the same data are analyzed with conventional spatial techniques such as analysis of areas of interest (AOI), ScanMatch, and MultiMatch, and with recurrence quantification analysis (RQA), a new way of analyzing gaze data by tracking re-fixations and their temporal dynamics. Participants viewed and afterwards imagined three different kinds of pictures (art, faces, and landscapes) while their eye movements were recorded. Read More
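To make the RQA idea concrete: two fixations count as recurrent if they land within a chosen radius of each other, and summary measures are read off the resulting recurrence matrix (Anderson, Bischof, Laidlaw, Risko, & Kingstone, 2013). The sketch below is a minimal illustration of that definition, not the analysis pipeline used in the study.

```python
import numpy as np

def recurrence_matrix(fixations, radius):
    """Boolean matrix: entry (i, j) is True when fixations i and j fall within
    `radius` of each other (same units as the fixation coordinates)."""
    f = np.asarray(fixations, dtype=float)
    dists = np.linalg.norm(f[:, None, :] - f[None, :, :], axis=-1)
    return dists <= radius

def recurrence_rate(rec):
    """Percentage of distinct fixation pairs (upper triangle) that are recurrent."""
    n = rec.shape[0]
    return 100.0 * np.triu(rec, k=1).sum() / (n * (n - 1) / 2.0)

rec = recurrence_matrix([(0, 0), (0.4, 0.3), (5, 5), (0.1, 0.1)], radius=1.0)
print(recurrence_rate(rec))  # 3 of 6 pairs are recurrent -> 50.0
```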

Source
http://jov.arvojournals.org/article.aspx?doi=10.1167/19.1.17
http://dx.doi.org/10.1167/19.1.17
January 2019

A reevaluation of Whittle (1986, 1992) reveals the link between detection thresholds, discrimination thresholds, and brightness perception.

J Vis 2019 Jan;19(1):16

Universitat Pompeu Fabra, Barcelona, Spain.

In 1986, Paul Whittle investigated the ability to discriminate between the luminance of two small patches viewed upon a uniform background. In 1992, Paul Whittle asked subjects to manipulate the luminance of a number of patches on a uniform background until their brightness appeared to vary from black to white with even steps. The data from the discrimination experiment almost perfectly predicted the gradient of the function obtained in the brightness experiment, indicating that the two experimental methodologies were probing the same underlying mechanism. Read More

Source
http://dx.doi.org/10.1167/19.1.16
January 2019

The effects of age and cognitive load on peripheral-detection performance.

J Vis 2019 Jan;19(1):15

Schepens Eye Research Institute of Massachusetts Eye and Ear, Department of Ophthalmology, Harvard Medical School, Boston, MA.

Age-related declines in both peripheral vision and cognitive resources could contribute to the increased crash risk of older drivers. However, it is unclear whether increases in age and cognitive load result in equal detriments to detection rates across all peripheral target eccentricities (general interference effect) or whether these detriments become greater with increasing eccentricity (tunnel effect). In the current study we investigated the effects of age and cognitive load on the detection of peripheral motorcycle targets (at 5°-30° eccentricity) in static images of intersections. Read More

Source
http://dx.doi.org/10.1167/19.1.15
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6348997
January 2019

Scene layout priming relies primarily on low-level features rather than scene layout.

J Vis 2019 Jan;19(1):14

Department of Psychology, University of California, San Diego, CA, USA.

The ability to perceive and remember the spatial layout of a scene is critical to understanding the visual world, both for navigation and for other complex tasks that depend upon the structure of the current environment. However, surprisingly little work has investigated how and when scene layout information is maintained in memory. One prominent line of work investigating this issue is a scene-priming paradigm (e. Read More

Source
http://jov.arvojournals.org/article.aspx?doi=10.1167/19.1.14
http://dx.doi.org/10.1167/19.1.14
January 2019

Chromatic and luminance sensitivity for skin and skinlike textures.

J Vis 2019 Jan;19(1):13

Department of Psychological Sciences, University of Liverpool, Liverpool, UK.

Despite the importance of the appearance of human skin for theoretical and practical purposes, little is known about visual sensitivity to subtle skin-tone changes, and whether the human visual system is indeed optimized to discern skin-color changes that confer some evolutionary advantage. Here, we report discrimination thresholds in a three-dimensional chromatic-luminance color space for natural skin and skinlike textures, and compare these to thresholds for uniform stimuli of the same mean color. We find no evidence that discrimination performance is superior along evolutionarily relevant color directions. Read More

Source
http://dx.doi.org/10.1167/19.1.13
January 2019

Temporal attention improves perception similarly at foveal and parafoveal locations.

J Vis 2019 Jan;19(1):12

Department of Psychology & Center for Neural Science, New York University, New York, NY, USA.

Temporal attention, the prioritization of information at a specific point in time, improves visual performance, but it is unknown whether it does so to the same extent across the visual field. This knowledge is necessary to establish whether temporal attention compensates for heterogeneities in discriminability and speed of processing across the visual field. Discriminability and rate of information accrual depend on eccentricity as well as on polar angle, a characteristic known as performance fields. Read More

Source
http://jov.arvojournals.org/article.aspx?doi=10.1167/19.1.12
http://dx.doi.org/10.1167/19.1.12
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6336355
January 2019

Does task relevance shape the 'shift to global' in ambiguous motion perception?

J Vis 2019 Jan;19(1)

Laboratory of Experimental Psychology, Department of Brain and Cognition, KU Leuven, Belgium.

Perception can differ even when the stimulus information is the same. Previous studies have demonstrated the importance of experience and relevance on visual perception. We examined the influence of perceptual relevance in an auxiliary task on subsequent perception of an ambiguous stimulus. Read More

Source
http://dx.doi.org/10.1167/19.1.8
January 2019

The motion-induced contour revisited: Observations on 3-D structure and illusory contour formation in moving stimuli.

J Vis 2019 Jan;19(1)

Department of Psychology, University of Nevada, Reno, NV, USA.

The motion-induced contour (MIC) was first described by Victor Klymenko and Naomi Weisstein in a series of papers in the 1980s. The effect is created by rotating the outline of a tilted cube in depth. When one of the vertical edges is removed, an illusory contour can be seen in its place. Read More

Source
http://dx.doi.org/10.1167/19.1.7
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6336206
January 2019

A top-down saliency model with goal relevance.

J Vis 2019 Jan;19(1):11

Viterbi School of Engineering, University of Southern California, Los Angeles, CA, USA.

Most visual saliency models that integrate top-down factors process task and context information using machine learning techniques. Although these methods have been successful in improving prediction accuracy for human attention, they require significant training data and are unable to provide an understanding of what makes information relevant to a task such that it will attract gaze. This means that we still lack a general theory for the interaction between task and attention or eye movements. Read More

Source
http://jov.arvojournals.org/article.aspx?doi=10.1167/19.1.11
http://dx.doi.org/10.1167/19.1.11
January 2019

Age-related changes in local and global visual perception.

J Vis 2019 Jan;19(1):10

State Key Laboratory of Brain and Cognitive Science, Institute of Biophysics, Chinese Academy of Sciences, Beijing, China.

Over the past 40 years, research has addressed the impact of the aging process on various aspects of visual function. Most studies have focused on age-related visual impairment in low-level local features of visual objects, such as orientation, contrast sensitivity and spatial frequency. However, whether there are lifespan changes in global visual perception is still unclear. Read More

Source
http://jov.arvojournals.org/article.aspx?doi=10.1167/19.1.10
http://dx.doi.org/10.1167/19.1.10
January 2019

V1-based modeling of discrimination between natural scenes within the luminance and isoluminant color planes.

J Vis 2019 Jan;19(1)

Emmanuel College, University of Cambridge, Cambridge, UK.

We have been developing a computational visual difference predictor model that can predict how human observers rate the perceived magnitude of suprathreshold differences between pairs of full-color naturalistic scenes (To, Lovell, Troscianko, & Tolhurst, 2010). The model is based closely on V1 neurophysiology and has recently been updated to more realistically implement sequential application of nonlinear inhibitions (contrast normalization followed by surround suppression; To, Chirimuuta, & Tolhurst, 2017). The model is based originally on a reliable luminance model (Watson & Solomon, 1997) which we have extended to the red/green and blue/yellow opponent planes, assuming that the three planes (luminance, red/green, and blue/yellow) can be modeled similarly to each other with narrow-band oriented filters. Read More
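For orientation, the nonlinearity referred to here as contrast normalization is usually written as an excitatory drive divided by a pooled inhibitory signal plus a semisaturation constant (the general form used by Watson & Solomon, 1997). The sketch below shows only that textbook form; it is not the authors' model, and the exponents and pooling choice are placeholders.

```python
import numpy as np

def contrast_normalization(excitatory, inhibitory_pool, p=2.4, q=2.0, sigma=0.1):
    """Generic divisive normalization: R = E**p / (sigma**q + sum(I**q)).
    `excitatory` is one filter's rectified output; `inhibitory_pool` collects
    the rectified outputs of the filters that normalize it."""
    e = np.abs(excitatory) ** p
    pool = np.sum(np.abs(np.asarray(inhibitory_pool, dtype=float)) ** q)
    return e / (sigma ** q + pool)

print(contrast_normalization(0.5, [0.5, 0.2, 0.1]))
```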

Source
http://dx.doi.org/10.1167/19.1.9
January 2019

Transient and sustained effects of stimulus properties on the generation of microsaccades.

J Vis 2019 Jan;19(1)

Sagol School of Neuroscience, Tel Aviv University, Tel Aviv, Israel.

Saccades shift the gaze rapidly every few hundred milliseconds from one fixated location to the next, producing a flow of visual input into the visual system even in the absence of changes in the environment. During fixation, small saccades called microsaccades are produced 1-3 times per second, generating a flow of visual input. The characteristics of this visual flow are determined by the timings of the saccades and by the characteristics of the visual stimuli on which they are performed. Read More

Source
http://dx.doi.org/10.1167/19.1.6
January 2019

Did I do that? Detecting a perturbation to visual feedback in a reaching task.

J Vis 2019 Jan;19(1)

Departments of Psychology and Center for Neural Science, New York University, New York, NY, USA.

The motor system executes actions in a highly stereotyped manner despite the high number of degrees of freedom available. Studies of motor adaptation leverage this fact by disrupting, or perturbing, visual feedback to measure how the motor system compensates. To elicit detectable effects, perturbations are often large compared to trial-to-trial reach endpoint variability. Read More

Source
http://jov.arvojournals.org/article.aspx?doi=10.1167/19.1.5
http://dx.doi.org/10.1167/19.1.5
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6334820
January 2019

Separating memoranda in depth increases visual working memory performance.

J Vis 2019 Jan;19(1)

Psychology Department, University of California San Diego, La Jolla, CA, USA.

Visual working memory is the mechanism supporting the continued maintenance of information after sensory inputs are removed. Although the capacity of visual working memory is limited, memoranda that are spaced farther apart on a 2-D display are easier to remember, potentially because neural representations are more distinct within retinotopically organized areas of visual cortex during memory encoding, maintenance, or retrieval. The impact on memory of spatial separability in depth is less clear, even though depth information is essential to guiding interactions with objects in the environment. Read More

Source
http://dx.doi.org/10.1167/19.1.4
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6333109
January 2019

Predictive coding of visual motion in both monocular and binocular human visual processing.

J Vis 2019 Jan;19(1)

Melbourne School of Psychological Sciences, The University of Melbourne, Melbourne, Australia.

Neural processing of sensory input in the brain takes time, and for that reason our awareness of visual events lags behind their actual occurrence. One way the brain might compensate to minimize the impact of the resulting delays is through extrapolation. Extrapolation mechanisms have been argued to underlie perceptual illusions in which moving and static stimuli are mislocalised relative to one another (such as the flash-lag and related effects). Read More

Source
http://jov.arvojournals.org/article.aspx?doi=10.1167/19.1.3
http://dx.doi.org/10.1167/19.1.3
January 2019

The separable effects of feature precision and item load in visual short-term memory.

J Vis 2019 Jan;19(1)

Melbourne School of Psychological Sciences, The University of Melbourne, Parkville, Australia.

Visual short-term memory (VSTM) has been described as being limited by the number of discrete visual objects, the aggregate quantity of information across multiple visual objects, or some combination of the two. Many recent studies examining these capacity limitations have shown that increasing the number of items in VSTM increases the frequency and magnitude of errors in a participant's recall of the stimulus. This increase in response dispersion has been interpreted as a loss of precision in an item's representation as the number of items in memory increases, possibly due to a change in the tuning of the underlying representation. Read More

Source
http://jov.arvojournals.org/article.aspx?doi=10.1167/19.1.2
http://dx.doi.org/10.1167/19.1.2
January 2019

What pops out for you pops out for fish: Four common visual features.

J Vis 2019 Jan;19(1)

Life Sciences Department, Ben-Gurion University of the Negev, Beer-Sheva, Israel.

Visual search is the ability to detect a target of interest against a background of distracting objects. For many animals, performing this task fast and accurately is crucial for survival. Typically, visual-search performance is measured by the time it takes the observer to detect a target against a backdrop of distractors. Read More

Source
http://dx.doi.org/10.1167/19.1.1
January 2019

The Human Connectome Project 7 Tesla retinotopy dataset: Description and population receptive field analysis.

J Vis 2018 Dec;18(13):23

Center for Magnetic Resonance Research, Department of Radiology, University of Minnesota, Minneapolis, MN, USA.

About a quarter of human cerebral cortex is dedicated mainly to visual processing. The large-scale spatial organization of visual cortex can be measured with functional magnetic resonance imaging (fMRI) while subjects view spatially modulated visual stimuli, also known as "retinotopic mapping." One of the datasets collected by the Human Connectome Project involved ultrahigh-field (7 Tesla) fMRI retinotopic mapping in 181 healthy young adults (1. Read More
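As background on the method, population receptive field (pRF) analysis typically models each voxel's receptive field as a 2-D Gaussian in visual space and predicts the (pre-hemodynamic) response as the overlap of that Gaussian with the stimulus aperture at each timepoint (Dumoulin & Wandell, 2008). The sketch below illustrates that forward model in its simplest form; it is not the HCP analysis pipeline, and all names are illustrative.

```python
import numpy as np

def gaussian_prf(x0, y0, sigma, xx, yy):
    """Isotropic 2-D Gaussian pRF centered at (x0, y0) with size sigma (deg)."""
    return np.exp(-((xx - x0) ** 2 + (yy - y0) ** 2) / (2.0 * sigma ** 2))

def predicted_timecourse(prf, apertures):
    """Pre-HRF prediction: overlap of the pRF with each binary stimulus aperture.
    apertures has shape (n_timepoints, height, width)."""
    return np.tensordot(apertures, prf, axes=([1, 2], [0, 1]))

deg = np.linspace(-8, 8, 101)                 # visual field sampled in degrees
xx, yy = np.meshgrid(deg, deg)
prf = gaussian_prf(x0=2.0, y0=-1.0, sigma=1.5, xx=xx, yy=yy)
apertures = np.zeros((10, 101, 101))
apertures[:, :, 40:60] = 1.0                  # a vertical bar aperture
print(predicted_timecourse(prf, apertures).shape)  # (10,)
```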

Source
http://dx.doi.org/10.1167/18.13.23
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6314247
December 2018

Cross-task perceptual learning of object recognition in simulated retinal implant perception.

J Vis 2018 Dec;18(13):22

Department of Psychology, Otto-von-Guericke University Magdeburg, Germany.

The perception gained through retinal implants (RI) is limited, which calls for a learning regime to improve patients' visual perception. Here we simulated RI vision and investigated whether object recognition in RI patients can be improved and maintained through training. Importantly, we asked whether the trained object recognition generalizes to a new task context and to new viewpoints of the trained objects. Read More

Source
http://dx.doi.org/10.1167/18.13.22
December 2018

Prediction shapes peripheral appearance.

J Vis 2018 Dec;18(13):21

Abteilung Allgemeine Psychologie, Justus-Liebig-Universität Giessen, Giessen, Germany.

Peripheral perception is limited in terms of visual acuity, contrast sensitivity, and positional uncertainty. In the present study we used an image-manipulation algorithm (the Eidolon Factory) based on a formal description of the visual field as a tool to investigate how peripheral stimuli appear in the presence of such limitations. Observers were asked to match central and peripheral stimuli, both configurations of superimposed geometric shapes and patches of natural images, in terms of the parameters controlling the amplitude of the perturbation (reach) and the cross-scale similarity of the perturbation (coherence). Read More

Source
http://dx.doi.org/10.1167/18.13.21
December 2018

Predictive remapping of visual features beyond saccadic targets.

J Vis 2018 Dec;18(13):20

Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, the Netherlands.

Visual stability is thought to be mediated by predictive remapping of the relevant object information from its current, presaccadic location to its future, postsaccadic location on the retina. However, it is heavily debated whether and what feature information is predictively remapped during the presaccadic interval. Here we examined the spatial and featural properties of predictive remapping in a set of three psychophysical studies. Read More

Source
http://dx.doi.org/10.1167/18.13.20
December 2018

Temporal frequency modulates the strength of the inhibitory interaction between motion sensors tuned to coarse and fine scales.

J Vis 2018 Dec;18(13):17

Faculty of Psychology, Universidad Complutense de Madrid, Madrid, Spain.

The perceived direction of motion of a brief moving fine scale pattern reverses when a static coarse scale pattern is added to it (Henning & Derrington, 1988). This impairment in motion direction discrimination has been explained by the inhibitory interaction between motion sensors tuned to fine and coarse scales. This interaction depends on the particular spatial frequencies mixed, the size of the stimulus, and the relative contrast of the components (Serrano-Pedraza, Goddard, & Derrington, 2007; Serrano-Pedraza & Derrington, 2010). Read More

Source
http://dx.doi.org/10.1167/18.13.17
December 2018

Computational luminance constancy from naturalistic images.

J Vis 2018 Dec;18(13):19

Neuroscience Graduate Group, Bioengineering Graduate Group, Department of Psychology, University of Pennsylvania, Philadelphia, PA, USA.

The human visual system supports stable percepts of object color even though the light that reflects from object surfaces varies significantly with the scene illumination. To understand the computations that support stable color perception, we study how estimating a target object's luminous reflectance factor (LRF; a measure of the light reflected from the object under a standard illuminant) depends on variation in key properties of naturalistic scenes. Specifically, we study how variation in target object reflectance, illumination spectra, and the reflectance of background objects in a scene impact estimation of a target object's LRF. Read More
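For context, a luminous reflectance factor is, in standard colorimetry, the luminance a surface reflects relative to a perfect (100% reflective) diffuser under the same illuminant, computed by weighting the surface reflectance and the illuminant spectrum by the luminous efficiency function. The sketch below shows that standard computation; it illustrates the quantity itself, not the estimation procedure studied in the paper, and the toy spectra are made up.

```python
import numpy as np

def luminous_reflectance_factor(reflectance, illuminant, v_lambda):
    """LRF = luminance of the sample / luminance of a perfect reflecting diffuser,
    both under the same illuminant. All three arguments are spectra sampled on the
    same wavelength grid: surface reflectance (0-1), illuminant power, and the
    CIE luminous efficiency function V(lambda)."""
    reflectance, illuminant, v_lambda = map(np.asarray, (reflectance, illuminant, v_lambda))
    return np.sum(reflectance * illuminant * v_lambda) / np.sum(illuminant * v_lambda)

# toy spectra on a coarse wavelength grid (illustrative numbers only)
print(luminous_reflectance_factor([0.2, 0.5, 0.8], [1.0, 1.0, 1.0], [0.1, 1.0, 0.6]))
```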

Source
http://dx.doi.org/10.1167/18.13.19
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6314111
December 2018

Individual differences in response precision correlate with adaptation bias.

J Vis 2018 Dec;18(13):18

Department of Neurology, University of Pennsylvania, Philadelphia, PA, USA.

The internal representation of stimuli is imperfect and subject to bias. Noise introduced at initial encoding and during maintenance degrades the precision of representation. Stimulus estimation is also biased away from recently encountered stimuli, a phenomenon known as adaptation. Read More

Source
http://dx.doi.org/10.1167/18.13.18
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6314105
December 2018

The role of implicit perceptual-motor costs in the integration of information across graph and text.

J Vis 2018 Dec;18(13):16

Department of Psychology, Rutgers University, Piscataway, NJ, USA.

Strategies used to gather visual information are typically viewed as depending solely on the value of information gained from each action. A different approach may be required when actions entail cognitive effort or deliberate control. Integration of information across a graph and text is a resource-intensive task in which decisions to switch between graph and text may take into account the resources required to plan or execute the switches. Read More

Source
http://dx.doi.org/10.1167/18.13.16
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6314110
December 2018

Spatial scaling of illusory motion perceived in a static figure.

J Vis 2018 Dec;18(13):15

Department of Life Sciences, the University of Tokyo, Tokyo, Japan.

In a phenomenon known as the Rotating Snakes illusion (Kitaoka & Ashida, 2003), illusory motion is perceived in a static figure with a specially designed luminance profile. It is known that the strength of this illusion increases with eccentricity, suggesting that the underlying mechanism of the illusion has a spatial property that changes with eccentricity. If a change in receptive-field size of responsible neurons causes the eccentricity dependence of the illusion, its strength should be spatially scalable using a scaling factor that increases with eccentricity, because the receptive field size of neurons in visual areas with retinotopy generally obeys quantitative dependence on eccentricity. Read More

Source
http://dx.doi.org/10.1167/18.13.15
December 2018

Assessing the kaleidoscope of monocular deprivation effects.

J Vis 2018 Dec;18(13):14

Department of Psychology, University of Massachusetts Boston, Boston, MA, USA.

Short-term monocular deprivation (∼150 min) temporarily shifts sensory eye balance in favor of the deprived eye (Lunghi, Burr, & Morrone, 2011; Zhou, Clavagnier, & Hess, 2013), opposite to classic deprivation studies (Hubel & Wiesel, 1970). Various types of deprivation (light-tight, diffuser lenses, image degradation) have been tested, and it seemed that a deprivation of contrast was necessary, and sufficient, for these shifts. This could be accommodated in a feedforward model of binocular combination (Meese, Georgeson, & Baker, 2006; Sperling & Ding, 2010), in which the shift reflects a (persistent) reweighting induced by an interocular gain control mechanism tasked with maintaining binocular balance (Zhou, Clavagnier, et al. Read More

Source
http://dx.doi.org/10.1167/18.13.14
December 2018

Adaptation to dynamic faces produces face identity aftereffects.

J Vis 2018 Dec;18(13):13

ARC Centre of Excellence in Cognition and its Disorders, School of Psychological Science, The University of Western Australia, Crawley, Western Australia, Australia.

Face aftereffects are well established for static stimuli and have been used extensively as a tool for understanding the neural mechanisms underlying face recognition. It has also been argued that adaptive coding, as demonstrated by face aftereffects, plays a functional role in face recognition by calibrating our face norms to reflect current experience. If aftereffects tap high-level perceptual mechanisms that are critically involved in everyday face recognition then they should also occur for moving faces. Read More

Source
http://dx.doi.org/10.1167/18.13.13
December 2018

Comparing set summary statistics and outlier pop out in vision.

J Vis 2018 Dec;18(13):12

Loewenstein Rehabilitation Center, Raanana, Israel.

Visual scenes are too complex to perceive immediately in all their details. Two strategies (among others) have been suggested as providing shortcuts for evaluating scene gist before its details: (a) Scene summary statistics provide average values that often suffice for judging sets of objects and acting in their environment. Set summary perception spans simple/complex dimensions (circle size, face emotion), various statistics (mean, variance, range), and separate statistics for discernible sets. Read More

Source
http://dx.doi.org/10.1167/18.13.12
December 2018

Anchoring visual search in scenes: Assessing the role of anchor objects on eye movements during visual search.

J Vis 2018 Dec;18(13):11

Department of Psychology, Johann Wolfgang Goethe-Universität, Frankfurt, Germany.

The arrangement of the contents of real-world scenes follows certain spatial rules that allow for extremely efficient visual exploration. What remains underexplored is the role different types of objects hold in a scene. In the current work, we seek to unveil an important building block of scenes: anchor objects. Read More

Source
http://dx.doi.org/10.1167/18.13.11
December 2018

Insufficient compensation for self-motion during perception of object speed: The vestibular Aubert-Fleischl phenomenon.

J Vis 2018 Dec;18(13)

German Center for Vertigo and Balance Disorders (DSGZ), University Hospital of Munich, Ludwig Maximilian University, Munich, Germany.

To estimate object speed with respect to the self, retinal signals must be summed with extraretinal signals that encode the speed of eye and head movement. Prior work has shown that differences in perceptual estimates of object speed based on retinal and oculomotor signals lead to biased percepts such as the Aubert-Fleischl phenomenon (AF), in which moving targets appear slower when pursued. During whole-body movement, additional extraretinal signals, such as those from the vestibular system, may be used to transform object speed estimates from a head-centered to a world-centered reference frame. Read More

Source
http://jov.arvojournals.org/article.aspx?doi=10.1167/18.13.9
http://dx.doi.org/10.1167/18.13.9
December 2018

The human visual system estimates angle features in an internal reference frame: A computational and psychophysical study.

J Vis 2018 Dec;18(13):10

Shanghai Key Laboratory of Brain Functional Genomics, Key Laboratory of Brain Functional Genomics, Ministry of Education, School of Psychology and Cognitive Science, East China Normal University, Shanghai, China.

Angle perception is an important middle-level visual process, combining line features to generate an integrated shape percept. Previous studies have proposed two theories of angle perception: a combination of lines, and a holistic feature following Weber's law. However, both theories failed to explain the dual-peak fluctuations of the just-noticeable difference (JND) across angle sizes. Read More

Source
http://dx.doi.org/10.1167/18.13.10
December 2018

Vernier learning with short- and long-staircase training and its transfer to a new location with double training.

J Vis 2018 Dec;18(13)

School of Psychological and Cognitive Sciences, IDG/McGovern Institute for Brain Research, and Peking-Tsinghua Center for Life Sciences, Peking University, Beijing, China.

We previously demonstrated that perceptual learning of Vernier discrimination, when paired with orientation learning at the same retinal location, can transfer completely to untrained locations (Wang, Zhang, Klein, Levi, & Yu, 2014; Zhang, Wang, Klein, Levi, & Yu, 2011). However, Hung and Seitz (2014) reported that the transfer is possible only when Vernier is trained with short staircases, but not with very long staircases. Here we ran two experiments to examine Hung and Seitz's conclusions. Read More

Source
http://jov.arvojournals.org/article.aspx?doi=10.1167/18.13.8
http://dx.doi.org/10.1167/18.13.8
December 2018

Inhibitory surrounds of motion mechanisms revealed by continuous tracking.

J Vis 2018 Dec;18(13)

Department of Neuroscience, University of Florence, Florence, Italy.

Continuous psychophysics is a newly developed technique that allows rapid estimation of visual thresholds by asking subjects to track a moving object, then deriving the integration window underlying tracking behavior (Bonnen, Burge, Yates, Pillow, & Cormack, 2015). Leveraging the continuous flow of stimuli and responses, continuous psychophysics allows for estimation of psychophysical thresholds in as little as 1 min. To date this technique has been applied only to tracking visual objects, where it has been used to measure localization thresholds. Read More
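To illustrate the core analysis step in continuous psychophysics: the temporal integration window can be approximated by cross-correlating the target's velocity with the observer's response velocity across lags (cf. Bonnen, Burge, Yates, Pillow, & Cormack, 2015). The sketch below is a bare-bones version of that cross-correlogram under our own naming, not the authors' analysis code.

```python
import numpy as np

def tracking_kernel(target_vel, response_vel, max_lag):
    """Cross-correlogram between target velocity and response velocity at lags
    0..max_lag (in samples); its shape approximates the tracking integration window."""
    t = np.asarray(target_vel, dtype=float) - np.mean(target_vel)
    r = np.asarray(response_vel, dtype=float) - np.mean(response_vel)
    lags = np.arange(max_lag + 1)
    xcorr = np.array([np.mean(t[: len(t) - k] * r[k:]) for k in lags])
    return lags, xcorr

# toy data: the response follows the target with a 5-sample delay plus noise
rng = np.random.default_rng(1)
target = rng.normal(size=2000)
response = np.roll(target, 5) + 0.5 * rng.normal(size=2000)
lags, kernel = tracking_kernel(target, response, max_lag=20)
print(lags[np.argmax(kernel)])  # ~5
```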

Source
http://dx.doi.org/10.1167/18.13.7
December 2018

Relative contributions of low- and high-luminance components to material perception.

J Vis 2018 Dec;18(13)

Research Institute of Electrical Communication, Tohoku University, Sendai, Miyagi, Japan.

Besides specular highlights, image pixels that represent clues to recognizing the object material, such as shading between threads of fabrics, often yield relatively lower luminance in the image. Here, we psychophysically examined how lower and higher luminance components contribute to material perception. We created two types of luminance-modulated images, low- and high-luminance-preserved (LLP and HLP) images, and instructed observers to choose which modified image resulted in a material impression closer to the original. Read More

Source
http://dx.doi.org/10.1167/18.13.6
December 2018

Beyond fixation durations: Recurrence quantification analysis reveals spatiotemporal dynamics of infant visual scanning.

J Vis 2018 Dec;18(13)

TALBY Study Team: Haiko Ballieux, Elena Kushnerenko, Mark H. Johnson, Annette Karmiloff-Smith, Deirdre Birtles & Derek G. Moore.

Standard looking-duration measures in eye-tracking data provide only general quantitative indices, while details of the spatiotemporal structuring of fixation sequences are lost. To overcome this, various tools have been developed to measure the dynamics of fixations. However, these analyses are only useful when stimuli have high perceptual similarity and they require the previous definition of areas of interest (AOIs). Read More

Source
http://jov.arvojournals.org/article.aspx?doi=10.1167/18.13.5
http://dx.doi.org/10.1167/18.13.5
December 2018

Naturally glossy: Gloss perception, illumination statistics, and tone mapping.

J Vis 2018 Dec;18(13)

Department of Computer Science and Technology, University of Cambridge, Cambridge, UK.

Recognizing materials and understanding their properties is very useful, perhaps critical, in daily life as we encounter objects and plan our interactions with them. Visually derived estimates of material properties guide where and with what force we grasp an object. However, the estimation of material properties, such as glossiness, is a classic ill-posed problem. Read More

Source
http://jov.arvojournals.org/article.aspx?doi=10.1167/18.13.4
http://dx.doi.org/10.1167/18.13.4
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6279370
December 2018

Semantic category priming from the groundside of objects shown in nontarget locations and at unpredictable times.

J Vis 2018 Dec;18(13)

Department of Psychology, University of Arizona, Tucson, AZ, USA.

Previous research demonstrated that familiar objects that are suggested, but not consciously perceived, on the groundside of the contours of a figure activate their semantic category during perceptual organization, at least when the figure appears at fixation at an expected time. Here, we investigate whether evidence for such semantic activation extends to stimuli presented at unpredictable times in peripheral locations. Participants categorized words shown centrally as denoting natural or artificial objects (Experiments 1 and 2a) or positive or negative concepts (Experiment 2b). Read More

Source
http://jov.arvojournals.org/article.aspx?doi=10.1167/18.13.3
http://dx.doi.org/10.1167/18.13.3
December 2018

Deep learning: Using machine learning to study biological vision.

J Vis 2018 Dec;18(13)

Department of Psychology and Center for Neural Science, New York University, New York, NY, USA.

Many vision science studies employ machine learning, especially the version called "deep learning." Neuroscientists use machine learning to decode neural responses. Perception scientists try to understand how living organisms recognize objects. Read More

Source
http://jov.arvojournals.org/article.aspx?doi=10.1167/18.13.2
http://dx.doi.org/10.1167/18.13.2
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6279369
December 2018

What image features guide lightness perception?

J Vis 2018 Dec;18(13)

Department of Psychology and Centre for Vision Research, York University, Toronto, ON, Canada.

Lightness constancy is the ability to perceive black and white surface colors under a wide range of lighting conditions. This fundamental visual ability is not well understood, and current theories differ greatly on what image features are important for lightness perception. Here we measured classification images for human observers and four models of lightness perception to determine which image regions influenced lightness judgments. Read More
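For readers new to the technique, a classification image is, in its simplest form, the average of the noise fields added on trials that elicited one response minus the average on trials that elicited the other; pixels with large absolute values are the regions that drove the judgment. The sketch below shows only that basic recipe, not the observers' or models' actual analysis; the variable names are ours.

```python
import numpy as np

def classification_image(noise_fields, said_lighter):
    """noise_fields: (n_trials, h, w) noise added to the stimulus on each trial.
    said_lighter:   boolean per trial, True when the observer judged 'lighter'.
    Returns mean noise on 'lighter' trials minus mean noise on 'darker' trials."""
    noise_fields = np.asarray(noise_fields, dtype=float)
    said_lighter = np.asarray(said_lighter, dtype=bool)
    return noise_fields[said_lighter].mean(axis=0) - noise_fields[~said_lighter].mean(axis=0)

rng = np.random.default_rng(0)
ci = classification_image(rng.normal(size=(500, 32, 32)), rng.random(500) > 0.5)
print(ci.shape)  # (32, 32)
```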

Source
http://dx.doi.org/10.1167/18.13.1
December 2018

Influence of head orientation on perceived gaze direction and eye-region information.

J Vis 2018 Nov;18(12):15

UNSW Sydney, Sydney, Australia.

Using synthetic 3D head and eye models, we examined the relationship between perceived gaze direction and the information within the image eye region across changes in head orientation. For each stimulus head and eye orientation, we rendered gray-scale images with realistic pigmentation and shading, and two-tone images depicting the regions corresponding to the iris, pupil, or eye-opening. Behavioural experiments using the gray-scale images as stimuli showed that perceived gaze direction was more strongly biased opposite to head orientation (repulsive effect) in the far-eye visible condition than in the near-eye visible condition. Read More

Source
http://dx.doi.org/10.1167/18.12.15
November 2018

The role of global cues in the perceptual grouping of natural shapes.

J Vis 2018 Nov;18(12):14

Centre for Vision Research, York University, Toronto, Canada.

Perceptual grouping of the bounding contours of objects is a crucial step in visual scene understanding and object recognition. The standard perceptual model for this task, supported by a convergence of physiological and psychophysical evidence, is based upon an association field that governs local grouping, and a Markov or transitivity assumption that allows global contours to be inferred solely from these local cues. However, computational studies suggest that these local cues may not be sufficient for reliable identification of object boundaries in natural scenes. Read More

Source
http://dx.doi.org/10.1167/18.12.14
November 2018