Abstract
Our visual system provides a distance-invariant percept of object size by integrating retinal image size with viewing distance (size constancy). Single-unit studies in animals have shown that some distance cues, especially oculomotor cues such as vergence and accommodation, can modulate signals in the thalamus and V1 at the earliest stages of processing [1, 2, 3, 4, 5, 6, 7]. Accordingly, one might predict that size constancy emerges very early in time [8, 9, 10], even as visual signals are still being processed in the thalamus. So far, studies that have looked directly at size coding have either used fMRI, which has poor temporal resolution [11, 12, 13], or relied on inadequate stimuli (pictorial illusions presented on a monitor at a fixed distance [11, 12, 14, 15]). Here, we physically moved the monitor to different distances, a more ecologically valid paradigm that emulates what happens in everyday life and exemplifies the growing trend of “bringing the real world into the lab.” Using this paradigm in combination with electroencephalography (EEG), we examined the computation of size constancy in real time under real-world viewing conditions. Our study provides strong evidence that, even though oculomotor distance cues have been shown to modulate the spiking rates of neurons in the thalamus and V1, the integration of viewing distance cues with retinal image size takes at least 150 ms to unfold. This suggests that the size-constancy-related activation patterns in V1 reported in previous fMRI studies (e.g., [12, 13]) reflect later processing within V1 and/or top-down input from higher-level visual areas.
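For readers unfamiliar with the underlying geometry, the following sketch (standard textbook size-distance scaling, not taken from this article) shows why retinal size alone cannot yield a distance-invariant percept. The symbols S (physical size), D (viewing distance), θ (retinal angular size), and D̂ (the brain's distance estimate, e.g., from vergence) are illustrative.

```latex
% Classical size--distance geometry (Emmert's-law-style scaling; a textbook
% relation used here for illustration, not a result of this paper).
% An object of physical size S at viewing distance D subtends the retinal angle
\[
  \theta \;=\; 2\arctan\!\left(\frac{S}{2D}\right) \;\approx\; \frac{S}{D}
  \quad \text{(small-angle approximation)},
\]
% so the same object at half the distance doubles its retinal image size.
% A distance-invariant size estimate \hat{S} therefore requires rescaling the
% retinal size by an estimate \hat{D} of the viewing distance:
\[
  \hat{S} \;=\; \hat{D}\cdot 2\tan\!\left(\frac{\theta}{2}\right) \;\approx\; \hat{D}\,\theta .
\]
```

The article's question concerns when this rescaling occurs in neural processing, not whether it occurs: the reported EEG result is that the combination of θ with distance information takes at least 150 ms to unfold.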
| Original language | English |
| --- | --- |
| Pages (from-to) | 2237-2243.e4 |
| Number of pages | 7 |
| Journal | Current Biology |
| Volume | 29 |
| Issue number | 13 |
| Early online date | 27 Jun 2019 |
| DOIs | |
| Publication status | Published - 8 Jul 2019 |