Much of image processing and computer vision uses three sensors because we ourselves have trichromatic vision. However, some solutions to vision problems use additional hardware, not available to us, in order to obtain the best performance. For example, the idea of placing a coloured filter in front of a vision system and capturing two images, one with and one without the filter, has recently been revisited in the context of the illumination estimation problem. It was shown that this plus-filter formulation makes illuminant estimation easier to solve, so illumination is easier to estimate and discount than with traditional methods. In this paper we propose that this two-image approach is in fact plausible for human visual processing. Indeed, a little-known and little-explained optical fact is that light striking the central part of the retina is pre-filtered by a yellow filter (the macular pigment), whereas no pre-filtering occurs for light striking other parts of the retina. Since the world we see is a composite of several images taken at different fixation points, we argue that the visual system in principle has access to two images: normal RGBs and RGBs measured through a yellow filter. Experiments demonstrate that the illumination estimation performance possible starting with the human visual system's sensitivities is excellent and, as for machine vision, is much better than that of non-plus-filter algorithms. We discuss the wider relevance of this result to the vision and imaging community.
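The abstract does not spell out the estimation procedure, but the two-image idea it describes is commonly realised as follows: for each candidate illuminant a 3x3 linear transform mapping unfiltered RGBs to filtered RGBs is learned in advance, and at run time the scene illuminant is taken to be the candidate whose transform best predicts the filtered image from the unfiltered one. The sketch below illustrates that selection step; the function names and the least-squares fitting choice are our assumptions for illustration, not details taken from the paper.

```python
import numpy as np

def fit_filter_transform(rgb, rgb_filtered):
    """Least-squares 3x3 matrix M such that rgb @ M approximates rgb_filtered.

    rgb, rgb_filtered: (N, 3) arrays of per-pixel responses captured
    without and with the coloured filter, under one known illuminant.
    (Hypothetical training step: one transform per candidate illuminant.)
    """
    M, *_ = np.linalg.lstsq(rgb, rgb_filtered, rcond=None)
    return M

def estimate_illuminant(rgb, rgb_filtered, transforms):
    """Return the index of the candidate illuminant whose precomputed
    transform best maps the unfiltered image onto the filtered one."""
    errors = [np.linalg.norm(rgb @ M - rgb_filtered) for M in transforms]
    return int(np.argmin(errors))
```

Under this scheme the filtered image plays the role the macular-pigment-filtered foveal image would play in the biological analogue proposed by the paper: a second, spectrally shifted measurement of the same scene.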
Number of pages: 7
Publication status: Published - Apr 2005
Event: IEE International Conference on Visual Information Engineering, Glasgow, Scotland
Duration: 4 Apr 2005 → 6 Apr 2005