Efficient visually guided behavior depends on the ability to form, retain, and compare visual representations for objects that may be separated in space and time. This ability relies on a short-term form of memory known as visual working memory. Although a considerable body of research has begun to shed light on the neurocognitive systems subserving this form of memory, few theories have addressed these processes in an integrated, neurally plausible framework. We describe a layered neural architecture that implements encoding and maintenance, and links these processes to a plausible comparison process. In addition, the model makes the novel prediction that change detection will be enhanced when metrically similar features are remembered. Results from experiments probing memory for color and for orientation were consistent with this novel prediction. These findings place strong constraints on models addressing the nature of visual working memory and its underlying mechanisms.
Number of pages: 8
Publication status: Published - May 2009