Abstract
We propose a neural dynamic model that specifies how low-level visual processes can be integrated with higher-level cognition to achieve flexible spatial language behaviors. This model uses real-world visual input that is linked to relational spatial descriptions through a neural mechanism for reference frame transformations. We demonstrate that the system can extract spatial relations from visual scenes, select items based on relational spatial descriptions, and perform reference object selection in a single unified architecture. We further show that the performance of the system is consistent with behavioral data in humans by simulating results from two independent empirical studies: one spatial term rating task and one study of reference object selection behavior. The architecture we present thereby achieves a high degree of task flexibility under realistic stimulus conditions. At the same time, it also provides a detailed neural grounding for complex behavioral and cognitive processes.
Original language | English
---|---
Pages (from-to) | 1490-1511
Number of pages | 22
Journal | Journal of Experimental Psychology: Learning, Memory, and Cognition
Volume | 38
Issue number | 6
DOIs |
Publication status | Published - Nov 2012