Joint goal human-robot collaboration: from remembering to inferring

Vishwanathan Mohan, Ajaz Ahmad Bhat

Research output: Contribution to journal › Article › peer-review

5 Citations (Scopus)
19 Downloads (Pure)

Abstract

The ability to infer the goals and consequences of one’s own and others’ actions is a critical feature for robots to truly become our companions, thereby opening up applications in several domains. This article proposes the viewpoint that the ability to remember our own past experiences based on the present context enables us to infer the future consequences of both our own actions/goals and the observed actions/goals of others (by analogy). In this context, a biomimetic episodic memory architecture to encode diverse learning experiences of the iCub humanoid is presented. The critical feature is that partial cues from the present environment, such as perceived objects or observed actions of a human, trigger a recall of context-relevant past experiences, thereby enabling the robot to infer rewarding future states and engage in cooperative goal-oriented behaviours. An assembly task performed jointly by a human and the iCub humanoid is used to illustrate the framework. The link between the proposed framework and emerging results from neuroscience on the shared cortical basis of ‘remembering, imagining and perspective taking’ is discussed.
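To make the cue-triggered recall idea in the abstract concrete, the following minimal Python sketch stores episodes as cue/action/outcome triples and recalls the best-matching episode from partial cues to infer a joint goal. It is an illustration only, not the paper's architecture: the data structures, the overlap-based similarity measure, and all names (EpisodicMemory, recall, the assembly cues) are assumptions made for this example.

    # Illustrative sketch of cue-triggered episodic recall (hypothetical names,
    # not the architecture described in the article).
    from dataclasses import dataclass

    @dataclass
    class Episode:
        cues: set[str]    # context features, e.g. objects seen, actions observed
        action: str       # what the agent (or partner) did in that context
        outcome: str      # rewarding end state that was reached

    class EpisodicMemory:
        def __init__(self):
            self.episodes = []

        def store(self, cues, action, outcome):
            self.episodes.append(Episode(set(cues), action, outcome))

        def recall(self, partial_cues):
            """Return the stored episode whose cues best overlap the present partial cues."""
            partial = set(partial_cues)
            return max(self.episodes,
                       key=lambda ep: len(ep.cues & partial) / len(ep.cues | partial),
                       default=None)

    # Usage: after observing a human reach for a wheel near a toy-car body,
    # the recalled assembly episode suggests the joint goal and a cooperative action.
    memory = EpisodicMemory()
    memory.store(["wheel", "car_body", "human_reaches_wheel"],
                 action="hold_car_body_steady",
                 outcome="car_assembled")
    memory.store(["cup", "kettle"], action="pour_water", outcome="tea_made")

    recalled = memory.recall(["wheel", "human_reaches_wheel"])
    if recalled is not None:
        print("Inferred joint goal:", recalled.outcome)   # car_assembled
        print("Cooperative action:", recalled.action)     # hold_car_body_steady

In this toy version the "inference by analogy" is simply nearest-neighbour retrieval over cue overlap; the recalled episode's outcome stands in for the inferred rewarding future state.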
Original language: English
Pages (from-to): 579-584
Number of pages: 6
Journal: Procedia Computer Science
Volume: 123
Early online date: 3 Feb 2018
DOIs
Publication status: Published - 2018
