%0 Conference Proceedings %A Lallée, S. %A Hamann, K. %A Steinwender, J. %A Warneken, F. %A Martienz, U. %A Barron-Gonzales, H. %A Pattacini, U. %A Gori, I. %A Petit, M. %A Metta, G. %A Verschure, P. %A Dominey, P. F. %+ Department of Developmental and Comparative Psychology, Max Planck Institute for Evolutionary Anthropology, Max Planck Society %T Cooperative human robot interaction systems: IV. Communication of shared plans with naïve humans using gaze and speech %G eng %U https://hdl.handle.net/11858/00-001M-0000-002E-9C72-C %R 10.1109/IROS.2013.6696343 %D 2013 %Z Review method: peer-reviewed %B 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) %Z date of event: 2013-11-03 - 2013-11-07 %C Tokyo, Japan %X Cooperation is at the core of human social life. In this context, two major challenges face research on human-robot interaction: the first is to understand the underlying structure of cooperation, and the second is to build, based on this understanding, artificial agents that can successfully and safely interact with humans. Here we take a psychologically grounded and human-centered approach that addresses these two challenges. We test the hypothesis that optimal cooperation between a naïve human and a robot requires that the robot can acquire and execute a joint plan, and that it communicates this joint plan through ecologically valid modalities including spoken language, gesture, and gaze. We developed a cognitive system that comprises the human-like control of social actions, the ability to acquire and express shared plans, and a spoken language stage. In order to test the psychological validity of our approach, we tested 12 naïve subjects in a cooperative task with the robot.
We experimentally manipulated the presence of a joint plan (vs. a solo plan), the use of task-oriented gaze and gestures, and the use of language accompanying the unfolding plan. The quality of cooperation was analyzed in terms of proper turn taking, collisions, and cognitive errors. Results showed that while successful turn taking could take place in the absence of the explicit use of a joint plan, its presence yielded significantly greater success. One advantage of the solo plan was that the robot would always be ready to generate actions, and could thus adapt if the human intervened at the wrong time, whereas in the joint plan the robot expected the human to take his/her turn. Interestingly, when the robot represented the action as involving a joint plan, gaze provided a highly potent nonverbal cue that facilitated successful collaboration and reduced errors in the absence of verbal communication. These results support the cooperative stance in human social cognition, and suggest that cooperative robots should employ joint plans and fully communicate them in order to sustain effective collaboration, while remaining ready to adapt if the human makes a midstream mistake. %K artificial agents, cognitive architecture, cognitive errors, cognitive system, cognitive systems, Collision avoidance, cooperation, cooperative human robot interaction systems, gaze, gesture recognition, HRI, human-centered approach, human-like control, humanoid robots, human-robot interaction, human social life, joint plan, Joints, naïve humans, Pragmatics, Psychology, Robot kinematics, shared intention, shared plan communication, social actions, Speech, speech processing, spoken language, task-oriented gaze, task-oriented gestures, turn taking %P 129 - 136 %@ 978-1-4673-6358-7