We demonstrate how combining the reasoning components from two existing systems designed for human-robot joint action produces an integrated system with greater capabilities than either of the individual systems. One of the systems supports primarily non-verbal interaction and uses dynamic neural fields to infer the user's goals and to suggest appropriate system responses; the other emphasises natural-language interaction and uses a dialogue manager to process user input and select appropriate system responses. Combining these two methods of reasoning results in a robot that is able to coordinate its actions with those of the user while employing a wide range of verbal and non-verbal communicative actions.