US Government Technical Report
Multi-modal Interfacing for Human-Robot Interaction

Abstract

Conclusions: (1) Using 'context predicates', we track the actions occurring during a dialog to determine which goals (event and locative) have been achieved and which have not. (2) By tracking context predicates, we can determine which actions to act upon next, namely the predicates in the stack that have not yet been completed. (3) Locative expressions, e.g. 'there', give us a handle in command-and-control applications for attempting error correction when locative goals are under discussion. (4) By interleaving complex dialog with natural and mechanical gestures, we hope to achieve dynamic autonomy and an integrated multi-modal interface.
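The stack-based bookkeeping described in conclusions (1) and (2) can be sketched as follows. This is a minimal illustration only, not the report's implementation; all class and method names (`ContextPredicate`, `DialogContext`, `pending`, etc.) are hypothetical, as is the distinction between "event" and "locative" goal kinds drawn from the abstract.

```python
# Hypothetical sketch: track "context predicates" on a stack during a
# dialog, mark goals (event/locative) as completed, and read off which
# actions remain to be acted upon next.
from dataclasses import dataclass


@dataclass
class ContextPredicate:
    name: str              # e.g. "go-there", "pick-up"
    kind: str              # "event" or "locative"
    completed: bool = False


class DialogContext:
    def __init__(self):
        self.stack: list[ContextPredicate] = []

    def push(self, name: str, kind: str) -> None:
        """Record a new goal raised during the dialog."""
        self.stack.append(ContextPredicate(name, kind))

    def complete(self, name: str) -> bool:
        """Mark the most recent matching predicate as achieved."""
        for pred in reversed(self.stack):
            if pred.name == name and not pred.completed:
                pred.completed = True
                return True
        return False

    def pending(self) -> list[str]:
        """Predicates not yet completed are the actions to act on next."""
        return [p.name for p in self.stack if not p.completed]


ctx = DialogContext()
ctx.push("go-there", "locative")   # "go over there"
ctx.push("pick-up", "event")       # "pick up the box"
ctx.complete("go-there")           # robot reaches the location
print(ctx.pending())               # → ['pick-up']
```

Keeping completion flags on the predicates (rather than popping them) also supports conclusion (3): a completed locative predicate such as `go-there` remains available as a referent when the user corrects a location error ("no, over *there*").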
