Conference on Unmanned Systems Technology XVIII

A Multimodal Interface for Real-Time Soldier-Robot Teaming



Abstract

Recent research and advances in robotics have led to the development of novel platforms leveraging new sensing capabilities for semantic navigation. As these systems become increasingly robust, they support highly complex commands beyond direct teleoperation and waypoint finding, facilitating a transition from robots as tools to robots as teammates. Supporting future Soldier-Robot teaming requires communication capabilities on par with those of human-human teams for successful integration of robots. Therefore, as robots increase in functionality, it is equally important that the interface between the Soldier and the robot advance as well. Multimodal communication (MMC) enables human-robot teaming through redundancy and levels of communication more robust than single-mode interaction. Commercial-off-the-shelf (COTS) technologies released in recent years for smartphones and gaming provide tools for creating portable interfaces that incorporate MMC through speech, gestures, and visual displays. However, for multimodal interfaces to be used successfully in the military domain, they must be able to classify speech and gestures and process natural language in real time with high accuracy. For the present study, a prototype multimodal interface supporting real-time interaction with an autonomous robot was developed. The device integrated COTS Automated Speech Recognition (ASR), a custom gesture recognition glove, and natural language understanding on a tablet. This paper presents performance results (e.g., response times, accuracy) of the integrated device when commanding an autonomous robot to perform reconnaissance and surveillance activities in an unknown outdoor environment.
