IFAC PapersOnLine

Hand gesture recognition from multibeam sonar imagery



Abstract

Divers perform demanding tasks in a complex and hazardous underwater environment, which prevents them from carrying special devices that might allow them to communicate with their robotic diving buddies. In the natural underwater human-robot interaction envisioned by the FP7 Cognitive Robotics project CADDY, hand detection and gesture interpretation are prerequisites. While hand gesture recognition is most often performed with cameras (mono and stereo), their use in the underwater environment is compromised by water turbidity and the lack of sunlight at greater depths. This paper addresses this shortcoming by introducing the concept of using high-resolution multibeam sonars (often referred to as acoustic cameras) for diver hand gesture recognition. To ensure reliable communication between the diver and the robot, classification precision must be as high as possible. This paper presents hand gesture recognition results obtained with two approaches: the convex hull method and the support vector machine (SVM). A novel approach that fuses the two methods is introduced as a way of increasing classification precision. Results obtained on more than 1000 real sonar samples show a precision of around 92% with the convex hull method and around 94% with the SVM, while fusing the two approaches yields around 99% classification precision.
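The abstract does not specify the authors' convex-hull features, SVM configuration, or fusion rule, so the following is only a minimal illustrative sketch: a pure-Python convex hull (Andrew's monotone chain, a standard choice for 2-D point sets such as a segmented hand silhouette) and a hypothetical agreement-based fusion of two classifier outputs. The gesture labels and confidence scores are invented for illustration.

```python
# Sketch only: the paper's actual feature extraction, SVM training, and
# fusion scheme are not described in the abstract; everything below is
# an assumption for illustration.

def convex_hull(points):
    """Andrew's monotone chain. Takes 2-D points (e.g. contour pixels of a
    segmented hand in a sonar image) and returns the hull vertices in
    counter-clockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # z-component of (a - o) x (b - o); > 0 means a left turn
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower = []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)

    upper = []
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)

    # last point of each half is the first point of the other half
    return lower[:-1] + upper[:-1]

def fuse(hull_pred, svm_pred):
    """Hypothetical fusion rule: accept a gesture when the two classifiers
    agree; on disagreement, defer to the more confident prediction.
    Each argument is a (label, confidence) pair."""
    (h_label, h_conf), (s_label, s_conf) = hull_pred, svm_pred
    if h_label == s_label:
        return h_label
    return h_label if h_conf > s_conf else s_label

if __name__ == "__main__":
    # Interior point (0.5, 0.5) is correctly excluded from the hull.
    print(convex_hull([(0, 0), (1, 0), (1, 1), (0, 1), (0.5, 0.5)]))
    print(fuse(("stop", 0.6), ("ascend", 0.95)))
```

Requiring agreement between two independent classifiers is one plausible way a fused decision could reject more misclassifications than either method alone, which is consistent with the precision gain reported in the abstract.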


