IEEE International Conference on Systems, Man, and Cybernetics

Auditory-aware navigation for mobile robots based on reflection-robust sound source localization and visual SLAM



Abstract

Autonomous robot navigation using simultaneous localization and mapping (SLAM) is essential for scene understanding by robots. Most existing systems rely on visual information, and although such vision-based techniques are robust and useful in many situations, they struggle with certain scenarios, such as when the goal is occluded or out of frame. Introducing audio information into the navigation system addresses these issues effectively, and several audio-based methods have been developed for this purpose. However, these existing audio-based methods assume that the space around the robot is open, i.e., that no sound reflection occurs. Hence, invisible goals for which only sound reflections can be localized have not been fully considered. This paper proposes a reflection-robust sound source localization (SSL) method that uses visual SLAM. The method can handle sound sources whose direct paths are unavailable, and by localizing them it can set a navigation goal only at the actual sound source. In addition, to correct the drift present in the local estimates of visual odometry (VO), SLAM was integrated into the system, increasing the accuracy and robustness of the mapping and navigation of the proposed method. The performance of the proposed system is compared with conventional methods and proves to be efficient and robust, especially in highly reverberant situations. The integration of VO and SLAM improved the average map error by approximately 50 pts, and the SSL accuracy for direct sound paths improved by approximately 8 pts. With an online implementation of these methods, we successfully achieved audio-visual navigation toward the actual sound sources.
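The abstract does not give implementation details, but the core idea it describes (localizing the actual source while rejecting reflections, using poses provided by visual SLAM/VO) can be illustrated with a minimal sketch. The snippet below is an assumption-laden illustration, not the authors' method: it assumes direction-of-arrival (DOA) estimates are available at several SLAM poses, triangulates a candidate source position by least-squares ray intersection, and discards bearings that disagree with the consensus, since reflections observed from different poses tend to be inconsistent. The pose/DOA interfaces, function names, and thresholds are all hypothetical.

```python
import numpy as np


def bearing_to_ray(pose_xy, pose_yaw, doa_rad):
    """Convert a robot pose and a local DOA angle into a world-frame ray."""
    origin = np.asarray(pose_xy, dtype=float)
    angle = pose_yaw + doa_rad
    direction = np.array([np.cos(angle), np.sin(angle)])  # unit vector
    return origin, direction


def intersect_rays(rays):
    """Least-squares intersection point of 2D rays given as (origin, unit direction)."""
    A = np.zeros((2, 2))
    b = np.zeros(2)
    for origin, d in rays:
        # Projector onto the subspace orthogonal to the ray direction.
        P = np.eye(2) - np.outer(d, d)
        A += P
        b += P @ origin
    return np.linalg.solve(A, b)


def localize_source(observations, inlier_angle_deg=5.0, min_inliers=3):
    """
    observations: list of (pose_xy, pose_yaw, doa_rad) gathered while the robot
    moves. Returns the estimated source position, or None if the bearings are
    too inconsistent (e.g. they come mostly from wall reflections).
    """
    rays = [bearing_to_ray(p, yaw, doa) for p, yaw, doa in observations]
    estimate = intersect_rays(rays)

    # Keep only bearings that actually point at the consensus estimate;
    # reflection bearings seen from different poses tend to disagree.
    inliers = []
    for origin, d in rays:
        to_est = estimate - origin
        to_est /= np.linalg.norm(to_est)
        angle_err = np.degrees(np.arccos(np.clip(d @ to_est, -1.0, 1.0)))
        if angle_err < inlier_angle_deg:
            inliers.append((origin, d))

    if len(inliers) < min_inliers:
        return None
    return intersect_rays(inliers)


if __name__ == "__main__":
    # Example: three poses along a corridor, all hearing a source near (3, 2).
    obs = [
        ((0.0, 0.0), 0.0, np.arctan2(2.0, 3.0)),
        ((1.0, 0.0), 0.0, np.arctan2(2.0, 2.0)),
        ((2.0, 0.0), 0.0, np.arctan2(2.0, 1.0)),
    ]
    print(localize_source(obs))  # approximately [3. 2.]
```

In such a scheme, the triangulated inlier estimate would serve as the navigation goal handed to the planner, which is consistent with the abstract's statement that a goal is set only for the actual sound source rather than for its reflections.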
