Annual Conference on Towards Autonomous Robotic Systems

Learning Objects from RGB-D Sensors for Cleaning Tasks Using a Team of Cooperative Humanoid Robots



Abstract

In this work, we address the problem of implementing cooperative search on humanoid robots (NAOs). The robots are taught to recognize a number of objects and then use the RGB-D sensors attached to their heads to search the environment for them. When an object is found, a robot must move to the target position and perform a cleaning task on that object (the locations of recognized objects can also aid navigation). The challenge is threefold: 1) navigation/exploration, 2) real-time object recognition, and 3) cooperation. This work presents preliminary results in object recognition and briefly discusses the approaches that will be employed for the entire system. The scenario consists of a room in which some objects are spread (Figure 1, left). Initially, the robots (Figure 1, right) explore the environment until their RGB-D devices detect objects. The robots then move towards the objects, constantly trying to identify them. Our approach assumes that these objects rest on tables or other planar surfaces; this constraint helps detect candidate objects before recognizing them. Each robot has to extract the appropriate information from the point cloud, filter out noise, and correctly segment the objects. We use RANSAC (RAndom SAmple Consensus) [1] to identify planes and the Viewpoint Feature Histogram (VFH) to compute a multidimensional descriptor (a feature vector with 308 elements) that characterizes each object. Initially, the robots are completely unaware of the objects' positions. They therefore cannot plan the best way to distribute themselves cooperatively in the environment, but they can model the influence of other robots as repulsive potential fields, in a purely reactive, though collaborative, way. Although a navigation approach based on RGB-D data is intended, we will initially employ a ceiling camera (also used in [2]) that provides a global view of the environment.
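The abstract's plane-extraction step (RANSAC on the point cloud to find the table surface, leaving the off-plane points as object candidates) can be illustrated with a minimal sketch. This is not the paper's implementation (which would typically use a point-cloud library); the function name, parameters, and synthetic data below are hypothetical, chosen only to show the sample-score-keep-best loop on a table-plus-objects cloud.

```python
import numpy as np

def ransac_plane(points, iters=200, thresh=0.02, rng=None):
    """Minimal RANSAC plane fit: returns (normal, d, inlier_mask)
    for the plane n.x + d = 0 with the most inliers."""
    rng = np.random.default_rng(rng)
    best_mask, best_model = None, None
    for _ in range(iters):
        # 1) sample 3 distinct points and form a candidate plane
        p0, p1, p2 = points[rng.choice(len(points), size=3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-9:            # degenerate (collinear) sample
            continue
        n /= norm
        d = -n.dot(p0)
        # 2) count points within the distance threshold of the plane
        mask = np.abs(points @ n + d) < thresh
        if best_mask is None or mask.sum() > best_mask.sum():
            best_mask, best_model = mask, (n, d)
    return best_model[0], best_model[1], best_mask

# synthetic cloud: a noisy table top near z = 0 plus off-plane "object" points
rng = np.random.default_rng(0)
table = np.column_stack([rng.uniform(-1, 1, (200, 2)),
                         rng.normal(0, 0.005, 200)])
objects = rng.uniform(0.1, 0.3, (20, 3))
cloud = np.vstack([table, objects])

n, d, inliers = ransac_plane(cloud, rng=0)
candidates = cloud[~inliers]   # off-plane points: potential objects to recognize
```

The surviving off-plane clusters are what a descriptor such as VFH would then characterize for recognition; the 308-element VFH vector itself is computed from surface normals and viewpoint direction and is omitted here.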
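The reactive dispersion strategy, treating teammates as repulsive potential fields, can also be sketched. This is an illustrative stand-in, not the paper's controller: the function name, gain, and influence radius are assumptions, using the common form where repulsion grows as separation shrinks and vanishes beyond an influence distance.

```python
import numpy as np

def repulsive_velocity(robot, others, influence=1.0, gain=0.5):
    """Sum repulsive contributions from teammates within `influence` metres.

    Each nearby teammate pushes the robot directly away, with magnitude
    growing as the separation shrinks (classic 1/d - 1/d0 potential form)."""
    v = np.zeros(2)
    for other in others:
        diff = robot - other             # vector pointing away from teammate
        d = np.linalg.norm(diff)
        if 0 < d < influence:            # teammates farther away exert no force
            v += gain * (1.0 / d - 1.0 / influence) * (diff / d**2)
    return v

me = np.array([0.0, 0.0])
mates = [np.array([0.5, 0.0]),           # close: repels me along -x
         np.array([0.0, 2.0])]           # beyond influence radius: ignored
v = repulsive_velocity(me, mates)
```

Because each robot only reads its teammates' current positions, the behaviour stays purely reactive, yet the team spreads out collaboratively, matching the abstract's description.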
