A Multiview Approach to Learning Articulated Motion Models

International Symposium on Robotics Research

Abstract

As robots move off factory floors and into our homes and workplaces, they face the challenge of interacting with the articulated objects frequently found in environments built by and for humans (e.g., drawers, ovens, refrigerators, and faucets). Typically, this interaction is predefined in the form of a manipulation policy that must be (manually) specified for each object that the robot is expected to interact with. Such an approach may be reasonable for robots that interact with a small number of objects, but human environments contain a large number of diverse objects. In an effort to improve efficiency and generalizability, recent work employs visual demonstrations to learn representations that describe the motion of an object's parts in the form of kinematic models expressing the rotational, prismatic, and rigid relationships between those parts. These structured object-relative models, which constrain the object's motion manifold, are suitable for trajectory controllers, provide a common representation amenable to transfer between objects, and allow for manipulation policies that are more efficient and deliberate than reactive policies (Fig. 1). However, such visual demonstrations may be too time-consuming to provide or may not be readily available, such as when a user is remotely commanding a robot over a bandwidth-limited channel (e.g., for disaster relief). Further, relying solely on vision makes these methods sensitive to common errors in data association, object segmentation, and tracking (e.g., tracking features over time and associating them with the correct object part) that arise from clutter, occlusions, and a dearth of visual features. Consequently, most existing systems require that scenes be free of distractors and that object parts be labeled with fiducial markers.
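To make the representation concrete, the following is a minimal sketch, in Python, of the kind of kinematic model the abstract describes: a set of object parts connected by rigid, prismatic, or rotational joints, each joint constraining the child part's motion relative to its parent. All names, fields, and parameters here are illustrative assumptions, not the paper's implementation.

```python
from dataclasses import dataclass, field
from typing import Optional

import numpy as np

# Joint types named in the abstract: rigid (0 DoF), prismatic (1 DoF
# translation along an axis), and rotational (1 DoF rotation about an axis).
JOINT_TYPES = ("rigid", "prismatic", "rotational")


@dataclass
class Joint:
    """Hypothetical edge in a kinematic graph between two object parts."""
    parent: str                          # e.g., "cabinet_frame"
    child: str                           # e.g., "drawer"
    joint_type: str                      # one of JOINT_TYPES
    axis: Optional[np.ndarray] = None    # unit axis (prismatic/rotational only)
    origin: Optional[np.ndarray] = None  # a point on the axis (rotational only)

    def dof(self) -> int:
        # A rigid joint fixes the child to the parent; prismatic and
        # rotational joints each contribute one degree of freedom.
        return 0 if self.joint_type == "rigid" else 1


@dataclass
class KinematicModel:
    """Hypothetical object-relative model: parts plus the joints linking them."""
    parts: list[str]
    joints: list[Joint] = field(default_factory=list)

    def dof(self) -> int:
        # Total dimensionality of the object's constrained motion manifold.
        return sum(j.dof() for j in self.joints)


# Example: a cabinet whose drawer slides along x (prismatic) and whose
# door swings about a vertical hinge (rotational).
cabinet = KinematicModel(
    parts=["frame", "drawer", "door"],
    joints=[
        Joint("frame", "drawer", "prismatic", axis=np.array([1.0, 0.0, 0.0])),
        Joint("frame", "door", "rotational",
              axis=np.array([0.0, 0.0, 1.0]),
              origin=np.array([0.4, 0.3, 0.0])),
    ],
)

assert cabinet.dof() == 2  # one sliding drawer + one swinging door
```

A structure along these lines makes the claimed benefits tangible: the joint parameters define a low-dimensional motion manifold a trajectory controller can follow, and the same schema transfers across objects that share joint types.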
