Autonomous Robots
Environment-adaptive interaction primitives through visual context for human-robot motor skill learning

Abstract

In situations where robots must closely cooperate with human partners, considering the task alongside partner observation maintains robustness when partner behavior is erratic or ambiguous. This paper documents our approach to capturing human-robot interactive skills by combining demonstration data with additional environmental parameters derived automatically from observation of the task context, without the need for heuristic assignment; this extends the interaction primitives framework to overcome its shortcomings. These parameters reduce the partner observation period required before suitable robot motion can commence, and also enable success in cases where partner observation alone is inadequate for planning actions suited to the task. Validation in a collaborative object-covering exercise with a humanoid robot demonstrates the robustness of our environment-adaptive interaction primitives when augmented with parameters drawn directly from visual data of the task scene.
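Interaction-primitive formulations typically fit a joint Gaussian over movement-primitive weights from paired human-robot demonstrations and then condition on the partner's observed motion to infer the robot's motion. The sketch below illustrates, in that spirit, how also conditioning on environment parameters (here, features extracted from visual data of the scene) lets useful robot motion be inferred from a shorter partner observation window. It is a minimal illustration only: the function name `condition_primitive`, the stacking order of the joint vector, and all dimensions are assumptions, not details taken from the paper.

```python
import numpy as np

# Illustrative sketch: condition a joint Gaussian over
# [human primitive weights | robot primitive weights | environment
# parameters] on the entries observed so far (the partner's early
# motion plus parameters drawn from visual data of the task scene).
# Names, dimensions, and stacking order are assumptions, not the
# paper's actual implementation.

def condition_primitive(mu, Sigma, obs_idx, obs_vals, obs_noise=1e-4):
    """Return the posterior mean/covariance over the unobserved
    entries (e.g. the robot's weights) given the observed ones."""
    n = len(mu)
    unobs_idx = [i for i in range(n) if i not in obs_idx]
    mu_o, mu_u = mu[obs_idx], mu[unobs_idx]
    # Partition the covariance into observed/unobserved blocks.
    S_oo = Sigma[np.ix_(obs_idx, obs_idx)] + obs_noise * np.eye(len(obs_idx))
    S_uo = Sigma[np.ix_(unobs_idx, obs_idx)]
    S_uu = Sigma[np.ix_(unobs_idx, unobs_idx)]
    # Standard Gaussian conditioning on the observed block.
    K = S_uo @ np.linalg.inv(S_oo)
    mu_post = mu_u + K @ (np.asarray(obs_vals) - mu_o)
    Sigma_post = S_uu - K @ S_uo.T
    return mu_post, Sigma_post

# Toy usage: a joint distribution fit over 30 hypothetical demonstrations.
rng = np.random.default_rng(0)
W = rng.normal(size=(30, 12))           # each row: one demo's joint vector
mu, Sigma = W.mean(axis=0), np.cov(W, rowvar=False)
obs_idx = [0, 1, 2, 10, 11]             # early human motion + scene params
mu_robot, Sigma_robot = condition_primitive(mu, Sigma, obs_idx, W[0, obs_idx])
```

Including the environment parameters in the observed set (indices 10-11 above) tightens the posterior over the robot's weights before much of the partner's motion has been seen, which matches the abstract's claim that the visual-context parameters shorten the required observation period.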
