Robotica

Self-reproduction for articulated behaviors with dual humanoid robots using on-line decision tree classification

Abstract

We have proposed a new repetition framework for vision-based behavior imitation by a sequence of multiple humanoid robots, introducing an on-line method for delimiting a time-varying context. This novel approach investigates the ability of a robot "student" to observe and imitate a behavior from a "teacher" robot; the student later changes roles to become the "teacher" for a naive robot. For the many robots that already use video acquisition systems for their real-world tasks, this method eliminates the need for additional communication capabilities and complicated interfaces. This can reduce human intervention requirements and thus enhance the robots' practical usefulness outside the laboratory. Articulated motions are modeled in a three-layer method and registered as learned behaviors using color-based landmarks. Behaviors were identified on-line after each iteration by inducing a decision tree from the visually acquired data. Error accumulated over time, creating a context drift for behavior identification. In addition, identification and transmission of behaviors can occur between robots with differing, dynamically changing configurations. ITI, an on-line decision tree inducer in the C4.5 family, performed well for data that were similar in time and configuration to the training data, but the greedily chosen attributes were not optimized for resistance to accumulating error or configuration changes. Our novel algorithm, OLDEX, identified context changes on-line, as well as the amount of drift that could be tolerated before compensation was required. OLDEX can thus identify time and configuration contexts for the behavior data. This improved on previous methods, which either separated contexts off-line, or could not separate the slowly time-varying context into distinct regions at all. The results demonstrated the feasibility, usefulness, and potential of our unique idea for behavioral repetition and a propagating learning scheme.
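The core idea of detecting when accumulated error has drifted a classifier out of its training context can be sketched with a sliding window over recent prediction errors. This is only a minimal illustration, not the OLDEX algorithm itself; the class and parameter names (`DriftMonitor`, `window`, `drift_threshold`) are hypothetical, and a real system would pair such monitoring with an incremental tree inducer such as ITI.

```python
from collections import deque

class DriftMonitor:
    """Flag a context change when the recent error rate of an on-line
    classifier exceeds a tolerated drift threshold (illustrative sketch)."""

    def __init__(self, window=20, drift_threshold=0.4):
        # Fixed-size window of recent outcomes: 0 = correct, 1 = error.
        self.errors = deque(maxlen=window)
        self.drift_threshold = drift_threshold

    def update(self, predicted, actual):
        """Record one prediction; return True if drift is detected."""
        self.errors.append(0 if predicted == actual else 1)
        # Only judge drift once the window has filled with observations.
        if len(self.errors) == self.errors.maxlen:
            return sum(self.errors) / len(self.errors) > self.drift_threshold
        return False
```

When the monitor signals drift, the classifier would be retrained (or a new context region opened) on recent observations, which is the compensation step the abstract alludes to.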
