IEEE Robotics and Automation Letters

Efficient Robotic Object Search Via HIEM: Hierarchical Policy Learning With Intrinsic-Extrinsic Modeling



Abstract

Although its significant success at endowing robots with autonomous behaviors makes deep reinforcement learning a promising approach to the robotic object search task, it suffers severely from the task's naturally sparse reward setting. To tackle this challenge, we present a novel policy learning paradigm for the object search task, based on hierarchical and interpretable modeling with an intrinsic-extrinsic reward setting. More specifically, we explore the environment efficiently through a proxy low-level policy driven by intrinsically rewarded sub-goals. We then learn our hierarchical policy from this efficient exploration experience, optimizing both the high-level and the low-level policy toward the extrinsically rewarded goal of performing the object search task well. Experiments conducted in the House3D environment validate our model and show that a robot trained with it performs the object search task in a more optimal and interpretable way.
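The intrinsic-extrinsic hierarchy described in the abstract can be illustrated with a deliberately simplified sketch. This is my own toy construction, not the paper's HIEM architecture or its House3D setup: tabular Q-learning in a one-dimensional corridor, where a high-level policy proposes sub-goal cells and is rewarded extrinsically for finding the object, while a low-level policy is rewarded intrinsically for reaching each proposed sub-goal.

```python
import random

N = 8             # corridor length (toy stand-in for a House3D scene)
GOAL = N - 1      # cell containing the target object (extrinsic goal)
ACTIONS = (-1, +1)

q_low = {}        # low-level Q: (position, sub-goal) -> {action: value}, trained on intrinsic reward
q_high = {}       # high-level Q: position -> {sub-goal: value}, trained on extrinsic reward

def eps_greedy(table, key, choices, eps):
    """Epsilon-greedy action selection over a lazily created Q-table entry."""
    vals = table.setdefault(key, {c: 0.0 for c in choices})
    if random.random() < eps:
        return random.choice(choices)
    return max(vals, key=vals.get)

def subgoals(pos):
    """Candidate sub-goal cells: any cell other than the current one."""
    return tuple(s for s in range(N) if s != pos)

def run_episode(eps=0.2, alpha=0.5, gamma=0.9, max_steps=50):
    pos, steps = 0, 0
    while pos != GOAL and steps < max_steps:
        start = pos
        sub = eps_greedy(q_high, start, subgoals(start), eps)   # high level proposes a sub-goal
        while pos != sub and steps < max_steps:                 # low level pursues it
            a = eps_greedy(q_low, (pos, sub), ACTIONS, eps)
            nxt = min(max(pos + a, 0), N - 1)
            r_int = 1.0 if nxt == sub else -0.1                 # intrinsic reward: reach the sub-goal
            nxt_best = max(q_low.setdefault((nxt, sub), {c: 0.0 for c in ACTIONS}).values())
            q_low[(pos, sub)][a] += alpha * (r_int + gamma * nxt_best - q_low[(pos, sub)][a])
            pos, steps = nxt, steps + 1
        r_ext = 1.0 if pos == GOAL else 0.0                     # extrinsic reward: object found
        nxt_best = max(q_high.setdefault(pos, {c: 0.0 for c in subgoals(pos)}).values())
        q_high[start][sub] += alpha * (r_ext + gamma * nxt_best - q_high[start][sub])
    return pos, steps

random.seed(0)
for _ in range(1000):
    run_episode()                    # intrinsically driven exploration trains both levels
pos, steps = run_episode(eps=0.0)    # greedy rollout with the learned hierarchy
```

The sub-goals make the learned behavior inspectable: after training, the high-level Q-table shows which intermediate cell the agent commits to from each position, which is a crude analogue of the interpretability the paper claims for its hierarchical policy.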
