Journal of Basic and Applied Physics

Application of Actor-Critic Method to Mobile Robot Using State Representation Based on Probability Distributions


Abstract

In this study, I applied an actor-critic learning method to a mobile robot that uses a state representation based on distances between probability distributions. This state representation was proposed in a previous work and is insensitive to environmental changes; that is, the sensor signals map to an identical state even under certain environmental changes. The method, a reinforcement learning algorithm, can handle continuous state and action spaces. I performed a simulation and verified that the mobile robot can learn a wall-following task. I then confirmed that the trained robot can accomplish the same task when its sensors are artificially changed.
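The abstract names the ingredients of the approach: an actor-critic update over continuous state and action spaces, with the state given by distribution-distance features. The following is only a minimal, generic sketch of such an actor-critic step with a Gaussian policy and linear function approximation; the state features, dimensions, learning rates, and reward are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

# Hypothetical setup: a small continuous state vector (e.g., distances between
# probability distributions of sensor signals) and a 1-D continuous action
# (e.g., a steering command for wall following).
STATE_DIM = 3
rng = np.random.default_rng(0)

critic_w = np.zeros(STATE_DIM)    # linear state-value estimate V(s) = w . s
actor_mu_w = np.zeros(STATE_DIM)  # mean of the Gaussian policy, mu(s) = theta . s
sigma = 0.3                       # fixed exploration noise (assumed)
gamma, alpha_v, alpha_pi = 0.95, 0.05, 0.01

def value(s):
    return critic_w @ s

def act(s):
    """Sample a continuous action from the Gaussian policy N(mu(s), sigma^2)."""
    return rng.normal(actor_mu_w @ s, sigma)

def update(s, a, r, s_next, done):
    """One actor-critic step driven by the TD error."""
    global critic_w, actor_mu_w
    target = r + (0.0 if done else gamma * value(s_next))
    td_error = target - value(s)
    # Critic: move V(s) toward the TD target.
    critic_w += alpha_v * td_error * s
    # Actor: policy-gradient step; for a Gaussian policy,
    # d log pi / d mu = (a - mu) / sigma^2.
    mu = actor_mu_w @ s
    actor_mu_w += alpha_pi * td_error * (a - mu) / sigma**2 * s
    return td_error
```

In the paper's setting, s would be the distribution-distance state features computed from the sensor signals and the reward would encode wall following; both are left abstract here, and the TD error drives the critic and actor updates alike.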

