Journal of Robotics and Mechatronics

Expression of Continuous State and Action Spaces for Q-Learning Using Neural Networks and CMAC

Abstract

This paper proposes a new reinforcement learning algorithm that can learn, using neural networks and CMAC, a mapping function between the high-dimensional sensors and the motors of an autonomous robot. Conventional reinforcement learning algorithms require large amounts of memory because they use lookup tables to represent high-dimensional mapping functions. Researchers have therefore tried to develop reinforcement learning algorithms that can learn such high-dimensional mapping functions without lookup tables. We apply the proposed method to an autonomous robot navigation problem and a multi-link robot arm reaching problem, and evaluate the effectiveness of the method.
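The abstract itself contains no code, but as a rough illustration of the kind of approach it describes, the following is a minimal sketch of Q-learning over a continuous state space using a CMAC (tile-coding) approximator with a small discrete action set. The tiling parameters, the CMAC class, and the env_step transition function are illustrative assumptions, not the paper's implementation; in particular, the paper also combines the CMAC with neural networks, which this sketch omits.

```python
# Minimal sketch (assumptions, not the paper's method): Q-learning with a
# CMAC / tile-coding approximator over a continuous state space and a
# small discrete action set.
import numpy as np


class CMAC:
    """Tile coding: several offset grids ("tilings") over the state space;
    Q(s, a) is the sum of one weight per tiling."""

    def __init__(self, low, high, n_actions=3, n_tilings=8, n_bins=8, rng=None):
        self.low = np.asarray(low, dtype=float)
        self.high = np.asarray(high, dtype=float)
        self.n_tilings = n_tilings
        self.n_bins = n_bins
        dim = self.low.size
        rng = rng or np.random.default_rng(0)
        # Random fractional offsets shift each tiling so the grids overlap.
        self.offsets = rng.uniform(0.0, 1.0, size=(n_tilings, dim))
        self.w = np.zeros((n_tilings, n_bins ** dim, n_actions))

    def _indices(self, s):
        # Scale the state into [0, n_bins) and apply each tiling's offset.
        scaled = (np.asarray(s, dtype=float) - self.low) / (self.high - self.low) * self.n_bins
        idx = []
        for t in range(self.n_tilings):
            cell = np.clip((scaled + self.offsets[t]).astype(int), 0, self.n_bins - 1)
            flat = 0
            for c in cell:  # flatten per-dimension cell coordinates to one index
                flat = flat * self.n_bins + int(c)
            idx.append(flat)
        return idx

    def q(self, s):
        """Q-value vector over all actions for state s."""
        return sum(self.w[t, i] for t, i in enumerate(self._indices(s)))

    def update(self, s, a, target, alpha):
        """Move Q(s, a) toward the TD target, splitting the step across tilings."""
        err = target - self.q(s)[a]
        for t, i in enumerate(self._indices(s)):
            self.w[t, i, a] += alpha / self.n_tilings * err


def q_learning_episode(cmac, env_step, s0, gamma=0.99, alpha=0.1,
                       epsilon=0.1, max_steps=200, rng=None):
    """One epsilon-greedy Q-learning episode; env_step(s, a) is a user-supplied
    transition function returning (s_next, reward, done)."""
    rng = rng or np.random.default_rng()
    s = s0
    for _ in range(max_steps):
        q = cmac.q(s)
        a = int(rng.integers(len(q))) if rng.random() < epsilon else int(np.argmax(q))
        s_next, r, done = env_step(s, a)
        target = r if done else r + gamma * np.max(cmac.q(s_next))
        cmac.update(s, a, target, alpha)
        s = s_next
        if done:
            break


# Example wiring (hypothetical 2-D task):
# cmac = CMAC(low=[0.0, 0.0], high=[1.0, 1.0], n_actions=3)
# q_learning_episode(cmac, my_env_step, s0=np.array([0.5, 0.5]))
```

Compared with a single lookup table, the overlapping tilings share weights across nearby states, which is one way a function approximator can reduce the memory cost that the abstract attributes to table-based methods.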
