International Journal of Machine Learning and Cybernetics

Multi-agent reinforcement learning for redundant robot control in task-space


Abstract

Task-space control requires an inverse kinematics solution or the Jacobian matrix to transform from task space to joint space. However, these are not always available for redundant robots, which have more joint degrees-of-freedom than Cartesian degrees-of-freedom. Intelligent learning methods such as neural networks (NN) and reinforcement learning (RL) can learn the inverse kinematics solution, but NNs need large amounts of training data, and classical RL is not suitable for multi-link robots controlled in task space. In this paper, we propose a fully cooperative multi-agent reinforcement learning (MARL) method to solve the kinematic problem of redundant robots. Each joint of the robot is regarded as one agent. The fully cooperative MARL uses kinematic learning to avoid function approximators and a large learning space. The convergence property of the proposed MARL is analyzed. Experimental results show that our MARL performs much better than classic methods such as Jacobian-based methods and neural networks.
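The agent-per-joint idea from the abstract can be illustrated with a minimal sketch: a planar 3-link arm (3 joint DOF vs. 2 Cartesian DOF, hence redundant) where each joint is an agent choosing among a few discrete actions to minimize a shared task-space error, with no Jacobian and no function approximator. The link lengths, action set, and step schedule below are illustrative assumptions; the paper's actual MARL algorithm and its convergence analysis are not reproduced here.

```python
import math

# Illustrative planar 3-link arm: 3 joint DOF vs. 2 task-space DOF (redundant).
# Link lengths are assumptions, not values from the paper.
LINKS = [1.0, 0.8, 0.6]

def fk(thetas):
    """Forward kinematics: task-space (x, y) of the end-effector."""
    x = y = acc = 0.0
    for length, theta in zip(LINKS, thetas):
        acc += theta
        x += length * math.cos(acc)
        y += length * math.sin(acc)
    return x, y

def task_error(thetas, target):
    """Shared team cost: Cartesian distance from end-effector to target."""
    x, y = fk(thetas)
    return math.hypot(x - target[0], y - target[1])

def cooperative_step(thetas, target, step):
    """One round of sequential best response: each joint-agent picks the
    action in {-step, 0, +step} that most reduces the shared error."""
    new = list(thetas)
    for i in range(len(new)):  # one agent per joint
        best = min(
            (-step, 0.0, step),
            key=lambda a: task_error(new[:i] + [new[i] + a] + new[i + 1:], target),
        )
        new[i] += best
    return new

def solve(target, iters=2000, tol=1e-3):
    """Cooperative kinematic search toward a reachable task-space target."""
    thetas = [0.1, 0.1, 0.1]
    step = 0.1
    for _ in range(iters):
        new = cooperative_step(thetas, target, step)
        if task_error(new, target) >= task_error(thetas, target):
            step *= 0.5  # refine the action resolution when the team stalls
            if step < 1e-6:
                break
        thetas = new
        if task_error(thetas, target) < tol:
            break
    return thetas
```

Note that no agent ever forms a Jacobian or fits a model: each queries only the shared kinematic cost, which mirrors the abstract's point about avoiding function approximators and a large joint learning space.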

