International Work-Conference on the Interplay between Natural and Artificial Computation

Neural Modeling of Hose Dynamics to Speedup Reinforcement Learning Experiments



Abstract

Two main practical problems arise when dealing with autonomous learning of the control of Linked Multi-Component Robotic Systems (L-MCRS) with Reinforcement Learning (RL): time and space consumption, which stem from the convergence conditions of the applied RL algorithm (Q-Learning) and from the complexity of the system model. Approximating the model's response allows the RL experiments to be carried out faster. We have used a multivariate regression approximation model based on Artificial Neural Networks (ANN), which achieves time and space savings of 90% and 27%, respectively, compared to the conventional Geometrically Exact Dynamic Splines (GEDS) model.
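The following is a minimal sketch, not the authors' implementation, of the general idea the abstract describes: fitting an ANN multivariate regressor on transitions sampled from an exact (and slow) dynamics model, then using the fast surrogate inside a tabular Q-Learning loop. All names (geds_step, surrogate_step, state/action dimensions, reward) are illustrative assumptions.

```python
# Hypothetical sketch: ANN surrogate of the hose dynamics used inside Q-Learning.
import numpy as np
from sklearn.neural_network import MLPRegressor

STATE_DIM, ACTION_DIM = 6, 2  # assumed dimensions, for illustration only

def geds_step(state, action):
    """Stand-in for the expensive GEDS hose simulator (placeholder dynamics)."""
    pad = np.zeros(STATE_DIM - ACTION_DIM)
    return state + 0.1 * np.tanh(np.concatenate([action, pad]))

# 1) Collect (state, action) -> next_state samples from the exact model.
rng = np.random.default_rng(0)
X, Y = [], []
for _ in range(5000):
    s = rng.uniform(-1, 1, STATE_DIM)
    a = rng.uniform(-1, 1, ACTION_DIM)
    X.append(np.concatenate([s, a]))
    Y.append(geds_step(s, a))
X, Y = np.array(X), np.array(Y)

# 2) Fit a multivariate ANN regressor as the surrogate dynamics model.
surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
surrogate.fit(X, Y)

def surrogate_step(state, action):
    """Fast approximate transition used in place of the exact simulator."""
    return surrogate.predict(np.concatenate([state, action]).reshape(1, -1))[0]

# 3) Tabular Q-Learning over a discretised state space, rolling out with the surrogate.
actions = [np.array([dx, dy]) for dx in (-1.0, 0.0, 1.0) for dy in (-1.0, 0.0, 1.0)]
Q = {}

def discretise(state, bins=5):
    return tuple(np.digitize(state, np.linspace(-1, 1, bins)))

alpha, gamma, eps = 0.1, 0.95, 0.2
state = rng.uniform(-1, 1, STATE_DIM)
for step in range(200):
    row = Q.setdefault(discretise(state), np.zeros(len(actions)))
    a_idx = rng.integers(len(actions)) if rng.random() < eps else int(np.argmax(row))
    next_state = surrogate_step(state, actions[a_idx])   # cheap ANN rollout
    reward = -np.linalg.norm(next_state)                 # toy reward, for illustration
    next_row = Q.setdefault(discretise(next_state), np.zeros(len(actions)))
    row[a_idx] += alpha * (reward + gamma * next_row.max() - row[a_idx])
    state = next_state
```

The speedup reported in the abstract comes from replacing the exact GEDS evaluation in step 3 with the trained regressor; the one-off cost of generating training data and fitting the network is amortised over the many transitions a Q-Learning experiment requires.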
