
Reinforcement Learning Transfer Based on Subgoal Discovery and Subtask Similarity


Abstract

This paper studies the problem of transfer learning in the context of reinforcement learning. We propose a novel transfer learning method that can speed up reinforcement learning with the aid of previously learned tasks. Before performing extensive learning episodes, our method first analyzes the learning task through limited exploration in the environment, and then attempts to reuse previous learning experience whenever it is possible and appropriate. Specifically, the proposed method consists of four stages: 1) subgoal discovery, 2) option construction, 3) similarity searching, and 4) option reusing. In particular, to identify similar options, we propose a novel similarity measure between options, built on the intuition that similar options have similar state-action probabilities. We evaluate our algorithm in extensive experiments against existing methods. The results show that our method outperforms both conventional non-transfer reinforcement learning algorithms and existing transfer learning methods by a wide margin.
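As a rough illustration of the intuition that similar options induce similar state-action probabilities, the sketch below compares two options by the total variation distance between their per-state action distributions, averaged over the states both options cover. This is only an assumed formulation for illustration, not the similarity measure defined in the paper; the names `Option` and `option_similarity` are hypothetical.

```python
# Hypothetical sketch: compare two options by how closely their per-state
# action distributions agree. Not the paper's exact measure.
import numpy as np


class Option:
    """An option summarized by its per-state action probabilities."""

    def __init__(self, action_probs):
        # action_probs: dict mapping state -> np.ndarray of action probabilities
        self.action_probs = action_probs


def option_similarity(o1, o2):
    """Average agreement of action distributions over states covered by both options.

    Per-state agreement is 1 minus the total variation distance between the two
    action distributions, so the score lies in [0, 1]; 1 means identical
    behaviour on all shared states.
    """
    shared = set(o1.action_probs) & set(o2.action_probs)
    if not shared:
        return 0.0
    scores = [1.0 - 0.5 * np.abs(o1.action_probs[s] - o2.action_probs[s]).sum()
              for s in shared]
    return float(np.mean(scores))


if __name__ == "__main__":
    # Two options over three states and two actions; they agree on state 0,
    # nearly agree on state 1, and disagree on state 2.
    a = Option({0: np.array([0.9, 0.1]), 1: np.array([0.5, 0.5]), 2: np.array([0.2, 0.8])})
    b = Option({0: np.array([0.9, 0.1]), 1: np.array([0.6, 0.4]), 2: np.array([0.8, 0.2])})
    print(option_similarity(a, b))
```

Under this assumed measure, a score near 1 would indicate that an option learned in a previous task behaves almost identically on the shared states and is therefore a candidate for reuse.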
