Conference on Empirical Methods in Natural Language Processing (EMNLP)

Speeding up Reinforcement Learning-based Information Extraction Training using Asynchronous Methods



Abstract

RLIE-DQN is a recently proposed Reinforcement Learning-based Information Extraction (IE) technique that can incorporate external evidence during the extraction process. RLIE-DQN trains a single agent sequentially, on one instance at a time, which causes a significant and undesirable training slowdown. We leverage recent advances in parallel RL training using asynchronous methods and propose RLIE-A3C. RLIE-A3C trains multiple agents in parallel and achieves up to a 6x training speedup over RLIE-DQN with no loss in average accuracy.
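The core idea behind the asynchronous speedup is that several workers act and compute updates concurrently against one shared set of parameters, rather than one agent learning instance by instance. The following is a minimal sketch of that pattern, assuming a toy quadratic loss and thread-based workers; the variable names and the lock-free (Hogwild-style) update are illustrative and are not the paper's actual IE environment or A3C implementation.

```python
import random
import threading

# Shared parameter updated asynchronously by all workers.
# Toy objective: minimize 0.5 * theta^2, whose gradient is theta.
shared_theta = [5.0]
LR = 0.05


def worker(steps: int, seed: int) -> None:
    """One asynchronous learner: reads the shared parameter, computes a
    noisy gradient (mimicking stochastic rollouts), and applies the
    update without any locking."""
    rng = random.Random(seed)
    for _ in range(steps):
        grad = shared_theta[0] + rng.gauss(0.0, 0.01)
        shared_theta[0] -= LR * grad  # lock-free asynchronous update


# Launch several workers in parallel, A3C-style, instead of training
# a single agent sequentially on one instance at a time.
threads = [threading.Thread(target=worker, args=(200, s)) for s in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(abs(shared_theta[0]) < 0.1)  # prints True: workers jointly drove theta near 0
```

Because the workers' updates interleave, each sees the others' progress immediately through the shared parameter; this is the mechanism that lets the parallel agents cover more training instances per unit of wall-clock time.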
