Conference on Empirical Methods in Natural Language Processing

Speeding up Reinforcement Learning-based Information Extraction Training using Asynchronous Methods



Abstract

RLIE-DQN is a recently proposed Reinforcement Learning-based Information Extraction (IE) technique which is able to incorporate external evidence during the extraction process. RLIE-DQN trains a single agent sequentially, training on one instance at a time. This results in a significant training slowdown, which is undesirable. We leverage recent advances in parallel RL training using asynchronous methods and propose RLIE-A3C. RLIE-A3C trains multiple agents in parallel and is able to achieve up to a 6x training speedup over RLIE-DQN, while suffering no loss in average accuracy.
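The abstract describes asynchronous parallel training in the A3C style: several agents process different instances at the same time and update shared parameters without waiting for one another. The sketch below is only a rough illustration of that idea under toy assumptions, not the authors' actual RLIE-A3C implementation; the worker count, feature sizes, reward, and update rule are all hypothetical placeholders.

```python
# Minimal sketch of asynchronous parallel policy training (A3C-style):
# worker threads each run their own toy episodes and apply policy-gradient
# updates to a shared parameter array without synchronization (Hogwild-style).
import threading
import numpy as np

N_WORKERS = 4          # number of parallel agents (hypothetical)
N_FEATURES = 8         # toy state-feature dimension
N_ACTIONS = 3          # toy action space (e.g., accept / reject / query)
STEPS_PER_WORKER = 250
LEARNING_RATE = 0.01

# Shared policy parameters, updated asynchronously by all workers.
shared_weights = np.zeros((N_FEATURES, N_ACTIONS))

def softmax(x):
    z = x - x.max()
    e = np.exp(z)
    return e / e.sum()

def worker(seed: int) -> None:
    rng = np.random.default_rng(seed)
    for _ in range(STEPS_PER_WORKER):
        # Each agent draws its own instance (stand-in for an IE episode).
        state = rng.normal(size=N_FEATURES)
        probs = softmax(state @ shared_weights)
        action = rng.choice(N_ACTIONS, p=probs)
        # Toy reward: pretend action 0 is correct when the state mean is positive.
        reward = 1.0 if (action == 0) == (state.mean() > 0) else -1.0
        # Policy-gradient step: grad of log pi(action|state) scaled by reward.
        grad_logits = -probs
        grad_logits[action] += 1.0
        grad = np.outer(state, grad_logits) * reward
        # Asynchronous update: no lock, workers interleave freely.
        shared_weights += LEARNING_RATE * grad

threads = [threading.Thread(target=worker, args=(i,)) for i in range(N_WORKERS)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print("trained shared weights:\n", shared_weights)
```

The speedup in the paper comes from this kind of parallelism: because each worker trains on its own instances and pushes updates immediately, training throughput scales with the number of workers instead of being bound to one instance at a time as in RLIE-DQN.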
