Annual Meeting of the Association for Computational Linguistics

Task Refinement Learning for Improved Accuracy and Stability of Unsupervised Domain Adaptation



Abstract

Pivot Based Language Modeling (PBLM) (Ziser and Reichart, 2018a), combining LSTMs with pivot-based methods, has yielded significant progress in unsupervised domain adaptation. However, this approach is still challenged by the need to solve a large pivot detection problem, and by the inherent instability of LSTMs. In this paper we propose a Task Refinement Learning (TRL) approach to address these problems. Our algorithms iteratively train the PBLM model, gradually increasing the information exposed about each pivot. TRL-PBLM achieves state-of-the-art accuracy in six domain adaptation setups for sentiment classification. Moreover, it is much more stable than plain PBLM across model configurations, making the model much better fitted for practical use.
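To make the refinement idea concrete, below is a minimal sketch of one way the training targets could be coarsened and then refined across phases, assuming that "increasing the information exposed about each pivot" means an early phase only reveals a binary pivot indicator while a later phase reveals the pivot's identity (the full PBLM objective). The pivot set, the two-phase schedule, and the function names here are illustrative assumptions for this sketch, not the authors' exact recipe.

# Hypothetical sketch of a TRL-style refinement schedule for PBLM targets.
# Assumption: phase 0 exposes only whether the next word is a pivot;
# phase 1 exposes the pivot's identity (the standard PBLM prediction task).

PIVOTS = {"great", "terrible", "boring", "excellent"}  # toy pivot feature set
NON_PIVOT, IS_PIVOT = "NONE", "PIVOT"


def refined_targets(tokens, phase):
    """Next-word prediction targets for one refinement phase.

    phase 0: reveal only a binary pivot indicator (coarse task).
    phase 1: reveal the pivot's identity (full PBLM task).
    """
    targets = []
    for tok in tokens:
        if tok not in PIVOTS:
            targets.append(NON_PIVOT)
        elif phase == 0:
            targets.append(IS_PIVOT)
        else:
            targets.append(tok)
    return targets


if __name__ == "__main__":
    review = ["the", "plot", "was", "boring", "but", "the", "cast",
              "was", "excellent"]
    for phase in range(2):
        # In a TRL-PBLM setup, each phase would train the LSTM on these
        # targets, warm-starting from the previous, coarser phase's weights.
        print(f"phase {phase}:", refined_targets(review, phase))

Under this reading, each phase solves an easier version of the pivot prediction problem and hands its parameters to the next, which is one plausible way to obtain the accuracy and stability gains the abstract reports.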
