Deep reinforcement and transfer learning for abstractive text summarization: A review


Abstract

Automatic Text Summarization (ATS) is an important area of Natural Language Processing (NLP) whose goal is to condense a long text into a shorter version that conveys its most important points in readable form. ATS applications continue to evolve, drawing on increasingly effective approaches that researchers evaluate and implement. The State-of-the-Art (SotA) technologies delivering the best performance and accuracy in abstractive ATS are deep neural sequence-to-sequence models, Reinforcement Learning (RL) approaches, and Transfer Learning (TL) approaches, including Pre-Trained Language Models (PTLMs). The graph-based Transformer architecture and PTLMs have driven tremendous advances across NLP applications, and the incorporation of recent mechanisms, such as the knowledge-enhanced mechanism, has significantly improved results. This study provides a comprehensive review of research advances in abstractive text summarization over the past six years. Past and present problems are described together with their proposed solutions, and abstractive ATS datasets and evaluation measurements are also highlighted. The paper concludes by comparing the best models and discussing future research directions.
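Among the evaluation measurements the review highlights, ROUGE is the de facto standard for summarization. As a minimal illustrative sketch (not the official ROUGE toolkit — it uses plain whitespace tokenization and no stemming), unigram ROUGE-1 F1 between a candidate summary and a reference can be computed from multiset overlap:

```python
from collections import Counter

def rouge1_f1(candidate: str, reference: str) -> float:
    """ROUGE-1 F1: harmonic mean of unigram precision and recall.

    Simplified sketch: lowercased whitespace tokens, no stemming or
    stopword handling, unlike the official ROUGE implementation.
    """
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    # Counter intersection keeps the minimum count of each shared token,
    # so repeated words are only credited as often as both sides use them.
    overlap = sum((cand & ref).values())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

print(rouge1_f1("the cat sat on the mat", "the cat is on the mat"))
```

Here 5 of 6 candidate tokens match 5 of 6 reference tokens, so precision and recall are both 5/6 and the F1 is their harmonic mean.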

Bibliographic details

  • Source
    Computer Speech and Language | 2022, Issue 1 | pp. 101276.1-101276.43 | 43 pages
  • Author affiliations

    Department of Artificial Intelligence, Faculty of Computer Science and Information Technology, Universiti Malaya, 50603 Kuala Lumpur, Malaysia;

    Department of Artificial Intelligence, Faculty of Computer Science and Information Technology, Universiti Malaya, 50603 Kuala Lumpur, Malaysia;

    Department of Artificial Intelligence, Faculty of Computer Science and Information Technology, Universiti Malaya, 50603 Kuala Lumpur, Malaysia;

    Department of Computing and Cyber Security, Texas A&M San Antonio, Texas, United States;

  • Indexed in: Science Citation Index (SCI); Engineering Index (EI)
  • Original format: PDF
  • Language: English
  • Keywords

    Abstractive summarization; Sequence-to-sequence; Reinforcement learning; Pre-trained models;


