IEEE Transactions on Knowledge and Data Engineering

Read, Watch, Listen, and Summarize: Multi-Modal Summarization for Asynchronous Text, Image, Audio and Video


Abstract

Automatic text summarization is a fundamental natural language processing (NLP) application that aims to condense a source text into a shorter version. The rapid increase in multimedia data transmission over the Internet necessitates multi-modal summarization (MMS) from asynchronous collections of text, image, audio, and video. In this work, we propose an extractive MMS method that unites techniques from NLP, speech processing, and computer vision to exploit the rich information contained in multi-modal data and to improve the quality of multimedia news summarization. The key idea is to bridge the semantic gaps between multi-modal content. Audio and visual signals are the main modalities in video. For audio information, we design an approach that selectively uses its transcription and infers the salience of the transcription from the audio signal. For visual information, we learn joint representations of text and images using a neural network. We then capture the coverage of important visual information in the generated summary through text-image matching or multi-modal topic modeling. Finally, all the multi-modal aspects are combined to generate a textual summary that maximizes salience, non-redundancy, readability, and coverage through the budgeted optimization of submodular functions. We further introduce a publicly available MMS corpus in English and Chinese. The experimental results obtained on our dataset demonstrate that our methods based on the image-matching and image-topic frameworks outperform other competitive baseline methods.
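The final selection step described in the abstract can be illustrated with a minimal sketch of budgeted greedy maximization of a submodular objective. The salience values, coverage sets, and sentence costs below are hypothetical stand-ins, not the paper's actual scoring functions, and the objective is a simplified salience-plus-coverage surrogate.

```python
# Hypothetical sketch of budgeted submodular summary selection: greedily add
# the sentence with the best cost-scaled marginal gain until the length
# budget is exhausted. Salience scores and coverage sets are toy inputs.

def summary_score(selected, salience, coverage_sets):
    """Submodular objective: total salience plus number of covered concepts."""
    covered = set()
    for i in selected:
        covered |= coverage_sets[i]
    return sum(salience[i] for i in selected) + len(covered)

def greedy_budgeted_summary(sentences, salience, coverage_sets, costs, budget):
    """Greedy maximization with cost-scaled marginal gains under a budget."""
    selected, spent = [], 0
    remaining = set(range(len(sentences)))
    while remaining:
        base = summary_score(selected, salience, coverage_sets)
        best, best_ratio = None, 0.0
        for i in remaining:
            if spent + costs[i] > budget:
                continue  # sentence no longer fits in the budget
            gain = summary_score(selected + [i], salience, coverage_sets) - base
            ratio = gain / costs[i]
            if ratio > best_ratio:
                best, best_ratio = i, ratio
        if best is None:
            break  # nothing affordable adds positive gain
        selected.append(best)
        spent += costs[best]
        remaining.remove(best)
    return [sentences[i] for i in selected]

sents = ["Floods hit the city.", "Rescue teams arrived.", "Weather was sunny last week."]
sal = [0.9, 0.7, 0.1]
cov = [{"flood", "city"}, {"rescue", "city"}, {"weather"}]
costs = [4, 3, 5]
print(greedy_budgeted_summary(sents, sal, cov, costs, budget=8))
# → ['Rescue teams arrived.', 'Floods hit the city.']
```

Scaling each marginal gain by the sentence's cost is the standard trick for budgeted submodular maximization; it prevents one long, high-salience sentence from crowding out several short sentences that jointly cover more content.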

Record details

  • Source
  • Author affiliations

    Chinese Acad Sci, Inst Automat, Natl Lab Pattern Recognit, Beijing 100190, Peoples R China|Univ Chinese Acad Sci, Beijing 100190, Peoples R China;

    Chinese Acad Sci, Natl Lab Pattern Recognit, Inst Automat, Beijing 100864, Peoples R China|Chinese Acad Sci, CAS Ctr Excellence Brain Sci & Intelligence Techn, Beijing 100864, Peoples R China|Univ Chinese Acad Sci, Beijing 100049, Peoples R China;

  • Indexing information
  • Original format: PDF
  • Language: eng
  • CLC classification
  • Keywords

    Summarization; multimedia; multi-modal; cross-modal; natural language processing; computer vision;


