Journal of Nanjing University of Information Science & Technology

Automatic Video Description Generation Driven by Relation Mining


Abstract

Video description has received increased interest in the field of computer vision. Generating video descriptions requires natural language processing techniques, together with the capacity to handle variable lengths for both the input (a sequence of video frames) and the output (a sequence of description words). To this end, this paper draws on recent advances in machine translation and designs a two-layer LSTM (Long Short-Term Memory) model based on the encoder-decoder architecture. Since deep neural networks can learn appropriate representations of input data, we extract feature vectors from the video frames with a convolutional neural network (CNN) and take them as the input sequence of the LSTM model. Finally, we compare the influence of different feature extraction methods on the two-layer LSTM video description model. The results show that the proposed model is able to learn sequence knowledge and transform it into a natural language representation.
