Neurocomputing

Stereoscopic video quality assessment based on 3D convolutional neural networks


Abstract

Research on stereoscopic video quality assessment (SVQA) plays an important role in promoting the development of stereoscopic video systems. Existing SVQA metrics rely on hand-crafted features, which are inaccurate and time-consuming to design because of the diversity and complexity of stereoscopic video distortions. This paper introduces a 3D convolutional neural network (CNN) based SVQA framework that can model not only local spatio-temporal information but also global temporal information, with cubic difference video patches as input. First, instead of using hand-crafted features, we design a 3D CNN architecture to automatically and effectively capture local spatio-temporal features. Then we employ a quality score fusion strategy that considers global temporal clues to obtain the final video-level predicted score. Extensive experiments conducted on two public stereoscopic video quality datasets show that the proposed method correlates highly with human perception and outperforms state-of-the-art methods by a large margin. We also show that our 3D CNN features have more desirable properties for SVQA than the hand-crafted features used in previous methods, and that combining our 3D CNN features with support vector regression (SVR) can further boost performance. In addition, requiring no complex preprocessing and no GPU acceleration, our proposed method is shown to be computationally efficient and easy to use. (C) 2018 Elsevier B.V. All rights reserved.
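To illustrate the two ideas the abstract names, the sketch below builds "cubic difference video patches" (cubes stacked from differences between consecutive frames) and fuses patch-level scores into a video-level score. This is a minimal illustrative sketch in plain Python: the function names, the toy pixel grids, and the simple mean-fusion rule are assumptions for demonstration, not the authors' exact architecture or fusion strategy.

```python
def difference_cubes(frames, depth):
    """Build cubes of `depth` consecutive frame-difference maps.

    `frames` is a list of 2D grids (lists of lists of pixel values).
    Each cube is a list of `depth` difference maps, one per frame pair,
    mimicking the temporal stacking behind cubic difference patches.
    """
    # Difference map between each pair of consecutive frames.
    diffs = [
        [[b - a for a, b in zip(row_a, row_b)]
         for row_a, row_b in zip(f0, f1)]
        for f0, f1 in zip(frames, frames[1:])
    ]
    # Slide a temporal window of length `depth` over the difference maps.
    return [diffs[i:i + depth] for i in range(len(diffs) - depth + 1)]


def fuse_scores(patch_scores):
    """Toy temporal fusion: average the patch-level predicted scores."""
    return sum(patch_scores) / len(patch_scores)


# Toy 2x2 video with four frames of increasing brightness.
frames = [
    [[0, 0], [0, 0]],
    [[1, 1], [1, 1]],
    [[3, 3], [3, 3]],
    [[6, 6], [6, 6]],
]
cubes = difference_cubes(frames, depth=2)  # 3 difference maps -> 2 cubes
print(len(cubes))                          # 2
print(cubes[0][0][0][0])                   # first diff value: 1 - 0 = 1
print(fuse_scores([4.0, 4.5, 5.0]))        # 4.5
```

In the paper's pipeline each cube would be scored by the 3D CNN rather than hand-labeled; the fusion step here simply averages, whereas the paper's strategy additionally weighs global temporal clues.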

