International Conference on Culture-oriented Science and Technology

Spatial-Temporal Network for No Reference Video Quality Assessment Based on Saliency

Abstract

With the increasing use of digital video, the need for a high-performance no-reference video quality assessment (NR VQA) model is growing. Effectively modeling the properties of the human visual system (HVS) in a data-driven manner is one of the main difficulties in NR VQA. In this paper, we propose a saliency-based spatio-temporal network model. The model has two branches: a spatio-temporal branch and a saliency branch. In the spatio-temporal branch, we build a basic spatio-temporal network and use it to predict a score for spatial distortion as affected by temporal information. In the saliency branch, we extract the salient features of the current video frame and merge them with the spatial distortion features to predict a result that is more in line with human perception. Finally, the two scores are automatically weighted to obtain the score of the current video. The performance of the proposed method is verified on two databases, LIVE and CSIQ. The experimental results show that the proposed method largely conforms to subjective human perception, and the network also outperforms most current no-reference methods.
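As a rough illustration only (not the authors' implementation), the following PyTorch-style sketch shows the two-branch structure described in the abstract: a spatio-temporal branch that scores a clip, a saliency branch that combines a frame with its saliency map, and a learned weight that fuses the two scores. All module names, layer sizes, input shapes, and the sigmoid-gated fusion weight are assumptions made for illustration.

```python
import torch
import torch.nn as nn


class SpatioTemporalBranch(nn.Module):
    """Predicts a quality score from a clip of frames, shape (B, C, T, H, W)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),           # collapse T, H, W
        )
        self.head = nn.Linear(16, 1)

    def forward(self, clip):
        f = self.features(clip).flatten(1)
        return self.head(f)                    # (B, 1) spatio-temporal score


class SaliencyBranch(nn.Module):
    """Predicts a score from the current frame concatenated with its saliency map."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(4, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(16, 1)

    def forward(self, frame, saliency):
        x = torch.cat([frame, saliency], dim=1)  # (B, 3+1, H, W)
        f = self.features(x).flatten(1)
        return self.head(f)                      # (B, 1) saliency-aware score


class TwoBranchVQA(nn.Module):
    """Automatically weights the two branch scores into one video quality score."""
    def __init__(self):
        super().__init__()
        self.st_branch = SpatioTemporalBranch()
        self.sal_branch = SaliencyBranch()
        self.alpha = nn.Parameter(torch.tensor(0.5))  # learned fusion weight

    def forward(self, clip, frame, saliency):
        s_st = self.st_branch(clip)
        s_sal = self.sal_branch(frame, saliency)
        w = torch.sigmoid(self.alpha)            # keep the weight in (0, 1)
        return w * s_st + (1.0 - w) * s_sal


if __name__ == "__main__":
    model = TwoBranchVQA()
    clip = torch.randn(2, 3, 8, 64, 64)          # two eight-frame clips
    frame = torch.randn(2, 3, 64, 64)            # current frame per clip
    saliency = torch.randn(2, 1, 64, 64)         # its saliency map
    print(model(clip, frame, saliency).shape)    # torch.Size([2, 1])
```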
