International Conference on Optoelectronic Imaging and Multimedia Technology
No-reference video quality assessment based on spatiotemporal slice images and deep convolutional neural networks



Abstract

Most learning-based no-reference (NR) video quality assessment (VQA) methods need to be trained with a large number of subjective quality scores. However, it is currently difficult to obtain a large volume of subjective scores for videos. Inspired by the success of full-reference VQA methods based on spatiotemporal slice (STS) images in extracting perceptual features and evaluating video quality, this paper adopts multi-directional video STS images, which are images composed of multi-directional sections of video data, to deal with the lack of subjective quality scores. By sampling the STS images of a video into image patches and adding noise to the quality labels of the patches, a successful NR VQA model based on multi-directional STS images and neural network training is proposed. Specifically, first, we select the subjective database that currently contains the largest number of real-distortion videos as the test set. Second, we perform multi-directional STS extraction on the videos and sample local patches from the multi-directional STS images to augment the training sample set. In addition, we add some noise to the quality labels of the local patches. Third, a reasonable deep neural network is constructed and trained to obtain a local quality prediction model for each patch in an STS image, and the quality of an entire video is then obtained by averaging the model's predictions over the multi-directional STS images. Finally, the experimental results indicate that the proposed method tackles the insufficiency of training samples in small subjective VQA datasets and obtains a high correlation with the subjective evaluation.
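As a rough illustration of the pipeline the abstract describes (not the authors' code), the sketch below extracts horizontal, vertical, and diagonal spatiotemporal slices from a grayscale video tensor and crops them into training patches. The function names, slice positions, and patch size are assumptions chosen for clarity.

```python
import numpy as np

def extract_sts(video, row=None, col=None):
    """Extract multi-directional spatiotemporal slice (STS) images.

    video: array of shape (T, H, W) -- T grayscale frames of H x W pixels.
    Returns a horizontal STS (T x W, a fixed row over time), a vertical
    STS (T x H, a fixed column over time), and a diagonal STS, loosely
    mirroring the paper's multi-directional sections of the video data.
    """
    T, H, W = video.shape
    row = H // 2 if row is None else row   # assumed: central row/column
    col = W // 2 if col is None else col
    h_sts = video[:, row, :]               # time x width image
    v_sts = video[:, :, col]               # time x height image
    d = min(H, W)
    diag_sts = video[:, np.arange(d), np.arange(d)]  # main diagonal
    return h_sts, v_sts, diag_sts

def sample_patches(sts, patch=32, stride=32):
    """Crop non-overlapping local patches from one STS image,
    augmenting the training sample set as the abstract outlines."""
    h, w = sts.shape
    return [sts[i:i + patch, j:j + patch]
            for i in range(0, h - patch + 1, stride)
            for j in range(0, w - patch + 1, stride)]
```

Each patch would then inherit the video's subjective score perturbed with a small amount of noise as its training label, and at test time the per-patch predictions over all directional STS images are averaged into one video-level score.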
