IEEE International Conference on Multimedia & Expo Workshops (ICMEW)

Full-Reference and No-Reference Quality Assessment for Compressed User-Generated Content Videos

Abstract

With the development of video capture devices and network technology, recent years have witnessed an exponential increase of user-generated content (UGC) videos on various sharing platforms. Compared to professionally generated content (PGC) videos, UGC videos are generally captured by amateurs using smartphone cameras in everyday scenes and contain various in-capture distortions. Moreover, these videos undergo multiple processing stages that may affect their perceptual quality before they are finally viewed by end-users. Such complex and diverse distortion types make objective quality assessment difficult. In this paper, we present a data-driven video quality assessment (VQA) method for UGC videos based on a convolutional neural network (CNN) and a Transformer. Specifically, the CNN backbone extracts features from individual frames, and its output is fed to the Transformer encoder to predict a quality score. With slight adaptations, the proposed method can be used for both full-reference (FR) and no-reference (NR) VQA. Our method ranks first in the MOS track and second in the DMOS track of the challenge on quality assessment of compressed UGC videos [1].