IEEE Transactions on Image Processing

Network-Based H.264/AVC Whole-Frame Loss Visibility Model and Frame Dropping Methods



Abstract

We examine the visual effect of whole-frame loss under different decoders. Whole-frame losses are introduced into H.264/AVC compressed videos, which are then decoded by two decoders employing different common concealment methods: frame copy and frame interpolation. The videos are shown to human observers, who respond to each glitch they spot. We found that about 39% of whole-frame losses of B frames are not observed by any of the subjects, and over 58% of the B frame losses are observed by 20% or fewer of the subjects. Using simple predictive features that can be calculated inside a network node, with no access to the original video and no pixel-level reconstruction of the frame, we develop models that predict the visibility of whole B frame losses. The models are then used in a router to predict the visual impact of a frame loss and to perform intelligent frame dropping to relieve network congestion. Dropping frames based on their visibility scores proves superior to random dropping of B frames.
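The dropping policy the abstract describes can be sketched as follows: each queued B frame carries a predicted visibility score (the fraction of viewers expected to notice its loss), and under congestion the router discards the least-visible frames first, rather than picking B frames at random. This is a minimal illustrative sketch, not the paper's implementation; the frame records, field names, and scores are hypothetical.

```python
import random

def drop_by_visibility(frames, n_drop):
    """Drop the n_drop B frames whose loss is predicted to be least visible.

    Each frame is a dict with an 'id' and a 'visibility' score in [0, 1],
    assumed to be computed from network-level features (e.g. frame size,
    motion information) without pixel-level reconstruction.
    """
    ranked = sorted(frames, key=lambda f: f["visibility"])
    dropped = {f["id"] for f in ranked[:n_drop]}
    return [f for f in frames if f["id"] not in dropped]

def drop_randomly(frames, n_drop, seed=0):
    """Baseline for comparison: drop n_drop B frames chosen at random."""
    rng = random.Random(seed)
    dropped = {f["id"] for f in rng.sample(frames, n_drop)}
    return [f for f in frames if f["id"] not in dropped]

if __name__ == "__main__":
    queue = [
        {"id": 0, "visibility": 0.05},
        {"id": 1, "visibility": 0.80},
        {"id": 2, "visibility": 0.10},
        {"id": 3, "visibility": 0.45},
    ]
    # Congestion forces two drops: the two least-visible losses (ids 0, 2) go first.
    kept = drop_by_visibility(queue, n_drop=2)
    print([f["id"] for f in kept])  # [1, 3]
```

The comparison in the abstract amounts to measuring total perceived impact (e.g. summed visibility of dropped frames) under each policy; the score-ranked policy minimizes it by construction.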


