IEEE Transactions on Multimedia

Deep Reference Generation With Multi-Domain Hierarchical Constraints for Inter Prediction



Abstract

Inter prediction is an important module in video coding for temporal redundancy removal: similar reference blocks are searched from previously coded frames and employed to predict the block to be coded. Although existing video codecs can estimate and compensate for block-level motion, their inter prediction performance is still heavily affected by the remaining inconsistent pixel-wise displacement caused by irregular rotation and deformation. In this paper, we address the problem by proposing a deep frame interpolation network that generates additional reference frames in coding scenarios. First, we summarize the adaptive convolutions previously used for frame interpolation and propose a factorized kernel convolutional network that improves modeling capacity while keeping a compact form. Second, to train this network better, multi-domain hierarchical constraints are introduced to regularize its training. For the spatial domain, we use a gradually down-sampled and up-sampled auto-encoder to generate the factorized kernels for frame interpolation at different scales. For the quality domain, considering the inconsistent quality of the input frames, the factorized kernel convolution is modulated with quality-related features so that it learns to exploit more information from high-quality frames. For the frequency domain, a sum of absolute transformed differences (SATD) loss, which performs a frequency transformation, is utilized to facilitate network optimization from the viewpoint of coding performance. With this well-designed frame interpolation network regularized by multi-domain hierarchical constraints, our method surpasses HEVC with an average BD-rate saving of 3.8% for the luma component under the random access configuration, and also obtains an average 0.83% BD-rate saving over the upcoming VVC.
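To make the frequency-domain constraint concrete, below is a minimal NumPy sketch of an SATD-style measure: the residual between the interpolated and reference frames is split into small tiles, each tile is transformed with a 2-D Hadamard transform, and the absolute transformed coefficients are summed. The function name, the 4x4 block size, and the plain-NumPy formulation are illustrative assumptions; the paper's exact transform size and differentiable implementation may differ.

```python
import numpy as np

def hadamard(n):
    """Build an n x n Hadamard matrix (n a power of two) by Sylvester's construction."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

def satd_loss(pred, target, block=4):
    """Sum of absolute transformed differences between two single-channel frames.

    The residual (pred - target) is tiled into block x block patches; each
    patch is transformed as H @ patch @ H.T with a Hadamard matrix H, and the
    absolute values of all transformed coefficients are accumulated.
    """
    h, w = pred.shape
    H = hadamard(block)
    diff = pred - target
    total = 0.0
    # Iterate over full tiles only; edge remainders are ignored in this sketch.
    for y in range(0, h - h % block, block):
        for x in range(0, w - w % block, block):
            tile = diff[y:y + block, x:x + block]
            total += np.abs(H @ tile @ H.T).sum()
    return total
```

Compared with a plain L1 loss, weighting the residual in the Hadamard domain aligns the training objective with how hybrid codecs measure residual cost, which is the motivation the abstract gives for this loss.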
