Annual Conference on Neural Information Processing Systems (NIPS)

Bidirectional Recurrent Convolutional Networks for Multi-Frame Super-Resolution



Abstract

Super-resolving a low-resolution video is usually handled by either single-image super-resolution (SR) or multi-frame SR. Single-image SR processes each video frame independently and ignores the intrinsic temporal dependency between video frames, which in fact plays a very important role in video super-resolution. Multi-frame SR generally extracts motion information, e.g., optical flow, to model the temporal dependency, which often incurs a high computational cost. Considering that recurrent neural networks (RNNs) can model long-term contextual information of temporal sequences well, we propose a bidirectional recurrent convolutional network for efficient multi-frame SR. Different from vanilla RNNs, 1) the commonly used recurrent full connections are replaced with weight-sharing convolutional connections, and 2) conditional convolutional connections from previous input layers to the current hidden layer are added to enhance visual-temporal dependency modelling. With this powerful temporal dependency modelling, our model can super-resolve videos with complex motions and achieve state-of-the-art performance. Thanks to the cheap convolution operations, our model has low computational complexity and runs orders of magnitude faster than other multi-frame methods.
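The abstract names two departures from a vanilla RNN: recurrent full connections become weight-sharing convolutions, and a conditional convolution from the previous input frame feeds the current hidden layer. A minimal single-channel sketch of one forward recurrent step is below; the filter names (`Wv`, `Wr`, `Wc`), the single-filter setting, and the ReLU nonlinearity are illustrative assumptions, not the paper's exact configuration (which uses multi-channel filter banks and a backward pass as well):

```python
import numpy as np

def conv2d_same(x, k):
    """Naive single-channel 'same' cross-correlation with zero padding."""
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    out = np.zeros_like(x, dtype=float)
    H, W = x.shape
    for i in range(H):
        for j in range(W):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * k)
    return out

def brcn_step(x_t, x_prev, h_prev, Wv, Wr, Wc, b=0.0):
    """One forward recurrent step, following the abstract's description:
    - Wv: feed-forward convolution on the current frame x_t
    - Wr: weight-sharing recurrent convolution on the previous hidden map h_prev
          (replaces a vanilla RNN's recurrent full connection)
    - Wc: conditional convolution from the previous input frame x_prev
    """
    pre = (conv2d_same(x_t, Wv)
           + conv2d_same(h_prev, Wr)
           + conv2d_same(x_prev, Wc)
           + b)
    return np.maximum(pre, 0.0)  # ReLU nonlinearity (an assumption)
```

Because every connection is a small convolution rather than a dense matrix product, the per-frame cost grows linearly with the number of pixels, which is the source of the speed advantage the abstract claims over flow-based multi-frame methods.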

