Efficient Video Compressed Sensing Reconstruction via Exploiting Spatial-Temporal Correlation With Measurement Constraint

Abstract

Recent deep learning-based video compressed sensing (VCS) methods have achieved promising results but still suffer from numerous hyper-parameters and a lack of flexibility. This paper proposes a novel network for VCS, named STM-Net, which rapidly recovers high-quality video frames by selectively exploiting Spatial-Temporal information under a Measurement constraint. Combining the merits of adaptive sampling and adaptive shrinkage-thresholding, we first propose an improved ISTA-Net+ for frame-wise independent reconstruction, called the Unfolding Adaptive Shrinkage-Thresholding Network (UAST-Net). To further improve the reconstruction of non-key frames, we develop a two-phase joint deep reconstruction scheme, comprising an Occlusion-Aware Temporal Alignment step that avoids compensation from irrelevant information, and a Multiple Frames Fusion step equipped with the proposed Spatial-Temporal Feature Weighting (STFW) module to guide the extraction of salient content and the generation of discriminative features. In addition, we develop a measurement loss that shrinks the solution space and thereby facilitates network optimization. Experimental results demonstrate the superiority of the proposed STM-Net over existing methods.
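
For readers unfamiliar with the two generic ingredients named in the abstract, the PyTorch sketch below illustrates (a) one ISTA-style unfolding phase with a learnable shrinkage threshold, in the spirit of ISTA-Net+, and (b) a measurement loss that penalizes disagreement between the re-sampled reconstruction and the observed measurements. This is a minimal illustration assuming a fixed random sampling matrix Phi and vectorized frames; all names, shapes, and hyper-parameters here are hypothetical and do not come from the paper.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class UnfoldingPhase(nn.Module):
        # One ISTA-style phase: a gradient step on ||Phi x - y||^2
        # followed by soft-thresholding with a learnable threshold.
        # Hypothetical sketch; the paper's UAST-Net phases are richer.
        def __init__(self):
            super().__init__()
            self.rho = nn.Parameter(torch.tensor(0.5))     # learnable step size
            self.theta = nn.Parameter(torch.tensor(0.01))  # learnable shrinkage threshold

        def forward(self, x, y, Phi):
            r = x - self.rho * (Phi.t() @ (Phi @ x - y))          # gradient step
            return torch.sign(r) * F.relu(r.abs() - self.theta)  # shrinkage

    def measurement_loss(x_hat, y, Phi):
        # Measurement constraint: the reconstruction, re-sampled by Phi,
        # should reproduce the observed measurements y.
        return F.mse_loss(Phi @ x_hat, y)

    # Toy usage with vectorized frames of n pixels and m measurements.
    n, m = 1024, 256
    Phi = torch.randn(m, n) / m ** 0.5   # fixed random sampling matrix
    x_true = torch.randn(n, 1)
    y = Phi @ x_true                     # compressed measurements
    x = Phi.t() @ y                      # common linear initialization
    for phase in nn.ModuleList([UnfoldingPhase() for _ in range(9)]):
        x = phase(x, y, Phi)
    loss = F.mse_loss(x, x_true) + 0.1 * measurement_loss(x, y, Phi)

In a trained unfolding network, rho and theta would typically be learned per phase, and the measurement loss would be weighted against the ordinary reconstruction loss, as in the final line above.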