From TV-L1 to Gated Recurrent Nets

Abstract

TV-L1 is a classical diffusion-reaction model for low-level vision tasks, which can be solved by a duality-based iterative algorithm. Motivated by the recent success of end-to-end learned representations, we propose TV-LSTM, a network that unfolds the duality-based iterations into long short-term memory (LSTM) cells. To make the network trainable, we relax the difference operators in the gate and cell updates of TV-LSTM into trainable parameters. The resulting end-to-end trainable TV-LSTMs connect naturally with various task-specific networks, e.g., for optical flow estimation and image decomposition. Extensive experiments on optical flow estimation and structure + texture decomposition demonstrate the effectiveness and efficiency of the proposed method.
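The unrolling idea behind TV-LSTM can be illustrated with a minimal sketch. The code below implements Chambolle's duality-based projection iteration for the closely related ROF (TV-L2) denoising model, used here as an illustrative stand-in since the paper's TV-L1 iterations share the same fixed, repeated update structure; the function names `tv_denoise`, `grad`, and `div` are this sketch's own, not the paper's API. Each pass of the loop is the fixed-function analogue of one recurrent cell: a TV-LSTM would relax the hand-crafted `grad`/`div` difference operators into trainable filters and gate the update as in an LSTM.

```python
import numpy as np

def grad(u):
    """Forward-difference gradient with zero flux at the right/bottom border."""
    gx = np.zeros_like(u)
    gy = np.zeros_like(u)
    gx[:, :-1] = u[:, 1:] - u[:, :-1]
    gy[:-1, :] = u[1:, :] - u[:-1, :]
    return gx, gy

def div(px, py):
    """Discrete divergence, the negative adjoint of grad."""
    d = np.zeros_like(px)
    d[:, 0] += px[:, 0]
    d[:, 1:] += px[:, 1:] - px[:, :-1]
    d[0, :] += py[0, :]
    d[1:, :] += py[1:, :] - py[:-1, :]
    return d

def tv_denoise(f, lam=0.1, tau=0.125, n_iter=100):
    """Chambolle's dual projection iteration for TV denoising.

    Unrolling view: each pass of this loop becomes one recurrent cell;
    a learned variant replaces grad/div with trainable parameters.
    tau <= 1/8 guarantees convergence of the dual variable (px, py).
    """
    px = np.zeros_like(f)
    py = np.zeros_like(f)
    for _ in range(n_iter):
        gx, gy = grad(div(px, py) - f / lam)
        norm = 1.0 + tau * np.sqrt(gx ** 2 + gy ** 2)
        px = (px + tau * gx) / norm
        py = (py + tau * gy) / norm
    # Primal solution recovered from the converged dual variable.
    return f - lam * div(px, py)

def total_variation(u):
    gx, gy = grad(u)
    return np.sqrt(gx ** 2 + gy ** 2).sum()

# Demo: denoise a noisy step edge; total variation should drop
# while the structural edge survives.
f = np.zeros((32, 32))
f[:, 16:] = 1.0
f += np.random.default_rng(0).normal(0.0, 0.1, f.shape)
u = tv_denoise(f)
```

Because the same update is applied at every step, the loop body maps one-to-one onto a recurrent cell, which is what makes the subsequent relaxation to trainable, task-specific parameters natural.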
