IEEE Transactions on Neural Networks and Learning Systems

Context Dependent Encoding Using Convolutional Dynamic Networks


Abstract

Perception of sensory signals is strongly influenced by their context, both in space and time. In this paper, we propose a novel hierarchical model, called convolutional dynamic networks, that effectively utilizes this contextual information while inferring the representations of the visual inputs. We build this model based on a predictive coding framework and use the idea of empirical priors to incorporate recurrent and top-down connections. These connections endow the model with contextual information from the temporal context as well as abstract knowledge from higher layers. To perform inference efficiently in this hierarchical model, we rely on a novel scheme based on a smoothing proximal gradient method. When trained on unlabeled video sequences, the model learns a hierarchy of stable attractors, representing low-level to high-level parts of the objects. We demonstrate that the model effectively utilizes contextual information to produce robust and stable representations for object recognition in video sequences, even in the case of highly corrupted inputs.
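The abstract states that inference relies on a smoothing proximal gradient method. As a rough illustration of the general proximal-gradient family only, and not of the authors' convolutional, smoothed variant, the sketch below runs a plain ISTA-style loop for sparse code inference under a linear dictionary. The function names, the dictionary `D`, and the L1 penalty `lam` are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def soft_threshold(x, tau):
    """Proximal operator of the L1 norm (elementwise shrinkage)."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def ista_infer(y, D, lam=0.1, n_iters=100):
    """Infer sparse codes z minimizing 0.5*||y - D z||^2 + lam*||z||_1
    with a basic proximal gradient (ISTA) loop."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the data-fit gradient
    z = np.zeros(D.shape[1])
    for _ in range(n_iters):
        grad = D.T @ (D @ z - y)           # gradient of 0.5*||y - D z||^2
        z = soft_threshold(z - grad / L, lam / L)
    return z

# Toy usage with a random dictionary and a synthetic sparse signal.
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 128))
D /= np.linalg.norm(D, axis=0)             # unit-norm dictionary atoms
y = D @ (rng.standard_normal(128) * (rng.random(128) < 0.1))
z_hat = ista_infer(y, D)
```

In the paper's setting the data-fit and prior terms additionally couple codes across layers and time steps (the recurrent and top-down empirical priors), which is what motivates the smoothed proximal scheme; this sketch only shows the basic proximal update that such schemes build on.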
