
Deeper and wider fully convolutional network coupled with conditional random fields for scene labeling



Abstract

Deep convolutional neural networks (DCNNs) have been applied with great success to many computer vision tasks thanks to the robustness of their learned features. One advantage of DCNNs is that their representations are robust to object location, which benefits object recognition. However, this invariance also discards spatial information, which matters when the topology of the image is important (e.g., scene labeling, face recognition). In this paper, we propose a deeper and wider network architecture for the scene labeling task. Depth is achieved by incorporating predictions from multiple early layers of the DCNN; width is achieved by combining multiple outputs of the network. We then refine the parsing further by adopting graphical models (GMs) as a post-processing step that injects spatial and contextual information into the predictions. This strategy of a deeper, wider convolutional network coupled with graphical models shows promising results on the PASCAL-Context dataset.
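The abstract describes two concrete mechanisms: fusing class-score maps predicted at multiple layers/resolutions ("deeper and wider"), and smoothing the result with a graphical model. The following is a minimal NumPy sketch of that idea, not the authors' implementation: the nearest-neighbour upsampling, the summation fusion, and the simple Potts-model ICM smoother are illustrative stand-ins for the paper's actual fusion and CRF steps.

```python
import numpy as np

def upsample(scores, factor):
    """Nearest-neighbour upsampling of a (C, H, W) score map."""
    return scores.repeat(factor, axis=1).repeat(factor, axis=2)

def fuse_predictions(score_maps, size):
    """Sum per-layer class scores after upsampling each map to `size`."""
    n_classes = score_maps[0].shape[0]
    fused = np.zeros((n_classes, *size))
    for s in score_maps:
        fused += upsample(s, size[0] // s.shape[1])
    return fused

def icm_smooth(unary, beta=0.5, iters=3):
    """Iterated conditional modes on a Potts model:
    E(c) = -unary[c, y, x] + beta * (# of 4-neighbours labeled differently).
    A toy stand-in for the CRF post-processing in the abstract."""
    C, H, W = unary.shape
    labels = unary.argmax(axis=0)
    for _ in range(iters):
        for y in range(H):
            for x in range(W):
                energies = []
                for c in range(C):
                    e = -unary[c, y, x]
                    for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < H and 0 <= nx < W and labels[ny, nx] != c:
                            e += beta
                    energies.append(e)
                labels[y, x] = int(np.argmin(energies))
    return labels

# Toy example: 3 classes, score maps at full, 1/2, and 1/4 resolution.
rng = np.random.default_rng(0)
maps = [rng.standard_normal((3, 8, 8)),
        rng.standard_normal((3, 4, 4)),
        rng.standard_normal((3, 2, 2))]
fused = fuse_predictions(maps, (8, 8))  # "wider": combine multiple outputs
labels = icm_smooth(fused)              # spatial smoothing, CRF stand-in
print(labels.shape)  # (8, 8)
```

In the paper, the fusion operates on learned score layers inside the network and the post-processing uses conditional random fields; here both are reduced to their simplest recognizable forms.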
机译:由于深度卷积神经网络(DCNN)在特征学习中的强大功能,因此已在许多计算机视觉任务中得到了成功的应用。 DCNN的优点之一是它们对对象位置的表示稳健性,这对于对象识别任务很有用。但是,这也会丢弃空间信息,这在处理图像的拓扑信息(例如场景标记,面部识别)时非常有用。在本文中,我们提出了一种更深,更广泛的网络体系结构来解决场景标记任务。通过合并来自DCNN多个早期层的预测来实现深度。通过组合网络的多个输出来实现宽度。然后,我们通过采用图形模型(GM)作为将空间和上下文信息合并到网络中的后处理步骤,进一步完善解析任务。用于更深,更广泛的卷积网络以及图形模型的新策略已在PASCAL-Context数据集上显示出令人鼓舞的结果。
