
G-MS2F: GoogLeNet based multi-stage feature fusion of deep CNN for scene recognition



Abstract

Scene recognition plays an important role in visual information retrieval, segmentation, and image/video understanding. Traditional approaches to scene recognition usually rely on handcrafted features, whose poor representation ability can be improved by employing deep convolutional neural network (CNN) features, which carry more semantic and structural information and are therefore more discriminative thanks to multiple linear and non-linear transformations. However, a considerable amount of detailed information may be lost when only the final output features, which have passed through all of these transformations, are applied to scene recognition; the features generated by the intermediate layers are not fully utilized. In this work, the GoogLeNet model is employed and divided into three parts of layers from bottom to top. The output features from each of the three parts are applied to scene recognition, which leads to the proposed GoogLeNet-based multi-stage feature fusion (G-MS2F). Moreover, the product rule is used to generate the final scene-recognition decision from the three outputs corresponding to the three parts of the proposed model. The experimental results demonstrate that the proposed model is superior to a number of state-of-the-art CNN models for scene recognition, achieving recognition accuracies of 92.90%, 79.63% and 64.06% on the benchmark scene recognition datasets Scene15, MIT67 and SUN397, respectively.
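The product-rule decision fusion described in the abstract can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the three probability vectors stand in for the softmax outputs of the classifiers attached to the three parts of the network, and the class count and values are hypothetical.

```python
import numpy as np

def product_rule_fuse(stage_probs):
    """Fuse per-stage class-probability vectors by the product rule.

    stage_probs: list of 1-D arrays, each a probability distribution
    over the same set of classes. The fused score of each class is the
    product of its per-stage probabilities; the predicted label is the
    argmax of the fused (renormalized) distribution.
    """
    fused = np.prod(np.stack(stage_probs), axis=0)
    fused = fused / fused.sum()  # renormalize to a valid distribution
    return fused, int(np.argmax(fused))

# Hypothetical softmax outputs for a 4-class problem from the three stages
p1 = np.array([0.10, 0.60, 0.20, 0.10])
p2 = np.array([0.20, 0.50, 0.20, 0.10])
p3 = np.array([0.25, 0.40, 0.25, 0.10])
fused, label = product_rule_fuse([p1, p2, p3])
# class 1 has the largest product of per-stage probabilities, so it wins
```

Because the product penalizes any class that a single stage scores near zero, this rule tends to favor classes on which all three stages agree.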

Bibliographic record

  • Source
    Neurocomputing | 2017, No. 15 | pp. 188-197 | 10 pages
  • Author affiliations

    Tongji Univ, Dept Comp Sci & Technol, Shanghai 201804, Peoples R China|Tongji Univ, Minist Educ, Key Lab Embedded Syst & Serv Comp, Shanghai 200092, Peoples R China|Jinggangshan Univ, Coll Math & Phys, Jian 343009, Jiangxi, Peoples R China;

    Tongji Univ, Dept Comp Sci & Technol, Shanghai 201804, Peoples R China|Tongji Univ, Minist Educ, Key Lab Embedded Syst & Serv Comp, Shanghai 200092, Peoples R China;

    City Univ Hong Kong, Dept Comp Sci, Hong Kong, Hong Kong, Peoples R China;

  • Indexed in: Science Citation Index (SCI, USA); Engineering Index (EI, USA)
  • Original format: PDF
  • Language: English
  • CLC classification
  • Keywords

    Scene recognition; Convolutional neural network; Multi-stage feature; Feature fusion; GoogLeNet;

