Journal: Neurocomputing

Local visual feature fusion via maximum margin multimodal deep neural network


Abstract

In this letter, we consider improving image categorization performance by exploiting multiple local descriptors of an image. To this end, a novel deep learning configuration called the maximum margin multimodal deep neural network (3mDNN) is proposed to learn a joint feature from different data views. The local feature representations encoded by 3mDNN exhibit two significant advantages: (1) they incorporate the information of multiple descriptors, and (2) they are discriminative. The whole deep architecture is trained with the standard back-propagation (BP) method, and its performance is verified on three benchmark image datasets. (C) 2015 Elsevier B.V. All rights reserved.
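The abstract describes fusing several local-descriptor "views" of an image into one joint feature and training it with a maximum-margin objective. Below is a minimal NumPy sketch of that general idea, not the authors' actual 3mDNN: two hypothetical descriptor views are each passed through their own linear layer, concatenated into a joint representation, and scored under a multiclass hinge (max-margin) loss. All dimensions, layer sizes, and the two-view setup are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two hypothetical local-descriptor views of 4 images
# (e.g. a SIFT-like 16-d view and a HOG-like 8-d view).
x_view1 = rng.normal(size=(4, 16))
x_view2 = rng.normal(size=(4, 8))
labels = np.array([0, 1, 0, 1])

# One linear layer per view, then concatenation into a joint feature.
W1 = rng.normal(scale=0.1, size=(16, 6))
W2 = rng.normal(scale=0.1, size=(8, 6))
h = np.tanh(np.concatenate([x_view1 @ W1, x_view2 @ W2], axis=1))  # (4, 12)

# Linear classifier on the joint feature, scored with a
# multiclass hinge (max-margin) loss with unit margin.
W_out = rng.normal(scale=0.1, size=(12, 2))
scores = h @ W_out                                   # (4, 2) class scores
correct = scores[np.arange(4), labels]               # score of true class
margins = np.maximum(0.0, scores - correct[:, None] + 1.0)
margins[np.arange(4), labels] = 0.0                  # no penalty on true class
loss = margins.sum(axis=1).mean()
print(h.shape, float(loss))
```

In the paper this objective is minimized end-to-end by back-propagation through all view-specific layers, so the fused representation itself becomes discriminative; the sketch above only shows the forward pass and the margin loss.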