International Conference on Digital Image Processing

Visual Attention based Bag-of-Words Model for Image Classification


Abstract

Bag-of-words is a classical method for image classification. Its core problems are how to count the frequencies of the visual words and which visual words to select. In this paper, we propose a visual attention based bag-of-words model (VABOW model) for the image classification task. The VABOW model uses a visual attention method to generate a saliency map, and applies the saliency map as a weighting matrix to guide the counting of visual word frequencies. In addition, the VABOW model combines shape, color, and texture cues, and uses L1-regularized logistic regression to select the most relevant and most efficient features. We compare our approach with the traditional bag-of-words method on two datasets, and the results show that our VABOW model outperforms state-of-the-art methods for image classification.
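The saliency-weighted counting step can be illustrated with a short sketch. The following Python snippet is a minimal illustration, not the authors' implementation: it assumes each local descriptor has already been assigned to a visual word and that a saliency map at image resolution is available; the function name and array layout are hypothetical.

```python
import numpy as np

def saliency_weighted_bow(keypoint_locations, word_assignments, saliency_map, num_words):
    """Build a bag-of-words histogram in which each visual word occurrence is
    weighted by the saliency value at its keypoint location, rather than
    counted equally (hypothetical sketch of the idea in the abstract)."""
    hist = np.zeros(num_words, dtype=np.float64)
    for (x, y), word in zip(keypoint_locations, word_assignments):
        hist[word] += saliency_map[y, x]  # attention value, e.g. in [0, 1]
    # L1-normalize so images with different numbers of keypoints are comparable
    total = hist.sum()
    return hist / total if total > 0 else hist
```

Under this weighting, a visual word detected in a highly salient region contributes more to the histogram than the same word appearing in the background.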
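The feature-selection step is described only as "L1 regularization logistic regression". A minimal sketch of that idea using scikit-learn (not the paper's own implementation), with placeholder random data standing in for the concatenated shape, color, and texture histograms:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.feature_selection import SelectFromModel

# Placeholder data: rows are concatenated shape/color/texture BoW histograms,
# y holds the image class labels (random values purely for illustration).
rng = np.random.default_rng(0)
X = rng.random((200, 600))
y = rng.integers(0, 5, size=200)

# The L1 penalty drives the weights of irrelevant features to zero;
# SelectFromModel keeps only features with non-negligible coefficients.
l1_model = LogisticRegression(penalty="l1", solver="liblinear", C=1.0)
selector = SelectFromModel(l1_model).fit(X, y)
X_selected = selector.transform(X)
print(X_selected.shape)  # typically fewer columns than the original 600
```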
