Venue: Asian Conference on Computer Vision

Adaptive Unsupervised Multi-view Feature Selection for Visual Concept Recognition



Abstract

To reveal and leverage the correlated and complementary information between different views, a great number of multi-view learning algorithms have been proposed in recent years. However, unsupervised feature selection in multi-view learning remains a challenge due to the lack of data labels that could be utilized to select discriminative features. Moreover, most traditional feature selection methods are developed for single-view data and are not directly applicable to multi-view data. Therefore, we propose an unsupervised learning method called Adaptive Unsupervised Multi-view Feature Selection (AUMFS) in this paper. AUMFS attempts to jointly exploit three kinds of vital information contained in the original data, i.e., the data cluster structure, data similarity, and the correlations between different views, for feature selection. To achieve this goal, a robust sparse regression model with the l_(2,1)-norm penalty is introduced to predict data cluster labels, and at the same time, multiple view-dependent visual similarity graphs are constructed to flexibly model the visual similarity in each view. AUMFS then integrates data cluster label prediction and adaptive multi-view visual similarity graph learning into a unified framework. To solve the objective function of AUMFS, a simple yet efficient iterative method is proposed. We apply AUMFS to three visual concept recognition applications (i.e., social image concept recognition, object recognition, and video-based human action recognition) on four benchmark datasets. Experimental results show that the proposed method significantly outperforms several state-of-the-art feature selection methods. More importantly, our method is not very sensitive to its parameters, and the optimization method converges quickly.
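To give a concrete flavor of the l_(2,1)-norm sparse regression component the abstract mentions, the following is a minimal sketch, not the authors' full AUMFS objective: it keeps only an l_(2,1) penalty on the projection matrix W (the paper additionally uses a robust l_(2,1) loss and the multi-view similarity graph terms) and solves min_W ||XW - F||_F^2 + gamma * ||W||_{2,1} by the standard iteratively reweighted closed-form update. The function name and the synthetic pseudo cluster-label matrix F are illustrative assumptions.

```python
import numpy as np

def l21_regression_feature_scores(X, F, gamma=1.0, n_iter=50, eps=1e-8):
    """Score features via l_(2,1)-regularized regression.

    Solves  min_W ||X W - F||_F^2 + gamma * ||W||_{2,1}
    with the usual iteratively reweighted scheme: for fixed diagonal
    reweighting matrix D, W has the closed form
        W = (X^T X + gamma * D)^{-1} X^T F,
    and D is then refreshed from the row norms of W. The row-sparse W
    lets us rank features by the l2 norm of their rows.

    NOTE: this is a simplified stand-in for the robust sparse regression
    term in AUMFS, not the paper's full objective.
    """
    n, d = X.shape
    D = np.eye(d)  # diagonal reweighting matrix, initialized to identity
    for _ in range(n_iter):
        W = np.linalg.solve(X.T @ X + gamma * D, X.T @ F)
        row_norms = np.sqrt((W ** 2).sum(axis=1)) + eps  # eps avoids div-by-zero
        D = np.diag(1.0 / (2.0 * row_norms))
    # per-feature importance: l2 norm of each row of W
    return np.sqrt((W ** 2).sum(axis=1))

# Toy usage: F plays the role of the predicted cluster-label matrix;
# here it depends only on the first 3 features, which should rank highest.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))
F = X[:, :3] @ rng.normal(size=(3, 4))
scores = l21_regression_feature_scores(X, F, gamma=0.5)
top_k = np.argsort(scores)[::-1][:5]  # indices of the 5 top-ranked features
```

Because the l_(2,1) norm sums the l2 norms of W's rows, it drives entire rows (features) to zero jointly across all cluster-label columns, which is why it is the penalty of choice for feature selection rather than a plain elementwise l1 penalty.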


