IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics

Mining Visual Collocation Patterns via Self-Supervised Subspace Learning



Abstract

Traditional text data mining techniques are not directly applicable to image data, which contain spatial information and are characterized by high-dimensional visual features. Discovering meaningful visual patterns from images is not a trivial task because the content variations and spatial dependence in visual data greatly challenge most existing data mining methods. This paper presents a novel approach to coping with these difficulties and mining visual collocation patterns. Specifically, the novelty of this work lies in two new contributions: 1) a principled solution to the discovery of visual collocation patterns based on frequent itemset mining, and 2) a self-supervised subspace learning method that refines the visual codebook by feeding the discovered patterns back through subspace learning. The experimental results show that our method can discover semantically meaningful patterns efficiently and effectively.
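For concreteness, contribution 1) can be pictured as frequent itemset mining over "transactions" of co-occurring visual words. The sketch below is a minimal, self-contained Apriori-style illustration under assumed inputs (each transaction is taken to be the set of visual-word IDs found in one local spatial neighbourhood of an image); the function name `mine_visual_collocations` and its parameters are illustrative, not the paper's implementation.

```python
from collections import Counter
from itertools import combinations


def mine_visual_collocations(transactions, min_support=0.1, max_size=3):
    """Apriori-style mining of frequent visual-word itemsets.

    `transactions` is a list of sets; each set holds the visual-word IDs
    that co-occur in one local spatial neighbourhood (an assumption for
    this sketch). Returns a dict mapping frozenset(itemset) -> support.
    """
    n = len(transactions)
    min_count = min_support * n
    frequent = {}

    # Size-1 candidates: individual visual words.
    counts = Counter(w for t in transactions for w in t)
    current = {frozenset([w]) for w, c in counts.items() if c >= min_count}
    frequent.update({s: counts[next(iter(s))] / n for s in current})

    size = 2
    while current and size <= max_size:
        # Candidate generation: join frequent (k-1)-itemsets into k-itemsets.
        candidates = {a | b for a in current for b in current if len(a | b) == size}
        counts = Counter()
        for t in transactions:
            for c in candidates:
                if c <= t:  # candidate itemset occurs in this neighbourhood
                    counts[c] += 1
        current = {c for c in candidates if counts[c] >= min_count}
        frequent.update({c: counts[c] / n for c in current})
        size += 1
    return frequent


if __name__ == "__main__":
    # Toy transactions: visual words co-occurring in local image neighbourhoods.
    transactions = [{1, 2, 3}, {1, 2}, {2, 3, 4}, {1, 2, 3}, {3, 4}]
    patterns = mine_visual_collocations(transactions, min_support=0.4)
    for itemset, support in sorted(patterns.items(), key=lambda kv: -kv[1]):
        print(sorted(itemset), round(support, 2))
```

Frequent itemsets surviving the support threshold would then be reported as candidate visual collocation patterns.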
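Contribution 2) is described only at a high level in the abstract. The sketch below is one hedged reading of such a feedback loop, not the paper's actual procedure: discovered collocation patterns serve as pseudo-labels for the local descriptors that instantiate them, a discriminative subspace is learned from those pseudo-labels (LDA is used here purely as a stand-in subspace learner), and the codebook is rebuilt by clustering in that subspace. The helper `refine_codebook` and all parameter choices are hypothetical.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis


def refine_codebook(descriptors, pattern_labels, n_words=100, n_components=32):
    """Refine a visual codebook with pattern-derived pseudo-supervision (sketch).

    descriptors    : (N, D) array of local visual features.
    pattern_labels : length-N array assigning each descriptor to the discovered
                     collocation pattern it belongs to, used as a pseudo-class label.
    Returns the fitted subspace projector and the refined codebook centroids.
    """
    n_classes = len(np.unique(pattern_labels))
    # LDA requires n_components <= min(n_classes - 1, D).
    k = min(n_components, n_classes - 1, descriptors.shape[1])

    # Discriminative subspace learned from the pattern pseudo-labels.
    lda = LinearDiscriminantAnalysis(n_components=k)
    projected = lda.fit_transform(descriptors, pattern_labels)

    # Re-quantize in the learned subspace to obtain the refined codebook.
    kmeans = KMeans(n_clusters=n_words, n_init=10, random_state=0).fit(projected)
    return lda, kmeans.cluster_centers_
```

In this reading, the refined codebook would be used to re-encode the images, and pattern mining and subspace learning would alternate until the discovered patterns stabilize.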
