Journal: IEEE Transactions on Image Processing

Decomposition-Based Transfer Distance Metric Learning for Image Classification



Abstract

Distance metric learning (DML) is a critical factor in image analysis and pattern recognition. To learn a robust distance metric for a target task, we need abundant side information (i.e., similarity/dissimilarity pairwise constraints over the labeled data), which is usually unavailable in practice due to the high labeling cost. This paper considers the transfer learning setting, exploiting the large quantity of side information from certain related but different source tasks to help with target metric learning (for which only a little side information is available). State-of-the-art metric learning algorithms usually fail in this setting because the data distributions of the source and target tasks are often quite different. We address this problem by assuming that the target distance metric lies in the space spanned by the eigenvectors of the source metrics (or other randomly generated bases). The target metric is represented as a combination of base metrics, which are computed from the decomposed components of the source metrics (or simply a set of random bases); we call the proposed method decomposition-based transfer DML (DTDML). In particular, DTDML learns a sparse combination of the base metrics to construct the target metric by forcing the target metric to be close to an integration of the source metrics. The main advantage of the proposed method over existing transfer metric learning approaches is that we directly learn the base metric coefficients instead of the target metric itself. Far fewer variables therefore need to be learned, so we obtain more reliable solutions given the limited side information, and the optimization tends to be faster. Experiments on popular handwritten image (digit, letter) classification tasks and a challenging natural image annotation task demonstrate the effectiveness of the proposed method.
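The core representation described in the abstract can be sketched in a few lines: eigen-decompose each source metric into rank-1 base metrics, then express the target metric as a sparse nonnegative combination of those bases. This is a minimal illustrative sketch only, not the paper's actual optimization; the coefficient values here are toy placeholders standing in for the learned sparse combination.

```python
import numpy as np

def base_metrics_from_sources(source_metrics):
    """Decompose each PSD source metric M = sum_i lam_i u_i u_i^T and
    return the rank-1 base metrics u_i u_i^T (illustrative sketch)."""
    bases = []
    for M in source_metrics:
        vals, vecs = np.linalg.eigh(M)
        for lam, u in zip(vals, vecs.T):
            if lam > 1e-10:  # keep components with non-negligible eigenvalues
                bases.append(np.outer(u, u))
    return bases

def combine(bases, coeffs):
    """Target metric as a sparse nonnegative combination of base metrics.
    In DTDML these coefficients are learned; here they are given."""
    return sum(a * B for a, B in zip(coeffs, bases) if a > 0)

def mahalanobis(x, y, M):
    """Squared Mahalanobis distance (x - y)^T M (x - y) under metric M."""
    d = x - y
    return float(d @ M @ d)
```

Because each base is a rank-1 PSD matrix and the coefficients are nonnegative, the combined target metric is automatically symmetric positive semidefinite, which is why learning only the (few) coefficients is cheaper and more reliable than learning the full metric from scarce target-side information.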
