Australasian Joint Conference on Artificial Intelligence

A Gradient-Based Metric Learning Algorithm for k-NN Classifiers



Abstract

Nearest Neighbor (NN) classification and regression techniques are, despite their simplicity, among the most widely applied and well-studied pattern recognition techniques in machine learning. A drawback, however, is their reliance on a suitable metric for measuring distances to the k nearest neighbors. It has been shown that k-NN classifiers with a well-chosen distance metric can outperform more sophisticated alternatives such as Support Vector Machines and Gaussian Process classifiers. For this reason, much recent research on k-NN methods has focused on metric learning, i.e. finding an optimized metric. In this paper we propose a simple gradient-based algorithm for metric learning. We discuss in detail the two motivations behind metric learning: error minimization and margin maximization. Our formulation differs from the prevalent metric learning techniques, whose goal is to maximize the classifier's margin; instead, our proposed technique (MEGM) finds an optimal metric by directly minimizing the mean squared error. Our technique not only greatly improves k-NN performance, but also outperforms competing metric learning techniques. Promising results are reported on major UCI machine learning repository datasets.
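
The abstract does not spell out the objective, but the core idea it describes (gradient descent on a mean-squared-error objective over a learned distance) can be sketched. The sketch below is an illustrative assumption rather than the authors' MEGM implementation: it learns a linear transform L of the input space, makes the NN prediction differentiable with soft, kernel-weighted leave-one-out neighbor weights, and uses a finite-difference gradient purely for brevity.

```python
# Illustrative sketch of MSE-driven metric learning for k-NN.
# Assumptions (not from the paper): a squared-exponential soft-neighbor
# weighting, a full linear transform L, and finite-difference gradients.
import numpy as np

def soft_knn_mse(L_flat, X, y, d):
    """Leave-one-out MSE of a soft (kernel-weighted) NN predictor under
    the transform L, with distances measured as ||L x_i - L x_j||^2
    (a Mahalanobis-style learned metric)."""
    L = L_flat.reshape(d, d)
    Z = X @ L.T                                    # project: z_i = L x_i
    D = np.sum((Z[:, None, :] - Z[None, :, :]) ** 2, axis=-1)
    np.fill_diagonal(D, np.inf)                    # exclude self-matches
    W = np.exp(-D)                                 # closer points weigh more
    W /= W.sum(axis=1, keepdims=True) + 1e-12      # normalize neighbor weights
    y_hat = W @ y                                  # soft vote over labels
    return np.mean((y_hat - y) ** 2)               # the error being minimized

def learn_metric(X, y, lr=0.5, steps=100, eps=1e-5):
    """Plain gradient descent on the MSE objective; the gradient is taken
    by finite differences here only to keep the sketch short."""
    d = X.shape[1]
    L = np.eye(d).ravel()                          # start from the identity
    for _ in range(steps):
        f0 = soft_knn_mse(L, X, y, d)
        grad = np.zeros_like(L)
        for i in range(L.size):
            Lp = L.copy()
            Lp[i] += eps
            grad[i] = (soft_knn_mse(Lp, X, y, d) - f0) / eps
        L -= lr * grad
    return L.reshape(d, d)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy binary problem: the label depends on feature 0 only; feature 1
    # is noise that a good learned metric should shrink toward zero.
    X = rng.normal(size=(60, 2))
    y = (X[:, 0] > 0).astype(float)
    print("learned transform:\n", learn_metric(X, y))
```

On this toy problem, gradient descent should drive the entries of L that mix in the noisy second feature toward zero, which is precisely the effect a learned k-NN metric is meant to achieve.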
