Neurocomputing

Smooth approximation method for non-smooth empirical risk minimization based distance metric learning

Abstract

Distance metric learning (DML) has become a very active research field in recent years. Bian and Tao (IEEE Trans. Neural Netw. Learn. Syst. 23(8) (2012) 1194-1205) presented a constrained empirical risk minimization (ERM) framework for DML. In this paper, we use a smooth approximation method to make their algorithm applicable to the non-differentiable hinge loss function. We show that the objective function with hinge loss is equivalent to a non-smooth min-max representation, from which an approximate objective function is derived. In contrast to the original objective function, the approximate one is differentiable with a Lipschitz-continuous gradient, so Nesterov's optimal first-order method can be applied directly. Finally, the effectiveness of our method is evaluated on various UCI datasets.
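
The smoothing step can be made concrete. Writing the hinge loss as max(0, 1 - z) = max_{0 <= u <= 1} u(1 - z) and subtracting the proximity term (mu/2)u^2 inside the max yields a closed-form smooth surrogate whose gradient is Lipschitz with constant 1/mu, which is exactly the structure Nesterov's optimal first-order method requires. The Python sketch below applies this idea to a Mahalanobis metric learned from labeled pairs under constraints of the form y_i (b - d_M(x_i, x_i')) >= 1; the function names (smoothed_hinge, project_psd, fit_metric), the margin parameter b, and the conservative Lipschitz bound are illustrative assumptions rather than details taken from the paper.

```python
import numpy as np

def smoothed_hinge(z, mu):
    """Nesterov smoothing of max(0, 1 - z) = max_{0<=u<=1} u(1 - z).

    Subtracting (mu/2) u^2 inside the max gives the closed form below;
    the gradient in z is -u* and is (1/mu)-Lipschitz.
    """
    u = np.clip((1.0 - z) / mu, 0.0, 1.0)   # inner maximizer u*
    return u * (1.0 - z) - 0.5 * mu * u**2, -u

def project_psd(M):
    """Euclidean projection onto the PSD cone (clip negative eigenvalues)."""
    w, V = np.linalg.eigh((M + M.T) / 2.0)
    return (V * np.maximum(w, 0.0)) @ V.T

def fit_metric(Xa, Xb, y, b=1.0, mu=0.1, n_iter=300):
    """Accelerated projected gradient on the smoothed ERM objective.

    Xa, Xb: (n, d) arrays of paired points; y in {+1, -1} marks
    similar/dissimilar pairs via the constraint y * (b - d_M) >= 1.
    """
    D = Xa - Xb
    n, d = D.shape
    # Conservative Lipschitz bound for the gradient of the smoothed loss.
    L = ((D**2).sum(axis=1) ** 2).sum() / (mu * n)
    M = np.eye(d); Y = M.copy(); t = 1.0
    for _ in range(n_iter):
        dist = np.einsum('ij,jk,ik->i', D, Y, D)   # d_M(x_i, x_i') per pair
        _, g = smoothed_hinge(y * (b - dist), mu)
        # Chain rule: d z_i / dY = -y_i * D_i D_i^T
        G = (D * (-(g * y))[:, None]).T @ D / n
        M_next = project_psd(Y - G / L)
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        Y = M_next + ((t - 1.0) / t_next) * (M_next - M)
        M, t = M_next, t_next
    return M
```

For a fixed smoothing parameter mu, the accelerated scheme converges at the O(1/k^2) rate on the smoothed objective, and mu trades smoothness against accuracy: the surrogate deviates from the true hinge loss by at most mu/2.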