International Conference on Machine Learning

Revisiting Training Strategies and Generalization Performance in Deep Metric Learning

Abstract

Deep Metric Learning (DML) is arguably one of the most influential lines of research for learning visual similarities, with many approaches proposed every year. Although the field benefits from this rapid progress, the divergence in training protocols, architectures, and parameter choices makes an unbiased comparison difficult. To provide a consistent reference point, we revisit the most widely used DML objective functions and conduct a study of the crucial parameter choices as well as the commonly neglected mini-batch sampling process. Under consistent comparison, DML objectives show much higher saturation than indicated by the literature. Further, based on our analysis, we uncover a correlation between the density and compression of the embedding space and the generalization performance of DML models. Exploiting these insights, we propose a simple, yet effective, training regularization to reliably boost the performance of ranking-based DML models on various standard benchmark datasets. Code and a publicly accessible WandB-repo are available at https://github.com/Confusezius/Revisiting_Deep_Metric_Learning_PyTorch.
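The notion of embedding space density mentioned in the abstract can be made concrete with a small illustration. Below is a minimal sketch, assuming density is summarized as the ratio of mean intra-class to mean inter-class pairwise distance on unit-normalized embeddings; the function name `embedding_density` and the random data are illustrative assumptions, not taken from the paper or its repository.

```python
# Illustrative sketch of an embedding-space "density" statistic:
# the ratio of mean intra-class to mean inter-class pairwise distance.
# This is an assumption for illustration, not the paper's exact definition.
import numpy as np

def embedding_density(embeddings: np.ndarray, labels: np.ndarray) -> float:
    """Ratio of mean intra-class to mean inter-class pairwise distance.

    Lower values indicate classes are tightly clustered relative to
    their separation; values near 1 indicate little class structure.
    """
    # Full pairwise Euclidean distance matrix, shape (N, N).
    diff = embeddings[:, None, :] - embeddings[None, :, :]
    dists = np.linalg.norm(diff, axis=-1)

    same = labels[:, None] == labels[None, :]       # same-class mask
    off_diag = ~np.eye(len(labels), dtype=bool)     # exclude self-distances

    intra = dists[same & off_diag].mean()           # within-class distances
    inter = dists[~same].mean()                     # between-class distances
    return intra / inter

# Usage on random data: 128 embeddings of dimension 16 over 8 classes.
rng = np.random.default_rng(0)
emb = rng.normal(size=(128, 16))
emb /= np.linalg.norm(emb, axis=1, keepdims=True)   # unit-normalize, as is common in DML
lab = rng.integers(0, 8, size=128)
print(f"density: {embedding_density(emb, lab):.3f}")
```

On random embeddings with random labels the ratio is close to 1, since intra- and inter-class distances follow the same distribution; embeddings that cluster by class drive the ratio down.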
