Pattern Recognition Letters

Learning maximum excluding ellipsoids from imbalanced data with theoretical guarantees


Abstract

In this paper, we address the problem of learning from imbalanced data. We consider the scenario where the number of negative examples is much larger than the number of positive ones. We propose a theoretically-founded method which learns a set of local ellipsoids centered at the minority class examples while excluding the negative examples of the majority class. We address this task from a Mahalanobis-like metric learning point of view and we derive generalization guarantees on the learned metric using the uniform stability framework. Our experimental evaluation on classic benchmarks and on a proprietary dataset in bank fraud detection shows the effectiveness of our approach, particularly when the imbalance is severe. (c) 2018 Elsevier B.V. All rights reserved.
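To make the geometric idea in the abstract concrete, here is a minimal sketch of a Mahalanobis-like ellipsoid centered at a minority-class example. This is not the paper's optimization procedure: the metric matrix `M`, the `radius`, and all names below are hand-picked illustrations, and the learned metric in the paper would instead be fit so that the ellipsoids exclude the majority-class points.

```python
import numpy as np

def mahalanobis_sq(x, center, M):
    """Squared Mahalanobis-like distance (x - center)^T M (x - center)."""
    d = x - center
    return float(d @ M @ d)

def inside_ellipsoid(x, center, M, radius=1.0):
    """A point lies inside the ellipsoid iff its squared distance <= radius^2."""
    return mahalanobis_sq(x, center, M) <= radius ** 2

# A minority-class (positive) example serving as the ellipsoid's center.
center = np.array([0.0, 0.0])

# Hand-picked positive semi-definite metric: the ellipsoid is tighter
# along the first axis than the second. In the paper this matrix is learned.
M = np.array([[4.0, 0.0],
              [0.0, 1.0]])

# Majority-class (negative) points the learned ellipsoid should exclude.
negatives = np.array([[0.6, 0.0],
                      [0.0, 0.9],
                      [2.0, 2.0]])

# Count how many negatives this ellipsoid still contains; the learning
# objective described in the abstract drives this count toward zero
# while keeping the ellipsoid as large as possible.
violations = sum(inside_ellipsoid(x, center, M) for x in negatives)
```

With this toy metric, only the negative at `[0.0, 0.9]` falls inside the unit-radius ellipsoid, so `violations` is 1; shrinking the radius or rescaling `M` would exclude it as well, which is the trade-off the maximum-excluding formulation balances.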

