International Joint Conference on Neural Networks

Learning with only multiple instance positive bags



Abstract

In traditional multiple instance learning (MIL), both positive and negative bags are required to learn a prediction function. However, labeling each bag as positive or negative carries a high human cost. Only positive bags contain our focus (positive instances), while negative bags consist of noise or background (negative instances), so we do not expect to spend much effort labeling negative bags. Contrary to this expectation, nearly all existing MIL methods require a sufficient number of negative bags in addition to positive ones. In this paper we propose an algorithm called "Positive Multiple Instance" (PMI), which learns a classifier given only a set of positive bags, so that the annotation of negative bags becomes unnecessary. PMI is built on the assumption that the unknown positive instances in the positive bags are similar to each other and form one compact cluster in feature space, while the negative instances lie outside this cluster. Experimental results on benchmark and real data sets demonstrate that PMI achieves performance close to, or slightly below, that of traditional MIL algorithms, while the number of training bags it requires is reduced significantly compared with traditional MIL methods.
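The compact-cluster assumption can be illustrated with a minimal sketch. This is not the authors' actual PMI optimization, whose details the abstract does not give; `pmi_sketch`, its alternating witness-selection scheme, and the distance-based `predict` rule are hypothetical choices used only to make the assumption concrete: every (positive) bag contributes one "witness" instance to a compact cluster, and anything outside that cluster is treated as negative.

```python
import numpy as np

def pmi_sketch(bags, n_iter=10):
    """Toy illustration of the compact-cluster assumption (hypothetical,
    not the paper's PMI algorithm): given only positive bags, alternately
    select each bag's instance nearest the current center as its putative
    positive witness, then re-estimate the center from the witnesses."""
    # Start from the mean of all instances in all (positive) bags.
    center = np.vstack(bags).mean(axis=0)
    for _ in range(n_iter):
        # Each bag's closest instance to the center is its witness.
        witnesses = [bag[np.argmin(np.linalg.norm(bag - center, axis=1))]
                     for bag in bags]
        # Re-estimate the compact cluster's center from the witnesses.
        center = np.mean(witnesses, axis=0)
    # Radius chosen so every bag places at least one instance inside.
    radius = max(np.min(np.linalg.norm(bag - center, axis=1)) for bag in bags)
    return center, radius

def predict(instances, center, radius):
    # Instances falling inside the learned compact cluster are positive.
    return np.linalg.norm(instances - center, axis=1) <= radius
```

Note that no negative bags appear anywhere above: the scattered negative instances inside the positive bags are filtered out simply because they do not concentrate anywhere in feature space, which is exactly the assumption the abstract states.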
