
Negative-Aware Training: Be Aware of Negative Samples


Abstract

Negative samples, whose class labels are not included in the training set, are commonly classified into arbitrary classes with high confidence, which severely limits the applicability of traditional models. To solve this problem, we propose an approach called Negative-Aware Training (NAT), which introduces negative samples and trains on them alongside the original training set. The objective function of NAT forces the classifier to output equal probability for every class on negative samples, while all other settings remain unchanged. Moreover, we introduce NAT into GANs and propose NAT-GAN, in which the discriminator distinguishes both generated samples and negative samples from real samples. With the help of NAT, NAT-GAN finds more accurate decision boundaries and thus converges more steadily and faster. Experimental results on synthetic and real-world datasets demonstrate that: 1) NAT performs better on negative samples according to our proposed negative confidence rate metric; 2) NAT-GAN obtains better quality scores than several traditional GANs and achieves a state-of-the-art Inception Score of 9.2 on CIFAR-10. Our demo and code are available at https://natpaper.github.io.
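The uniform-probability objective on negative samples described above can be expressed as an ordinary cross-entropy term against a uniform target distribution, added to the standard supervised loss. The following is a minimal PyTorch-style sketch of that idea, assuming a plain softmax classifier; the function name nat_loss, the weight lambda_neg, and the batch layout are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of the NAT objective, assuming a standard softmax classifier.
# Names (nat_loss, lambda_neg) are hypothetical, not from the paper.
import torch
import torch.nn.functional as F

def nat_loss(logits_pos, labels_pos, logits_neg, lambda_neg=1.0):
    """Supervised cross-entropy on labelled samples plus a uniform-output
    term on negative samples.

    logits_pos: (B_pos, K) logits for samples from the original training set
    labels_pos: (B_pos,)   their class labels
    logits_neg: (B_neg, K) logits for negative samples (labels outside the set)
    """
    # Standard supervised term: unchanged from ordinary training.
    ce_pos = F.cross_entropy(logits_pos, labels_pos)

    # NAT term: push the predicted distribution on negatives toward the
    # uniform distribution, i.e. probability 1/K for every class. This is
    # the cross-entropy between the uniform target and the softmax output:
    # -(1/K) * sum_k log p_k, averaged over the negative batch.
    log_probs_neg = F.log_softmax(logits_neg, dim=1)
    ce_neg = -(log_probs_neg.mean(dim=1)).mean()

    return ce_pos + lambda_neg * ce_neg
```

In this sketch the total loss is minimized exactly when labelled samples are classified correctly and the classifier is maximally uncertain (uniform) on negatives, which matches the behaviour the abstract describes.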