Negative samples, whose class labels are not included in the training set, are commonly classified into random classes with high confidence, which severely limits the applicability of traditional models. To address this problem, we propose an approach called Negative-Aware Training (NAT), which introduces negative samples and trains on them along with the original training set. The objective function of NAT forces the classifier to output equal probability for every class on negative samples, while all other settings remain unchanged. Moreover, we introduce NAT into GANs and propose NAT-GAN, in which the discriminator distinguishes both generated samples and negative samples from real ones. With the assistance of NAT, NAT-GAN finds more accurate decision boundaries and thus converges more stably and quickly. Experimental results on synthetic and real-world datasets demonstrate that: 1) NAT achieves better performance on negative samples, as measured by our proposed negative confidence rate metric; 2) NAT-GAN obtains better quality scores than several traditional GANs and achieves a state-of-the-art Inception Score (9.2) on CIFAR-10. Our demo and code are available at https://natpaper.github.io.
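A minimal sketch of the objective described above, assuming a $K$-class classifier with softmax output $p_\theta(k \mid x)$, a labeled training set $\mathcal{D}$, a negative-sample set $\mathcal{N}$, and an assumed weighting hyperparameter $\lambda$ (the uniform-target term is our reading of "equal probability for each class"):
\[
\mathcal{L}_{\text{NAT}}
= \mathbb{E}_{(x,y)\sim\mathcal{D}}\big[-\log p_\theta(y \mid x)\big]
+ \lambda\,\mathbb{E}_{x^{-}\sim\mathcal{N}}\Big[\sum_{k=1}^{K} \tfrac{1}{K}\,\big(-\log p_\theta(k \mid x^{-})\big)\Big],
\]
i.e., standard cross-entropy on the training set plus cross-entropy against the uniform distribution on negative samples, which (up to a constant) penalizes the KL divergence between the uniform distribution and the classifier's output on negatives.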