In this paper, we propose a novel learning method for image classification called Between-Class learning (BC learning). We generate between-class images by mixing two images belonging to different classes with a random ratio. We then input the mixed image to the model and train the model to output the mixing ratio. BC learning imposes a constraint on the shape of the feature distributions, and thus the generalization ability is improved. BC learning was originally developed for sounds, which can be digitally mixed. Mixing two images does not appear to make sense; however, we argue that because convolutional neural networks have an aspect of treating input data as waveforms, what works on sounds should also work on images. First, we propose a simple mixing method using internal divisions, which surprisingly proves to significantly improve performance. Second, we propose a mixing method that treats the images as waveforms, which leads to a further improvement in performance. As a result, we achieved 19.4% and 2.26% top-1 errors on ImageNet-1K and CIFAR-10, respectively.
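The simple internal-division mixing described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names and the soft-label construction (placing ratio r on one class and 1 − r on the other) are our assumptions about how the mixing ratio is exposed as a training target.

```python
import numpy as np

def bc_mix(x1, x2, rng=None):
    """Mix two images (arrays of the same shape) by a random internal
    division: r * x1 + (1 - r) * x2, with r drawn uniformly from [0, 1].
    Returns the mixed image and the ratio r."""
    rng = np.random.default_rng() if rng is None else rng
    r = float(rng.uniform(0.0, 1.0))
    mixed = r * x1 + (1.0 - r) * x2
    return mixed, r

def bc_label(y1, y2, r, num_classes):
    """Soft label for a between-class image: the model is trained to
    output the mixing ratio, so class y1 gets weight r and class y2
    gets weight 1 - r (y1 != y2 since the images come from
    different classes)."""
    t = np.zeros(num_classes, dtype=np.float32)
    t[y1] = r
    t[y2] = 1.0 - r
    return t

# Example: mix a white and a black 2x2 "image" from classes 0 and 1.
rng = np.random.default_rng(0)
x1, x2 = np.ones((2, 2)), np.zeros((2, 2))
mixed, r = bc_mix(x1, x2, rng)
target = bc_label(0, 1, r, num_classes=10)
```

The second mixing method in the abstract, which treats images as waveforms, would additionally account for per-image statistics before mixing; it is not shown here since the abstract gives no details.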