Regularization methods play an important role in training artificial neural networks, improving generalization performance and preventing overfitting. In this paper, we introduce a new regularization method based on the orthogonalization of convolutional layer filters. The proposed method is easy to implement and offers plug-and-play compatibility with modern training approaches, requiring no changes or adaptations on their part. Experiments on the MNIST and CIFAR10 datasets showed that the effectiveness of the suggested method depends on the number of filters in the layer, and that the maximum quality gain is achieved for architectures with a small number of parameters, which is important for training fast and lightweight neural networks.
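The abstract does not specify the exact form of the orthogonalization term, but a common way to encourage orthogonality among a layer's filters is to penalize the off-diagonal entries of their Gram matrix. The sketch below (a generic soft-orthogonality penalty in NumPy, not necessarily the paper's exact loss; the function name and shapes are illustrative assumptions) shows the idea:

```python
import numpy as np

def orthogonality_penalty(filters):
    """Soft orthogonality penalty for a conv layer's filters.

    filters: array of shape (n_filters, in_channels, kh, kw).
    Each filter is flattened to a row vector; the penalty is the
    squared Frobenius norm of the off-diagonal part of the Gram
    matrix, which is zero exactly when distinct filters are
    mutually orthogonal.
    """
    n = filters.shape[0]
    W = filters.reshape(n, -1)                # (n_filters, d)
    gram = W @ W.T                            # pairwise dot products
    off_diag = gram - np.diag(np.diag(gram))  # drop self-products
    return float(np.sum(off_diag ** 2))

# A set of mutually orthogonal filters incurs zero penalty,
# while random filters incur a positive one.
orthogonal = np.eye(4).reshape(4, 1, 2, 2)
random = np.random.default_rng(0).normal(size=(4, 1, 2, 2))
```

In practice such a term would be scaled by a coefficient and added to the task loss during training, so the optimizer trades off task accuracy against filter orthogonality.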