IEEE Transactions on Pattern Analysis and Machine Intelligence

Virtual Adversarial Training: A Regularization Method for Supervised and Semi-Supervised Learning



Abstract

We propose a new regularization method based on virtual adversarial loss: a new measure of local smoothness of the conditional label distribution given the input. Virtual adversarial loss is defined as the robustness of the conditional label distribution around each input data point against local perturbation. Unlike adversarial training, our method defines the adversarial direction without label information and is hence applicable to semi-supervised learning. Because the directions in which we smooth the model are only "virtually" adversarial, we call our method virtual adversarial training (VAT). The computational cost of VAT is relatively low. For neural networks, the approximated gradient of virtual adversarial loss can be computed with no more than two pairs of forward- and back-propagations. In our experiments, we applied VAT to supervised and semi-supervised learning tasks on multiple benchmark datasets. With a simple enhancement of the algorithm based on the entropy minimization principle, our VAT achieves state-of-the-art performance for semi-supervised learning tasks on SVHN and CIFAR-10.
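The abstract's key computational claim — that the adversarial direction can be found without labels, using only a couple of extra forward/backward passes via power iteration — can be sketched as follows. This is an illustrative toy, not the paper's code: the linear-softmax model, the function names (`vat_loss`, `softmax`, `kl`), and the closed-form KL gradient `(q - p) @ W.T` are assumptions that hold only for this toy model; a real implementation would use automatic differentiation through an arbitrary network.

```python
# Minimal sketch of the virtual adversarial loss for a toy
# linear-softmax classifier p(y|x) = softmax(x @ W).
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))  # numerically stable
    return e / e.sum(axis=1, keepdims=True)

def kl(p, q, eps=1e-12):
    # Row-wise KL divergence KL(p || q).
    return np.sum(p * (np.log(p + eps) - np.log(q + eps)), axis=1)

def vat_loss(W, x, xi=1e-6, epsilon=1.0, n_power=1, seed=0):
    """Virtual adversarial loss (local distributional smoothness).

    Power iteration: start from a random unit direction d and repeatedly
    replace it with the normalized gradient of KL(p(.|x) || p(.|x + xi*d))
    with respect to the perturbation. No labels are used anywhere: the
    model's own prediction p plays the role of the target distribution.
    For this toy model the KL gradient w.r.t. the perturbation r is
    (q - p) @ W.T, where q = softmax((x + r) @ W).
    """
    rng = np.random.default_rng(seed)
    p = softmax(x @ W)                    # current predictions, treated as constant
    d = rng.standard_normal(x.shape)
    d /= np.linalg.norm(d, axis=1, keepdims=True) + 1e-12
    for _ in range(n_power):
        q = softmax((x + xi * d) @ W)
        g = (q - p) @ W.T                 # analytic KL gradient for this toy model
        d = g / (np.linalg.norm(g, axis=1, keepdims=True) + 1e-12)
    r_vadv = epsilon * d                  # virtual adversarial perturbation
    q_adv = softmax((x + r_vadv) @ W)
    return kl(p, q_adv).mean()
```

In the paper's general setting, each power-iteration step costs one forward and one backward pass through the network, which is where the "no more than two pairs of forward- and back-propagations" figure comes from when a single iteration suffices.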


