JMLR: Workshop and Conference Proceedings

Precise Tradeoffs in Adversarial Training for Linear Regression



Abstract

Despite breakthrough performance, modern learning models are known to be highly vulnerable to small adversarial perturbations in their inputs. While a wide variety of recent adversarial training methods have been effective at improving robustness to perturbed inputs (robust accuracy), this benefit often comes at the cost of accuracy on benign inputs (standard accuracy), leading to a tradeoff between these often competing objectives. Complicating matters further, recent empirical evidence suggests that a variety of other factors (size and quality of training data, model size, etc.) affect this tradeoff in somewhat surprising ways. In this paper we provide a precise and comprehensive understanding of the role of adversarial training in the context of linear regression with Gaussian features. In particular, we characterize the fundamental tradeoff between the accuracies achievable by any algorithm, regardless of computational power or size of the training data. Furthermore, we precisely characterize the standard/robust accuracy and the corresponding tradeoff achieved by a contemporary minimax adversarial training approach in a high-dimensional regime where the number of data points and the parameters of the model grow in proportion to each other. Our theory for adversarial training algorithms also facilitates the rigorous study of how a variety of factors (size and quality of training data, model overparametrization, etc.) affect the tradeoff between these two competing accuracies.
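To make the minimax objective concrete, here is a minimal sketch of adversarial training for linear regression under ℓ2-bounded input perturbations. It relies on the standard closed form max over ||δ||₂ ≤ ε of (y − ⟨x + δ, θ⟩)² = (|y − ⟨x, θ⟩| + ε‖θ‖₂)²; the function name and the specific data are illustrative, not taken from the paper.

```python
import numpy as np

def adversarial_loss(theta, X, y, eps):
    """Worst-case (adversarial) squared loss for linear regression.

    For each sample, the inner maximization over perturbations delta with
    ||delta||_2 <= eps of (y - <x + delta, theta>)^2 has the closed form
    (|y - <x, theta>| + eps * ||theta||_2)^2, so no inner loop is needed.
    """
    residual = np.abs(y - X @ theta)                     # |y - <x, theta>| per sample
    return np.mean((residual + eps * np.linalg.norm(theta)) ** 2)

# Illustrative usage: eps = 0 recovers the standard mean-squared error,
# and any eps > 0 can only increase the loss.
theta = np.array([1.0, -2.0])
X = np.array([[1.0, 0.5], [0.2, 1.0]])
y = np.array([0.3, -1.0])
standard_mse = np.mean((y - X @ theta) ** 2)
robust_mse = adversarial_loss(theta, X, y, eps=0.1)
```

Minimizing `adversarial_loss` over `theta` (e.g. with gradient descent) is the minimax adversarial training procedure in this setting; the ε‖θ‖₂ term is what drives the standard/robust accuracy tradeoff the abstract describes.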

