
Preventing Overfitting in Deep Learning Using Differential Privacy


Abstract

The use of Deep Neural Network based systems in the real world is growing. They have achieved state-of-the-art performance on many image, speech, and text datasets, and have proven to be powerful systems capable of learning detailed relationships and abstractions from data. This capability is a double-edged sword: it also makes such systems vulnerable to learning the noise in the training set, which degrades performance on unseen data. This is known as the problem of overfitting, or poor generalization. In a practical setting, analysts typically have limited data with which to build models that must generalize to unseen data. In this work, we explore the use of a differential-privacy based approach to improve generalization in Deep Neural Networks.
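The abstract names a differential-privacy based approach but does not specify its algorithm. The mechanism most commonly used to train deep networks with differential privacy is DP-SGD style training: clip each per-example gradient to a fixed norm, average, and add Gaussian noise before the update. The sketch below is only illustrative of that general technique, not the thesis's actual method; the function name and parameters are hypothetical.

```python
import numpy as np

def dp_gradient_step(params, per_example_grads, lr=0.1,
                     clip_norm=1.0, noise_mult=1.0, rng=None):
    """One DP-SGD-style update: clip each per-example gradient to
    L2 norm `clip_norm`, average, add Gaussian noise, then step."""
    rng = np.random.default_rng(rng)
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale down (never up) so each example's gradient norm <= clip_norm.
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    avg = np.mean(clipped, axis=0)
    # Noise std is proportional to the clipping bound (the sensitivity)
    # and shrinks with batch size.
    noise = rng.normal(0.0, noise_mult * clip_norm / len(per_example_grads),
                       size=avg.shape)
    return params - lr * (avg + noise)
```

Because clipping caps each example's influence and the noise masks individual contributions, the optimizer cannot fit any single training point too closely, which is the intuition behind using differential privacy as a regularizer against overfitting.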

Bibliographic details

  • Author affiliation: State University of New York at Buffalo
  • Degree-granting institution: State University of New York at Buffalo
  • Subjects: Computer science; Information science; Computer engineering
  • Degree: M.S.
  • Year: 2017
  • Pages: 51 p.
  • Format: PDF
  • Language: English
  • Indexed: 2022-08-17 11:38:55
