IEEE Network: The Magazine of Computer Communications

When Deep Learning Meets Differential Privacy: Privacy, Security, and More


Abstract

Over the past decade, we have witnessed unprecedented development in deep learning (DL) and its contributions to modern networking systems. Along with its wide adoption, however, come growing concerns over the broad attack surfaces of learning systems and their intrinsic vulnerabilities in privacy, security, robustness, and more. A widely adopted countermeasure, whether to mitigate these threats or to formalize a principled defense, is to introduce a certain level of random perturbation (a.k.a. calibrated artificial noise) at either the training or the prediction phase. Noteworthy examples include effective defenses against model inference attacks and notions of certified robustness. As such, differential privacy (DP), originally established as a privacy-preserving framework for data publishing, has drawn great interest from the learning community. Given a target utility and an acceptable trade-off, DP's formalization of the amount of noise needed has been shown to apply to a broad range of DL vulnerability mitigations. In this article, we present recent representative advances at the intersection of DL and DP, ranging from privacy enhancements for DL systems to security and robustness improvements and other novel extensions. We then discuss ongoing challenges and propose a number of future directions in which DP has great potential to contribute positively to future DL systems.
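The calibrated-noise idea the abstract refers to can be made concrete with the standard DP-SGD recipe (per-example gradient clipping followed by Gaussian noise scaled to the clipped sensitivity). The sketch below is not taken from the article; it is a minimal NumPy illustration, and the function name, parameter names, and default values are assumptions chosen for clarity.

```python
import numpy as np

def dp_noisy_gradient(per_example_grads, clip_norm=1.0,
                      noise_multiplier=1.1, rng=None):
    """DP-SGD-style gradient perturbation (illustrative sketch).

    Each per-example gradient is clipped to L2 norm `clip_norm`, so the
    summed gradient has bounded sensitivity. Gaussian noise with standard
    deviation `noise_multiplier * clip_norm` is then added before averaging,
    which is the calibration that DP analyses (e.g., the moments accountant)
    build on.
    """
    rng = np.random.default_rng() if rng is None else rng
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale down (never up) so every example's contribution is bounded.
        clipped.append(g * min(1.0, clip_norm / max(norm, 1e-12)))
    total = np.sum(clipped, axis=0)
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=total.shape)
    return (total + noise) / len(per_example_grads)
```

With `noise_multiplier=0.0` the function reduces to plain clipped-gradient averaging, which makes the clipping step easy to verify in isolation; the actual privacy guarantee comes from choosing `noise_multiplier` according to the desired (epsilon, delta) budget.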

