
Reports on Machine Learning from University of Castilla La Mancha Provide New Insights (Lyapunov Stability for Detecting Adversarial Image Examples)



Abstract

By a News Reporter-Staff News Editor at Robotics & Machine Learning Daily News – Current study results on Machine Learning have been published. According to news reporting out of Ciudad Real, Spain, by NewsRx editors, the research stated, "Adversarial examples are a challenging threat to machine learning models in terms of trustworthiness and security. Using small perturbations to manipulate input data, it is possible to drive the decision of a deep learning model into failure, which can be catastrophic in applications like autonomous driving, security-surveillance or other critical systems that increasingly rely on machine learning technologies."
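
The quoted abstract describes small-perturbation attacks in general terms. As a rough illustration only, and not the paper's Lyapunov-stability detector, the sketch below shows a standard FGSM-style perturbation in PyTorch; the toy linear classifier, input size, and epsilon value are assumptions made for the example.

import torch
import torch.nn as nn

def fgsm_perturb(model, x, label, epsilon=0.03):
    # Compute the loss gradient w.r.t. the input and take a small signed step
    # that increases the loss, i.e. pushes the model toward a wrong decision.
    x = x.clone().detach().requires_grad_(True)
    loss = nn.CrossEntropyLoss()(model(x), label)
    loss.backward()
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

# Toy usage with a hypothetical linear classifier on a random 32x32 "image".
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
x = torch.rand(1, 3, 32, 32)
label = torch.tensor([3])
x_adv = fgsm_perturb(model, x, label)
print((x_adv - x).abs().max().item())  # perturbation magnitude stays within epsilon

The point of the sketch is that the perturbation is bounded by epsilon per pixel, so the adversarial image can be visually indistinguishable from the original while still flipping the model's prediction, which is the threat model the study addresses.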
