IEEE/ACM International Conference on Automated Software Engineering

NEURODIFF: Scalable Differential Verification of Neural Networks using Fine-Grained Approximation


Abstract

As neural networks make their way into safety-critical systems, where misbehavior can lead to catastrophes, there is growing interest in certifying the equivalence of two structurally similar neural networks - a problem known as differential verification. For example, compression techniques are often used in practice to deploy trained neural networks on computationally- and energy-constrained devices, which raises the question of how faithfully the compressed network mimics the original network. Unfortunately, existing methods either focus on verifying a single network or rely on loose approximations to prove the equivalence of two networks. Due to overly conservative approximation, differential verification lacks scalability in terms of both accuracy and computational cost. To overcome these problems, we propose NEURODIFF, a symbolic and fine-grained approximation technique that drastically increases the accuracy of differential verification on feed-forward ReLU networks while achieving orders-of-magnitude speedups. NEURODIFF makes two key contributions. The first is a set of new convex approximations that more accurately bound the difference of two networks under all possible inputs. The second is the judicious use of symbolic variables to represent neurons whose difference bounds have accumulated significant error. We find that these two techniques are complementary: when combined, their benefit is greater than the sum of their individual benefits. We have evaluated NEURODIFF on a variety of differential verification tasks. Our results show that NEURODIFF is up to 1000X faster and 5X more accurate than the state-of-the-art tool.
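To make the problem concrete, the sketch below shows the naive baseline for differential verification: bound each network's output separately with interval bound propagation, then subtract the two boxes. This illustrates the kind of loose, overly conservative approximation the abstract criticizes, not NEURODIFF's fine-grained technique; the toy networks, function names, and "compression by rounding" are illustrative assumptions only.

```python
import numpy as np

def interval_propagate(weights, biases, lo, hi):
    """Propagate an input box [lo, hi] through a feed-forward ReLU
    network using standard interval bound propagation (IBP)."""
    n_layers = len(weights)
    for i, (W, b) in enumerate(zip(weights, biases)):
        Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)  # split by sign
        new_lo = Wp @ lo + Wn @ hi + b  # sound lower bound on W x + b
        new_hi = Wp @ hi + Wn @ lo + b  # sound upper bound on W x + b
        if i < n_layers - 1:            # ReLU on hidden layers only
            new_lo = np.maximum(new_lo, 0.0)
            new_hi = np.maximum(new_hi, 0.0)
        lo, hi = new_lo, new_hi
    return lo, hi

def naive_diff_bounds(net1, net2, lo, hi):
    """Loose baseline: bound each network independently, then subtract.
    The correlation between f(x) and f'(x) on the *same* input x is
    discarded, so the bounds on f(x) - f'(x) blow up with depth -- the
    over-approximation that differential verifiers try to avoid."""
    lo1, hi1 = interval_propagate(*net1, lo, hi)
    lo2, hi2 = interval_propagate(*net2, lo, hi)
    return lo1 - hi2, hi1 - lo2

# Toy example: an "original" network and a crudely "compressed"
# (weight-rounded) copy, checked over the input box [-1, 1]^2.
rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(4, 2)), rng.normal(size=(1, 4))
b1, b2 = rng.normal(size=4), rng.normal(size=1)
net = ([W1, W2], [b1, b2])
net_q = ([np.round(W1, 1), np.round(W2, 1)],
         [np.round(b1, 1), np.round(b2, 1)])

d_lo, d_hi = naive_diff_bounds(net, net_q,
                               np.array([-1.0, -1.0]),
                               np.array([1.0, 1.0]))
print(f"f - f' lies in [{d_lo[0]:.3f}, {d_hi[0]:.3f}]")
```

Per the abstract, NEURODIFF instead bounds the difference of the two networks directly with tighter convex approximations, introducing symbolic variables where the difference bounds accumulate too much error; the sketch above only shows why combining two independent single-network analyses is too coarse.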