Asian Symposium on Programming Languages and Systems

Neural Networks, Secure by Construction: An Exploration of Refinement Types



Abstract

We present StarChild and Lazuli, two libraries which leverage refinement types to verify neural networks, implemented in F* and Liquid Haskell. Refinement types are types augmented, or refined, with assertions about values of that type, e.g. "integers greater than five", which are checked by an SMT solver. Crucially, these assertions are written in the language itself. A user of our library can refine the type of neural networks, e.g. "neural networks which are robust against adversarial attacks", and expect F* to handle the verification of this claim for any specific network, without having to change the representation of the network, or even having to learn about SMT solvers. Our initial experiments indicate that our approach could greatly reduce the burden of verifying neural networks. Unfortunately, they also show that SMT solvers do not scale to the sizes required for neural network verification.
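The abstract's example, "integers greater than five", can be sketched in Liquid Haskell's annotation syntax. This is a minimal illustration of the refinement-type style described above, not code from the StarChild or Lazuli libraries: the type alias `GtFive` and the function `clampAboveFive` are hypothetical. The `{-@ ... @-}` annotations are checked against an SMT solver when Liquid Haskell runs, while plain GHC treats them as ordinary comments.

```haskell
-- Refinement-type sketch in Liquid Haskell annotation syntax.
-- The names GtFive and clampAboveFive are illustrative, not from
-- the paper's libraries. Liquid Haskell would ask the SMT solver
-- to prove that every result of clampAboveFive satisfies v > 5.

{-@ type GtFive = {v:Int | v > 5} @-}

{-@ clampAboveFive :: Int -> GtFive @-}
clampAboveFive :: Int -> Int
clampAboveFive n = if n > 5 then n else 6

main :: IO ()
main = print (clampAboveFive 3)  -- prints 6
```

In the same way, the paper's libraries attach richer refinements (e.g. robustness against adversarial perturbations) to the type of a neural network, and the SMT solver discharges the proof obligation for each concrete network.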
