Computer Methods in Applied Mechanics and Engineering

A non-cooperative meta-modeling game for automated third-party calibrating, validating and falsifying constitutive laws with parallelized adversarial attacks

Abstract

The evaluation of constitutive models, especially for high-risk and high-regret engineering applications, requires efficient and rigorous third-party calibration, validation and falsification. While there have been numerous efforts to develop paradigms and standard procedures for validating models, difficulties may arise from the sequential, manual, and often biased nature of the commonly adopted calibration and validation processes, thus slowing down data collection, hampering progress towards discovering new physics, increasing expenses, and possibly leading to misinterpretations of the credibility and application ranges of proposed models. This work introduces concepts from game theory and machine learning techniques to overcome many of these existing difficulties. We introduce an automated meta-modeling game in which two competing AI agents systematically generate experimental data to calibrate a given constitutive model and to probe its weaknesses, such that both the experiment design and the model robustness can be improved through competition. The two agents automatically search for the Nash equilibrium of the meta-modeling game in an adversarial reinforcement learning framework without human intervention. In particular, a protagonist agent seeks the most effective ways to generate data for model calibration, while an adversary agent tries to find the most devastating test scenarios that expose the weaknesses of the constitutive model calibrated by the protagonist. By capturing all possible design options of the laboratory experiments in a single decision tree, we recast the design of experiments as a game of combinatorial moves that can be resolved through deep reinforcement learning by the two competing players.
Our adversarial framework emulates idealized scientific collaborations and competitions among researchers to achieve a better understanding of the application range of the learned material laws and prevent misinterpretations caused by conventional AI-based third-party validation. Numerical examples are given to demonstrate the wide applicability of the proposed meta-modeling game with adversarial attacks on both human-crafted constitutive models and machine learning models. (C) 2020 Elsevier B.V. All rights reserved.
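The protagonist/adversary interaction described in the abstract can be illustrated with a deliberately stripped-down sketch. Here the "constitutive model" is a toy linear fit to a nonlinear true material response, the experiment decision tree is reduced to a discrete menu of loading levels, and the deep-reinforcement-learning search for the Nash equilibrium is replaced by exhaustive best-response iteration. All names (`calibrate`, `CANDIDATE_TESTS`, etc.) are hypothetical and not from the paper; this only shows the min-max structure of the game, not the authors' implementation.

```python
import numpy as np
from itertools import combinations

# Toy "material": true response y = a*x + b*x^2, unknown to both agents.
TRUE_A, TRUE_B = 2.0, 0.5

def true_response(x):
    return TRUE_A * x + TRUE_B * x ** 2

def calibrate(train_xs):
    """Protagonist's model: fit a linear law y = a*x to the chosen tests
    (a deliberately imperfect constitutive model)."""
    xs = np.array(train_xs, dtype=float)
    ys = true_response(xs)
    a = np.sum(xs * ys) / np.sum(xs ** 2)  # least-squares slope
    return lambda x, a=a: a * x

def model_error(model, test_x):
    return abs(model(test_x) - true_response(test_x))

# Flattened "decision tree" of experiments: discrete loading levels.
CANDIDATE_TESTS = [0.5, 1.0, 2.0, 4.0, 8.0]

# Best-response iteration toward a pure-strategy Nash equilibrium:
# the protagonist minimizes the worst-case error the adversary can expose.
best_design, best_worst_error = None, float("inf")
for design in combinations(CANDIDATE_TESTS, 2):          # protagonist's move
    model = calibrate(design)
    worst = max(model_error(model, t) for t in CANDIDATE_TESTS)  # adversary's reply
    if worst < best_worst_error:
        best_design, best_worst_error = design, worst

print("robust experiment design:", best_design)
print("worst-case error exposed by adversary:", best_worst_error)
```

In this toy game the protagonist learns to calibrate on the high-loading tests, because the adversary would otherwise expose the linear model's failure there; the paper's framework plays the same min-max game, but over realistic experiment trees and with deep reinforcement learning in place of enumeration.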
