Nuclear Science and Engineering: The Journal of the American Nuclear Society

Quantification of Deep Neural Network Prediction Uncertainties for VVUQ of Machine Learning Models



Abstract

Recent performance breakthroughs in artificial intelligence (AI) and machine learning (ML), especially advances in deep learning, the availability of powerful and easy-to-use ML libraries (e.g., scikit-learn, TensorFlow, PyTorch), and increasing computational power have led to unprecedented interest in AI/ML among nuclear engineers. For physics-based computational models, verification, validation, and uncertainty quantification (VVUQ) processes have been widely investigated, and many methodologies have been developed. However, VVUQ of ML models has received comparatively little study, especially in nuclear engineering. This work focuses on uncertainty quantification (UQ) of ML models as a preliminary step of ML VVUQ, more specifically of Deep Neural Networks (DNNs), because they are the most widely used supervised ML algorithm for both regression and classification tasks. This work aims to quantify the prediction or approximation uncertainties of DNNs when they are used as surrogate models for expensive physical models. Three techniques for UQ of DNNs are compared, namely, Monte Carlo Dropout (MCD), Deep Ensembles (DE), and Bayesian Neural Networks (BNNs). Two nuclear engineering examples are used to benchmark these methods: (1) time-dependent fission gas release data using the Bison code and (2) void fraction simulation based on the Boiling Water Reactor Full-size Fine-Mesh Bundle Tests (BFBT) benchmark using the TRACE code. It is found that the three methods typically require different DNN architectures and hyperparameters to optimize their performance. The UQ results also depend on the amount of training data available and on the nature of the data. Overall, all three methods can provide reasonable estimates of the approximation uncertainties. The uncertainties are generally smaller when the mean predictions are close to the test data, while the BNN method usually produces larger uncertainties than MCD and DE.
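To make the first of the three techniques concrete, the sketch below illustrates the core idea of Monte Carlo Dropout: dropout is kept active at prediction time, and the spread of many stochastic forward passes is read as an epistemic uncertainty estimate. This is a minimal NumPy illustration with a toy one-hidden-layer network and arbitrary fixed weights standing in for a trained model; it is not the paper's implementation, and all names and values here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "trained" network: 1 input -> 32 hidden units -> 1 output.
# The weight values are arbitrary placeholders for a fitted surrogate model.
W1 = rng.normal(size=(1, 32))
b1 = np.zeros(32)
W2 = rng.normal(size=(32, 1))
b2 = np.zeros(1)

def forward(x, p_drop=0.2):
    """One stochastic forward pass with dropout kept ON at inference time."""
    h = np.maximum(x @ W1 + b1, 0.0)        # ReLU hidden layer
    mask = rng.random(h.shape) > p_drop     # Bernoulli dropout mask
    h = h * mask / (1.0 - p_drop)           # inverted-dropout rescaling
    return h @ W2 + b2

def mc_dropout_predict(x, n_samples=200):
    """Mean prediction and per-output std across n stochastic passes."""
    preds = np.stack([forward(x) for _ in range(n_samples)])
    return preds.mean(axis=0), preds.std(axis=0)

x = np.array([[0.5]])
mean, std = mc_dropout_predict(x)
# `mean` is the MC-averaged prediction; `std` is the MCD uncertainty estimate.
```

Deep Ensembles follow the same recipe but replace the dropout masks with independently trained networks, and BNNs replace the point weights with posterior samples; in all three cases the prediction-time statistics are computed the same way as in `mc_dropout_predict`.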
