AIAA SciTech Forum and Exposition
Decoding the Black Box: Extracting Explainable Decision Boundary Approximations from Machine Learning Models for Real Time Safety Assurance of the National Airspace
Although machine learning has made unprecedented advances in recent years, its application in many domains is hampered by the lack of explainability behind a trained model's decisions. In aviation safety assurance, for example, it is of interest not only to determine that a potentially unsafe situation (a degraded state) may arise in the future, but also to understand how and why it may arise (i.e., what its precursors are). In this paper, we present a method for converting a "black box" neural network model into an approximate, explainable "white box" model that yields insight into the network's decision-making criteria. The method samples the neural network's decision boundary by perturbing inputs and approximates the boundary with a set of local hyperplanes whose coefficients are interpretable. We empirically demonstrate that both the neural network model (specifically, a recurrent long short-term memory network) and the hyperplane-based model achieve solid, comparable performance in predicting a specific type of degraded state well in advance of its occurrence. We also show that the hyperplane-based model reveals an intuitive precursor to the degraded state. In the future, our method can potentially be applied to analyze and approximate decision boundaries for other problems and/or other neural network architectures.
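The abstract does not include implementation details, but the perturb-and-fit idea it describes can be illustrated with a minimal sketch: perturb inputs around a query point, label the perturbations with the black-box model, and fit a local linear surrogate whose coefficients define an interpretable hyperplane. The sketch below is a rough analogue, not the authors' implementation; the names `black_box`, `local_hyperplane`, `sigma`, and `n_samples` are all hypothetical, and a toy classifier stands in for the trained LSTM.

```python
# Hypothetical sketch of extracting a local hyperplane approximation of a
# black-box model's decision boundary, assuming a binary classifier and a
# LIME-style perturb-then-fit surrogate. Not the paper's actual method.
import numpy as np
from sklearn.linear_model import LogisticRegression


def black_box(x: np.ndarray) -> np.ndarray:
    """Stand-in for a trained network's binary 'degraded state' prediction.

    Toy nonlinear boundary: points outside the unit circle are 'degraded'.
    """
    return (np.sum(x**2, axis=1) > 1.0).astype(int)


def local_hyperplane(x0: np.ndarray, sigma: float = 0.3,
                     n_samples: int = 2000, seed: int = 0):
    """Fit a hyperplane w.x + b = 0 approximating the boundary near x0."""
    rng = np.random.default_rng(seed)
    # Sample perturbed inputs around the query point and label them with
    # the black-box model.
    X = x0 + sigma * rng.standard_normal((n_samples, x0.size))
    y = black_box(X)
    # A linear surrogate fit to the local labels; its weight vector is the
    # (interpretable) normal of the local hyperplane.
    clf = LogisticRegression().fit(X, y)
    return clf.coef_.ravel(), clf.intercept_[0]


if __name__ == "__main__":
    x0 = np.array([0.7, 0.7])       # query point near the true boundary
    w, b = local_hyperplane(x0)
    print("hyperplane normal:", w)  # coefficient magnitudes rank feature influence
    print("intercept:", b)
```

Under these assumptions, the hyperplane coefficients play the role the abstract assigns to precursors: each feature's weight indicates how strongly, and in which direction, it pushes the model toward the degraded-state decision near the sampled operating point.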