Improving Trust in Deep Neural Networks with Nearest Neighbors

AIAA SciTech Forum and Exposition

Abstract

Deep neural networks are increasingly used for perception and decision-making in UAVs. For example, they can recognize objects from images and decide what actions the vehicle should take. While deep neural networks can perform very well at complex tasks, their decisions may be unintuitive to a human operator. Because of the black-box nature of deep neural networks, when a human disagrees with a prediction it can be unclear whether the system knows something the human does not or whether the system is malfunctioning. This uncertainty is problematic for ensuring safety. As a result, it is important to develop techniques for explaining neural network decisions to support trust and safety. This paper explores a modification to the deep neural network classification layer that produces both a predicted label and an explanation supporting that prediction. Specifically, at test time, we replace the final output layer of the neural network classifier with a k-nearest neighbor (k-NN) classifier. The nearest neighbor classifier produces (1) a predicted label, through voting, and (2) the nearest neighbors involved in the prediction, which represent the most similar examples from the training dataset. Because prediction and explanation derive from the same underlying process, this approach guarantees that the explanations are always relevant to the predictions. We demonstrate the approach on a convolutional neural network for a UAV image classification task. We perform experiments on a forest-trail image dataset and show empirically that the hybrid classifier produces intuitive explanations with no loss of predictive performance compared to the original neural network. We also show how the approach can help identify potential issues in the network and the training process.
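To make the classification-layer swap concrete, here is a minimal sketch, assuming PyTorch and a sequential-style CNN. It is not the authors' implementation: the feature-extraction shortcut, the Euclidean distance metric, the default k = 5, and the `KNNExplainer` helper name are all illustrative assumptions.

```python
# Minimal sketch (assumed PyTorch; not the paper's code) of replacing a trained
# classifier's final layer with a k-NN head that returns both a voted label and
# the supporting training examples.
import numpy as np
import torch
import torch.nn as nn

def penultimate_features(model: nn.Module, x: torch.Tensor) -> np.ndarray:
    """Run the network up to, but not including, its last layer.

    Assumes a sequential-style model whose final child is the output layer.
    """
    trunk = nn.Sequential(*list(model.children())[:-1])
    trunk.eval()
    with torch.no_grad():
        return trunk(x).flatten(start_dim=1).cpu().numpy()

class KNNExplainer:
    """k-NN classification head used in place of the softmax output layer."""

    def __init__(self, train_feats: np.ndarray, train_labels: np.ndarray, k: int = 5):
        self.train_feats = train_feats    # (N, D) penultimate features of the training set
        self.train_labels = train_labels  # (N,) non-negative integer class labels
        self.k = k

    def predict(self, query_feats: np.ndarray):
        """Return (predicted labels, indices of the k nearest training examples)."""
        preds, neighbors = [], []
        for q in query_feats:
            # Euclidean distance from the query to every training feature vector.
            dists = np.linalg.norm(self.train_feats - q, axis=1)
            idx = np.argsort(dists)[: self.k]
            # (1) the predicted label, by majority vote among the k neighbors ...
            preds.append(int(np.bincount(self.train_labels[idx]).argmax()))
            # (2) ... and the neighbors themselves, which serve as the explanation.
            neighbors.append(idx)
        return np.array(preds), neighbors
```

The returned neighbor indices point back into the training set, so an operator can display the most similar training images alongside the predicted label; a visibly mismatched set of neighbors also hints at problems in the training data or learned features, echoing the debugging use described in the abstract.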
