JMLR: Workshop and Conference Proceedings

Dropout Inference in Bayesian Neural Networks with Alpha-divergences


Abstract

To obtain uncertainty estimates with real-world Bayesian deep learning models, practical inference approximations are needed. Dropout variational inference (VI), for example, has been used for machine vision and medical applications, but VI can severely underestimate model uncertainty. Alpha-divergences are alternative divergences to VI's KL objective, and are able to avoid VI's uncertainty underestimation. But they are hard to use in practice: existing techniques can only use Gaussian approximating distributions and require existing models to be changed radically, and are thus of limited use for practitioners. We propose a re-parametrisation of the alpha-divergence objectives, deriving a simple inference technique which, together with dropout, can be easily implemented with existing models by simply changing the model's loss. We demonstrate improved uncertainty estimates and accuracy compared to VI in dropout networks. We study our model's epistemic uncertainty far away from the data using adversarial images, showing that these can be distinguished from non-adversarial images by examining our model's uncertainty.
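
The re-parametrised objective described in the abstract can be summarised as follows; this is a reconstruction from the abstract's description, with notation of our own rather than copied from the paper:

\[
\hat{\mathcal{L}}_{\alpha} \;=\; -\frac{1}{\alpha} \sum_{n=1}^{N} \log \frac{1}{K} \sum_{k=1}^{K} \exp\!\Big( -\alpha \, \ell\big(y_n, f^{\hat{\omega}_k}(x_n)\big) \Big) \;+\; \text{L2 regularisation},
\]

where \(\hat{\omega}_1, \dots, \hat{\omega}_K\) are weight realisations drawn via \(K\) stochastic dropout forward passes and \(\ell\) is the per-example negative log-likelihood. As \(\alpha \to 0\), the log-mean-exp tends to the plain mean over the \(K\) passes, recovering the standard dropout-VI loss.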
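A minimal PyTorch sketch of this loss, together with the Monte Carlo uncertainty score one could threshold to flag adversarial inputs, is given below. It assumes a classifier trained with cross-entropy; the names (alpha_dropout_loss, predictive_entropy, k, alpha) are illustrative and not taken from the authors' code.

```python
# Sketch of the alpha-divergence dropout loss and an MC-dropout
# uncertainty score, under the assumptions stated above.
import math
import torch
import torch.nn.functional as F

def alpha_dropout_loss(model, x, y, k=10, alpha=0.5):
    """-(1/alpha) * log-mean-exp_k[-alpha * nll_k], averaged over the batch.

    L2 regularisation is assumed to be handled by the optimiser's weight
    decay. As alpha -> 0 this recovers the standard dropout-VI objective.
    """
    model.train()  # keep dropout active: each pass samples new masks
    # Per-pass negative log-likelihoods, shape (k, batch).
    nlls = torch.stack(
        [F.cross_entropy(model(x), y, reduction="none") for _ in range(k)]
    )
    log_mean_exp = torch.logsumexp(-alpha * nlls, dim=0) - math.log(k)
    return (-log_mean_exp / alpha).mean()

@torch.no_grad()
def predictive_entropy(model, x, k=50):
    """Entropy of the mean softmax over k stochastic dropout passes.

    Higher values indicate higher predictive uncertainty; thresholding
    this score is one way to separate adversarial from non-adversarial
    inputs (the exact detection protocol here is an assumption).
    """
    model.train()  # MC dropout: dropout stays on at test time
    probs = torch.stack(
        [F.softmax(model(x), dim=-1) for _ in range(k)]
    ).mean(dim=0)
    return -(probs * probs.clamp_min(1e-12).log()).sum(dim=-1)
```

During training, alpha_dropout_loss simply replaces the usual cross-entropy term in the training loop, which matches the abstract's claim that the method only requires changing the model's loss.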
