International Symposium on Neural Networks

Uncertainty Estimation via Stochastic Batch Normalization



Abstract

In this work, we investigate the Batch Normalization technique and propose a probabilistic interpretation of it. We introduce a probabilistic model and show that Batch Normalization maximizes a lower bound on its marginal log-likelihood. Based on this probabilistic model, we then design an algorithm that behaves consistently during training and testing. However, exact inference under this model is computationally inefficient. To reduce memory and computational cost, we propose Stochastic Batch Normalization, an efficient approximation of the proper inference procedure. This method provides a scalable uncertainty estimation technique. We demonstrate the performance of Stochastic Batch Normalization on popular architectures (including deep convolutional architectures: VGG-like networks and ResNets) on the MNIST and CIFAR-10 datasets.
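To illustrate the general idea behind such stochastic-normalization uncertainty estimates, here is a minimal NumPy sketch. It is a hypothetical illustration, not the paper's exact algorithm: at test time, instead of normalizing with fixed running statistics, batch statistics are sampled from an assumed distribution fitted to those observed during training, and the spread over several stochastic forward passes serves as an uncertainty estimate. All parameter values below are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_batch_norm(x, mean_mu, mean_sigma, var_mu, var_sigma,
                          gamma=1.0, beta=0.0, eps=1e-5, rng=rng):
    """Normalize x with batch statistics drawn at random, so that
    repeated calls give stochastic outputs (hypothetical sketch)."""
    mean = rng.normal(mean_mu, mean_sigma)
    var = abs(rng.normal(var_mu, var_sigma))  # keep variance positive
    return gamma * (x - mean) / np.sqrt(var + eps) + beta

# Toy input and statistics summarizing a (hypothetical) training run.
x = np.array([0.5, 1.5, 2.5])
samples = np.stack([
    stochastic_batch_norm(x, mean_mu=1.0, mean_sigma=0.1,
                          var_mu=0.5, var_sigma=0.05)
    for _ in range(100)
])

prediction = samples.mean(axis=0)   # averaged stochastic prediction
uncertainty = samples.std(axis=0)   # spread across passes = uncertainty
```

In a real network the sampled statistics would be per-channel and per-layer, and the multiple stochastic passes play the same role as Monte Carlo sampling in other Bayesian approximations.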
