Journal: JMLR: Workshop and Conference Proceedings

Adversarial Variational Optimization of Non-Differentiable Simulators



Abstract

Complex computer simulators are increasingly used across fields of science as generative models tying parameters of an underlying theory to experimental observations. Inference in this setup is often difficult, as simulators rarely admit a tractable density or likelihood function. We introduce Adversarial Variational Optimization (AVO), a likelihood-free inference algorithm for fitting a non-differentiable generative model incorporating ideas from generative adversarial networks, variational optimization and empirical Bayes. We adapt the training procedure of generative adversarial networks by replacing the differentiable generative network with a domain-specific simulator. We solve the resulting non-differentiable minimax problem by minimizing variational upper bounds of the two adversarial objectives. Effectively, the procedure results in learning a proposal distribution over simulator parameters, such that the JS divergence between the marginal distribution of the synthetic data and the empirical distribution of observed data is minimized. We evaluate and compare the method with simulators producing both discrete and continuous data.
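The key step the abstract refers to is the variational upper bound. As a minimal sketch in generic notation (the symbols L, theta, q, psi, and U below are illustrative, not taken from the paper): for a loss L(theta) that can only be evaluated by running the simulator, the minimum over theta is bounded by an expectation under a proposal distribution q(theta | psi), and that bound is differentiable in psi through the score function:

\[
\min_{\theta} L(\theta) \;\le\; U(\psi) = \mathbb{E}_{\theta \sim q(\theta \mid \psi)}\big[L(\theta)\big],
\qquad
\nabla_{\psi} U(\psi) = \mathbb{E}_{\theta \sim q(\theta \mid \psi)}\big[L(\theta)\,\nabla_{\psi}\log q(\theta \mid \psi)\big].
\]

Estimating this gradient only requires sampling theta from the proposal, running the simulator, and evaluating the gradient of log q, so no gradients ever have to flow through the simulator itself.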
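To make the adversarial loop concrete, here is a self-contained toy sketch in the spirit of the abstract rather than the paper's implementation: a black-box simulator plays the role of the generator, a one-dimensional logistic critic plays the discriminator, and a Gaussian proposal over the simulator parameter is updated with the score-function gradient above. All names (simulator, psi, the learning rates, and so on) are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical non-differentiable simulator: returns samples given a parameter theta.
def simulator(theta, n):
    return theta + rng.standard_normal(n)      # exposes samples only, no gradients

x_obs = simulator(3.0, 1000)                   # "observed" data from an unknown theta

# Proposal q(theta | psi) = Normal(mu, sigma^2) with psi = [mu, log_sigma].
psi = np.array([0.0, 0.0])

# Minimal 1-D logistic critic d(x) = sigmoid(w * x + b).
w, b = 0.0, 0.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(2000):
    # Sample simulator parameters from the proposal, then synthetic data.
    sigma = np.exp(psi[1])
    thetas = psi[0] + sigma * rng.standard_normal(8)
    x_fake = np.concatenate([simulator(t, 64) for t in thetas])

    # Critic update: ordinary gradient step on the usual GAN cross-entropy loss.
    p_real, p_fake = sigmoid(w * x_obs + b), sigmoid(w * x_fake + b)
    gw = -np.mean((1.0 - p_real) * x_obs) + np.mean(p_fake * x_fake)
    gb = -np.mean(1.0 - p_real) + np.mean(p_fake)
    w -= 0.05 * gw
    b -= 0.05 * gb

    # Proposal update: score-function estimate of grad U(psi) = grad E_q[L(theta)].
    grad = np.zeros(2)
    for t in thetas:
        x_t = simulator(t, 64)
        L = -np.mean(np.log(sigmoid(w * x_t + b) + 1e-8))    # "fool the critic" loss
        z = (t - psi[0]) / sigma
        grad += L * np.array([z / sigma, z * z - 1.0])        # grad_psi log q(t | psi)
    psi -= 0.02 * grad / len(thetas)

print("proposal after training: mu=%.2f, sigma=%.2f" % (psi[0], np.exp(psi[1])))

The intent is that the proposal mean drifts toward the value used to generate the observed data (3.0 here); a variance-reduction baseline for the score-function estimator is omitted for brevity, so this toy converges noisily.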
