Venue: JMLR: Workshop and Conference Proceedings

Actively Avoiding Nonsense in Generative Models

Abstract

A generative model may generate utter nonsense when it is fit to maximize the likelihood of observed data. This happens due to "model error," i.e., when the true data-generating distribution does not lie within the class of generative models being learned. To address this, we propose a model of active distribution learning using a binary invalidity oracle that identifies some examples as clearly invalid, together with random positive examples sampled from the true distribution. The goal is to maximize the likelihood of the positive examples subject to the constraint of (almost) never generating examples labeled invalid by the oracle. Guarantees are agnostic, i.e., relative to a class of probability distributions. We first show that proper learning may require exponentially many queries to the invalidity oracle. We then give an improper distribution learning algorithm that uses only polynomially many queries.
