Lifted Online Training of Relational Models with Stochastic Gradient Methods

European Conference on Machine Learning and Knowledge Discovery in Databases (ECML PKDD)

Abstract

Lifted inference approaches have rendered large, previously intractable probabilistic inference problems quickly solvable by employing symmetries to handle whole sets of indistinguishable random variables. Still, in many if not most situations, training relational models will not benefit from lifting: symmetries within models break easily, since variables become correlated by virtue of depending asymmetrically on evidence. An appealing idea for such situations is to train and recombine local models. This breaks long-range dependencies and makes it possible to exploit lifting within and across the local training tasks. Moreover, it naturally paves the way for online training of relational models. Specifically, we develop the first lifted stochastic gradient optimization method with gain vector adaptation, which processes each lifted piece one after the other. On several datasets, the resulting optimizer converges to a solution of the same quality over an order of magnitude faster, simply because, unlike batch training, it starts optimizing long before it has seen the entire mega-example even once.
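The abstract names the ingredients of the method (local pieces, stochastic gradients processed one lifted piece at a time, a gain vector adapted per parameter) but not the exact update rule. The following Python sketch is an illustration only: it assumes a hypothetical interface in which each piece supplies the gradient of its local loss, and it uses a sign-agreement (delta-bar-delta-style) gain update as a stand-in for the paper's adaptation rule.

import numpy as np

def piecewise_sgd_with_gains(pieces, dim, epochs=5, eta0=0.1,
                             gain_up=1.05, gain_down=0.7,
                             gain_min=1e-3, gain_max=1e3):
    """Online training over local pieces with per-parameter gain adaptation.

    `pieces` is a list of callables; each maps the shared weight vector w
    to the gradient of that piece's local loss. Hypothetical interface,
    for illustration only."""
    w = np.zeros(dim)
    gains = np.ones(dim)               # per-parameter gain vector
    prev_grad = np.zeros(dim)
    for _ in range(epochs):
        for piece_grad in pieces:      # process one lifted piece at a time
            g = piece_grad(w)          # gradient of this piece's local loss
            # Grow gains where successive gradients agree in sign, shrink
            # them where they disagree (delta-bar-delta-style rule, used
            # here as a stand-in for the paper's exact gain update).
            agree = np.sign(g) == np.sign(prev_grad)
            gains = np.clip(np.where(agree, gains * gain_up, gains * gain_down),
                            gain_min, gain_max)
            w -= eta0 * gains * g      # stochastic descent step on this piece
            prev_grad = g
    return w

# Toy usage: two "pieces", each the gradient of a quadratic loss
# 0.5 * ||w - t||^2 pulling w toward its own target t.
targets = [np.array([1.0, -2.0]), np.array([3.0, 0.5])]
pieces = [lambda w, t=t: w - t for t in targets]
w_hat = piecewise_sgd_with_gains(pieces, dim=2, epochs=50)
print(w_hat)   # close to the mean of the targets, the joint minimizer

The per-parameter gains grow while successive piece gradients keep pointing the same way and shrink once they start to oscillate, the usual motivation for gain-vector methods. The lifted aspect of the paper, grouping indistinguishable parameters so that one gradient computation covers a whole equivalence class, is abstracted away here into the piece_grad callables.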
