JMLR: Workshop and Conference Proceedings

Learning Diffusion using Hyperparameters



Abstract

In this paper we advocate a hyperparametric approach to learning diffusion in the independent cascade (IC) model. The sample complexity of this model is a function of the number of edges in the network, so learning becomes infeasible when the network is large. We study a natural restriction of the hypothesis class that uses additional available information to dramatically reduce the sample complexity of the learning process. In particular, we assume that diffusion probabilities can be described as a function of a global hyperparameter and the features of the individuals in the network. One of the main challenges with this approach is that training the model reduces to optimizing a non-convex objective. Despite this obstacle, we shrink the best-known sample complexity bound for learning IC by a factor of |E|/d, where |E| is the number of edges in the graph and d is the dimension of the hyperparameter. We show that, under mild assumptions about the distribution generating the samples, one can provably train a model with low generalization error. Finally, we use large-scale diffusion data from Facebook to show that a hyperparametric model using approximately 20 features per node achieves remarkably high accuracy.
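To make the modeling idea concrete, the following is a minimal sketch (not the authors' implementation) of an independent-cascade simulation in which each edge's diffusion probability is not a free parameter but a function of a global hyperparameter vector `theta` and a per-edge feature vector, here via a logistic link. The graph, feature construction, and the sigmoid parameterization are illustrative assumptions; the paper only states that probabilities are some function of the hyperparameter and node features.

```python
import math
import random


def sigmoid(z):
    """Logistic function with a guard against overflow for very negative z."""
    if z < -60:
        return 0.0
    return 1.0 / (1.0 + math.exp(-z))


def edge_prob(theta, features):
    """Diffusion probability of an edge as a function of the global
    hyperparameter theta and the (hypothetical) edge feature vector.
    Only len(theta) = d numbers are learned, instead of one per edge."""
    return sigmoid(sum(t * f for t, f in zip(theta, features)))


def simulate_ic(graph, feats, theta, seeds, rng):
    """One independent-cascade run.

    graph: dict mapping node -> list of out-neighbors
    feats: dict mapping (u, v) -> feature vector for that edge
    theta: global hyperparameter vector shared by all edges
    seeds: initially active nodes
    rng:   random.Random instance, so runs are reproducible
    """
    active = set(seeds)
    frontier = list(seeds)
    while frontier:
        nxt = []
        for u in frontier:
            for v in graph.get(u, []):
                # Each newly active node gets one chance to activate
                # each inactive neighbor, with probability p(theta, feats).
                if v not in active and rng.random() < edge_prob(theta, feats[(u, v)]):
                    active.add(v)
                    nxt.append(v)
        frontier = nxt
    return active


if __name__ == "__main__":
    # Toy 3-node path 0 -> 1 -> 2 with 1-dimensional edge features.
    graph = {0: [1], 1: [2], 2: []}
    feats = {(0, 1): [10.0], (1, 2): [10.0]}
    rng = random.Random(0)
    print(simulate_ic(graph, feats, [100.0], {0}, rng))
```

Training in this setting would mean fitting `theta` (a single d-dimensional vector) to observed cascades, e.g. by maximizing the cascade likelihood; as the abstract notes, that objective is non-convex in `theta`, which is the main technical obstacle the paper addresses.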
