International Conference on Machine Learning

Re-revisiting Learning on Hypergraphs: Confidence Interval and Subgradient Method

Abstract

We revisit semi-supervised learning on hypergraphs. As in previous approaches, our method uses a convex program whose objective function is not everywhere differentiable. We exploit the non-uniqueness of the optimal solutions, and consider confidence intervals which give the exact ranges that unlabeled vertices take in any optimal solution. Moreover, we give a much simpler approach for solving the convex program, based on the subgradient method. Our experiments on real-world datasets confirm that our confidence interval approach on hypergraphs outperforms existing methods, and that our subgradient method gives faster running times when the number of vertices is much larger than the number of edges.
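The subgradient step the abstract refers to is easy to illustrate. Below is a minimal sketch of a projected subgradient method for a standard hypergraph regularizer of the form sum_e w_e (max_{u in e} f_u - min_{v in e} f_v)^2 with labeled vertices held fixed. The objective form, the function name hypergraph_ssl_subgradient, and the 1/sqrt(t) step-size schedule are illustrative assumptions, not the authors' exact formulation.

import numpy as np

def hypergraph_ssl_subgradient(n, edges, weights, labels, steps=500, lr=0.1):
    # A sketch of a projected subgradient method, not the paper's code.
    # Minimizes  sum_e w_e * (max_{u in e} f[u] - min_{v in e} f[v])**2
    # subject to f[v] = labels[v] for every labeled vertex v.
    #   n       -- number of vertices
    #   edges   -- list of hyperedges, each a list of vertex indices
    #   weights -- edge weights, same length as edges
    #   labels  -- dict {vertex: +1.0 or -1.0} for the labeled seed set
    f = np.zeros(n)
    for v, y in labels.items():
        f[v] = y
    for t in range(1, steps + 1):
        g = np.zeros(n)
        for e, w in zip(edges, weights):
            e = np.asarray(e)
            u_max = e[np.argmax(f[e])]  # a vertex attaining the max on this edge
            u_min = e[np.argmin(f[e])]  # a vertex attaining the min on this edge
            d = f[u_max] - f[u_min]
            # One valid subgradient of w*(max - min)^2 pushes the extremes together.
            g[u_max] += 2.0 * w * d
            g[u_min] -= 2.0 * w * d
        f -= (lr / np.sqrt(t)) * g      # diminishing step size, standard for subgradient methods
        for v, y in labels.items():     # project back onto the label constraints
            f[v] = y
    return f

# Toy example: five vertices, three unit-weight hyperedges, two labeled seeds.
edges = [[0, 1, 2], [2, 3, 4], [1, 3]]
f = hypergraph_ssl_subgradient(5, edges, [1.0, 1.0, 1.0], {0: 1.0, 4: -1.0})
print(np.sign(f))   # sign of f gives the predicted label of each vertex

Note that because the objective is non-differentiable and its optimum need not be unique, different tie-breaking choices for the argmax/argmin can converge to different optimal solutions; this non-uniqueness is exactly what the paper's confidence intervals quantify for each unlabeled vertex.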
