Published in: 電子情報通信学会技術研究報告 (IEICE Technical Report)

Squared-loss Mutual Information Regularization

Abstract

The information maximization principle is a useful alternative to the low-density separation principle: it prefers probabilistic classifiers that maximize the mutual information (MI) between data and labels. In this paper, we propose an approach to semi-supervised learning called squared-loss mutual information (SMI) regularization, which replaces MI with a novel information measure, SMI. SMI regularization is the first framework to offer all four of the following properties: an analytical solution, out-of-sample classification, multi-class classification, and probabilistic outputs. As an information-theoretic framework, it is directly related to manifold regularization and yields learning algorithms with data-dependent risk bounds. Experiments demonstrate that SMI regularization compares favorably with existing information-theoretic regularization approaches.
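For reference, SMI is commonly defined in this line of work as half the Pearson divergence between the joint distribution and the product of its marginals, \(\mathrm{SMI} = \tfrac{1}{2} \iint p(x)\,p(y)\,\bigl(\tfrac{p(x,y)}{p(x)\,p(y)} - 1\bigr)^2 \,\mathrm{d}x\,\mathrm{d}y\), which reduces to a finite sum for discrete variables. The following is a minimal sketch under that definition (the function name is hypothetical, not from the paper):

```python
import numpy as np

def squared_loss_mi(p_xy):
    """SMI: half the Pearson divergence between the joint p(x, y)
    and the product of its marginals p(x) p(y) (discrete case)."""
    p_x = p_xy.sum(axis=1, keepdims=True)   # marginal p(x), column vector
    p_y = p_xy.sum(axis=0, keepdims=True)   # marginal p(y), row vector
    p_ind = p_x * p_y                       # product distribution p(x) p(y)
    ratio = p_xy / p_ind                    # density ratio p(x, y) / (p(x) p(y))
    return 0.5 * np.sum(p_ind * (ratio - 1.0) ** 2)

# Independent variables give SMI = 0; a deterministic pairing gives SMI > 0.
print(squared_loss_mi(np.full((2, 2), 0.25)))  # 0.0
print(squared_loss_mi(np.eye(2) / 2))          # 0.5
```

Unlike ordinary MI, whose KL-based log term rules out closed-form solutions, the squared loss makes the empirical objective quadratic in the model parameters, which is what enables the analytical solution claimed in the abstract.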
