
Latent structured perceptrons for large-scale learning with hidden information


Abstract

Many real-world data mining problems contain hidden information (e.g., unobservable latent dependencies). We propose a perceptron-style method, the latent structured perceptron, for fast discriminative learning of structured classification with hidden information. We also give a theoretical analysis and demonstrate the good convergence properties of the proposed method. Our method extends the perceptron algorithm to learning tasks with hidden information, which can hardly be captured by traditional models. It relies on Viterbi decoding over latent variables, combined with simple additive updates. We perform experiments on one synthetic data set and two real-world structured classification tasks. Compared to conventional non-latent models (e.g., conditional random fields, structured perceptrons), our method is more accurate on the real-world tasks. Compared to existing heavyweight probabilistic models with latent variables (e.g., latent conditional random fields), our method lowers the training cost significantly (almost one order of magnitude faster) while achieving comparable or even superior classification accuracy. In addition, experiments demonstrate that the proposed method scales well to large problems.
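To make the learning procedure described above concrete, the following is a minimal sketch of one training pass of a latent structured perceptron. The helpers decode_with_latent (a Viterbi-style search over labels and latent assignments) and feature_vector (the joint feature map) are hypothetical names introduced here for illustration; they are not from the paper, and the paper's exact variant (e.g., with parameter averaging) may differ.

    def latent_perceptron_epoch(examples, weights, decode_with_latent,
                                feature_vector, lr=1.0):
        """One pass of perceptron-style additive updates with latent variables.

        examples:            iterable of (x, y_gold) pairs.
        weights:             parameter vector (e.g., a numpy array).
        decode_with_latent:  decode_with_latent(weights, x, y=None) returns the
                             highest-scoring (y, h) pair; when y is given, the
                             search is restricted to latent assignments h
                             compatible with that gold y.
        feature_vector:      feature_vector(x, y, h) returns the joint feature
                             map phi(x, y, h) as a vector.
        """
        for x, y_gold in examples:
            # Unconstrained Viterbi decoding over labels and latent variables.
            y_pred, h_pred = decode_with_latent(weights, x)
            if y_pred != y_gold:
                # Constrained decoding: best latent assignment for the gold labels.
                _, h_gold = decode_with_latent(weights, x, y=y_gold)
                # Simple additive update toward the gold structure.
                weights = weights + lr * (feature_vector(x, y_gold, h_gold)
                                          - feature_vector(x, y_pred, h_pred))
        return weights

Under these assumptions, decoding and the additive update are the only per-example operations, so the per-iteration cost stays close to that of a standard structured perceptron, which is consistent with the training-cost reduction over latent CRFs reported in the abstract.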

Bibliographic details

  • Authors

    Sun X; Matsuzaki T; Li W;

  • Author affiliation
  • Year: 2013
  • Pages
  • Format: PDF
  • Language: eng
  • CLC classification

