IEEE Transactions on Audio, Speech, and Language Processing

Application of Deep Belief Networks for Natural Language Understanding



Abstract

Applications of Deep Belief Nets (DBN) to various problems have been the subject of a number of recent studies ranging from image classification and speech recognition to audio classification. In this study we apply DBNs to a natural language understanding problem. The recent surge of activity in this area was largely spurred by the development of a greedy layer-wise pretraining method that uses an efficient learning algorithm called Contrastive Divergence (CD). CD allows DBNs to learn a multi-layer generative model from unlabeled data, and the features discovered by this model are then used to initialize a feed-forward neural network which is fine-tuned with backpropagation. We compare a DBN-initialized neural network to three widely used text classification algorithms: Support Vector Machines (SVM), Boosting, and Maximum Entropy (MaxEnt). The plain DBN-based model gives a call-routing classification accuracy that is equal to the best of the other models. However, using additional unlabeled data for DBN pre-training and combining DBN-based learned features with the original features provides significant gains over SVMs, which, in turn, performed better than both MaxEnt and Boosting.
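The Contrastive Divergence pretraining the abstract describes can be illustrated with a minimal sketch: one CD-1 update rule for a single RBM layer, the building block of a DBN. All shapes, hyperparameters, and the toy data below are illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(v0, W, b_vis, b_hid, lr=0.1):
    """One CD-1 update on a batch of binary visible vectors v0 (in place)."""
    # Positive phase: hidden probabilities and a binary sample given the data.
    ph0 = sigmoid(v0 @ W + b_hid)
    h0 = (rng.random(ph0.shape) < ph0).astype(float)
    # Negative phase: a single Gibbs step back to a reconstruction.
    pv1 = sigmoid(h0 @ W.T + b_vis)
    ph1 = sigmoid(pv1 @ W + b_hid)
    # Difference of data and reconstruction statistics approximates the
    # log-likelihood gradient (Contrastive Divergence).
    W += lr * (v0.T @ ph0 - pv1.T @ ph1) / len(v0)
    b_vis += lr * (v0 - pv1).mean(axis=0)
    b_hid += lr * (ph0 - ph1).mean(axis=0)
    return ((v0 - pv1) ** 2).mean()  # reconstruction error, for monitoring

# Toy unlabeled data: 64 binary bag-of-words-style vectors of dimension 20.
data = (rng.random((64, 20)) < 0.3).astype(float)
W = 0.01 * rng.standard_normal((20, 8))
b_vis = np.zeros(20)
b_hid = np.zeros(8)

errors = [cd1_step(data, W, b_vis, b_hid) for _ in range(200)]

# The learned hidden probabilities are the "discovered features": in a DBN
# they would initialize the first layer of a feed-forward network that is
# then fine-tuned with backpropagation on the labeled call-routing data.
features = sigmoid(data @ W + b_hid)
```

Stacking further RBMs on `features` and fine-tuning with labels would complete the pipeline the abstract outlines; the sketch stops at the single unsupervised layer to keep the CD update itself visible.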


