INTERSPEECH 2012

Likability Classification- A not so Deep Neural Network Approach


Abstract

This paper presents results on the application of restricted Boltzmann machines (RBMs) and deep belief networks (DBNs) to the Likability Sub-Challenge of the Interspeech 2012 Speaker Trait Challenge [1]. RBMs are a particular form of log-linear Markov random field: generative models that try to capture the probability distribution of the underlying input data and can be trained in an unsupervised fashion. DBNs can be constructed by stacking RBMs and are known to yield an increasingly complex representation of the input data as the number of layers increases. Our results show that the Likability Sub-Challenge classification task does not benefit from the modeling power of DBNs, but that using an RBM as the first stage of a two-layer neural network with subsequent fine-tuning improves the baseline result from 59.0% to 64.0%, i.e., a relative improvement of 8.5% in the unweighted average evaluation measure.
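The winning configuration above (unsupervised RBM pretraining of the first layer, then supervised training on top) can be sketched with scikit-learn. This is an illustrative approximation, not the authors' setup: the data here is synthetic, the layer sizes are arbitrary, and scikit-learn's `BernoulliRBM` is not fine-tuned by backpropagation during the supervised stage, whereas the paper fine-tunes the whole two-layer network.

```python
# Sketch: RBM as an unsupervised first stage, logistic regression on top.
# Assumptions: synthetic stand-in features, arbitrary hidden-layer size.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import BernoulliRBM
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler

rng = np.random.RandomState(0)
X = rng.rand(200, 64)                # stand-in for acoustic feature vectors
y = rng.randint(0, 2, size=200)      # binary likability labels

model = Pipeline([
    ("scale", MinMaxScaler()),       # BernoulliRBM expects inputs in [0, 1]
    ("rbm", BernoulliRBM(n_components=32, learning_rate=0.05,
                         n_iter=20, random_state=0)),
    ("clf", LogisticRegression(max_iter=1000)),
])
model.fit(X, y)                      # RBM trains unsupervised, then the classifier
print(model.score(X, y))
```

In the paper's actual system, the RBM weights initialize the first layer of a neural network that is then fine-tuned end to end, which is what lifts the unweighted average recall over the baseline.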
