On training bi-directional neural network language model with noise contrastive estimation

Abstract

Although the uni-directional recurrent neural network language model (RNNLM) has been very successful, it is hard to train a bi-directional RNNLM properly due to the generative nature of language models. In this work, we propose to train the bi-directional RNNLM with noise contrastive estimation (NCE), since the properties of NCE training help the model achieve sentence-level normalization. Experiments are conducted on two hand-crafted tasks on the PTB data set: a rescoring task and a sanity test. Although, regretfully, the model trained by NCE did not outperform the baseline uni-directional NNLM, it is shown that the NCE-trained bi-directional NNLM behaves well in the sanity test and outperforms the one trained by conventional maximum likelihood training on the rescoring task.
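
The abstract does not spell out the training objective, but the key idea behind NCE is standard: next-word prediction is recast as binary classification between the observed word and k samples drawn from a noise distribution q, so the model only needs an unnormalized score s(w | context) rather than a softmax over the vocabulary. A known side effect is that the learned scores tend to be approximately self-normalized, which is what makes sentence-level normalization of a bi-directional model tractable. Below is a minimal PyTorch sketch of such an NCE loss; the function name, tensor shapes, and the choice of noise distribution are illustrative assumptions, not the paper's implementation.

```python
import math
import torch
import torch.nn.functional as F

def nce_loss(score_target, score_noise, log_q_target, log_q_noise, k):
    """Hypothetical NCE loss sketch (not the paper's code).

    score_target: (B,)   unnormalized model scores s(w | context) for the true words
    score_noise:  (B, k) scores for the k noise words drawn i.i.d. from q per target
    log_q_target: (B,)   log q(w) of the true words under the noise distribution
    log_q_noise:  (B, k) log q(w~) of the noise words
    """
    log_k = math.log(k)
    # Logit of the "came from the data, not the noise" classifier:
    # s(w) - log(k * q(w))
    logit_data = score_target - log_q_target - log_k
    logit_noise = score_noise - log_q_noise - log_k
    # True words should be classified as data (sigmoid -> 1),
    # noise samples as noise (sigmoid -> 0).
    per_example = F.logsigmoid(logit_data) + F.logsigmoid(-logit_noise).sum(dim=1)
    return -per_example.mean()
```

With, say, a unigram noise distribution and k = 10 samples per position, this loss replaces the softmax cross-entropy that would otherwise require normalizing over the whole vocabulary, and for a bi-directional model, over whole sentences.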