
LANGUAGE MODEL APPROXIMATION LEARNING DEVICE, ITS METHOD AND STORAGE MEDIUM RECORDING APPROXIMATION LEARNING PROGRAM


Abstract

PROBLEM TO BE SOLVED: To learn an approximate model that is close to the true model yet is expressed with fewer parameters, by outputting from learning data a low-order n-gram approximation model that approximates the true n-gram model, using the Kullback-Leibler (KL) divergence as the evaluation criterion.

SOLUTION: In this approximation learning device for a language model expressed by parameters, an n-gram Bayesian learning means 100 receives a group of words as learning data, computes a Bayesian estimate with respect to the KL divergence from the true n-gram model corresponding to the language model, and outputs an n-gram Bayesian estimated model. A low-order n-gram learning means 200 receives the n-gram Bayesian estimated model learned by means 100 and, again using the KL divergence as the evaluation criterion, computes a low-order n-gram approximation model expressed with fewer parameters than the n-gram Bayesian estimated model.
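The abstract describes a two-stage procedure: first estimate an n-gram model from the training words with a Bayesian estimator, then derive a lower-order model that minimizes the KL divergence to that estimate. The patent text does not spell out the computation, so the sketch below is only an illustration under assumed simplifications: a Dirichlet (Laplace-smoothed) bigram stands in for the "n-gram Bayesian estimated model" of means 100, and the low-order approximation of means 200 is taken as the unigram marginal, which minimizes the context-averaged KL divergence sum_h p(h) KL(p(.|h) || q(.)). All function names and the parameter alpha are hypothetical.

```python
from collections import Counter

def bayesian_bigram(words, vocab, alpha=1.0):
    # Dirichlet(alpha)-smoothed bigram estimate p(w | h); stands in for the
    # "n-gram Bayesian estimated model" (means 100 in the abstract).
    pair_counts = Counter(zip(words, words[1:]))
    hist_counts = Counter(words[:-1])
    V = len(vocab)
    def p(w, h):
        return (pair_counts[(h, w)] + alpha) / (hist_counts[h] + alpha * V)
    return p

def low_order_approximation(words, vocab, p_bigram):
    # Unigram model q minimizing the context-averaged KL divergence
    # sum_h p(h) KL(p(.|h) || q); the minimizer is the marginal
    # q(w) = sum_h p(h) p(w|h)  (a sketch of means 200 in the abstract).
    hist_counts = Counter(words[:-1])
    total = sum(hist_counts.values())
    q = {w: 0.0 for w in vocab}
    for h, count in hist_counts.items():
        p_h = count / total
        for w in vocab:
            q[w] += p_h * p_bigram(w, h)
    return q

if __name__ == "__main__":
    words = "a b a a b b a b a a".split()
    vocab = sorted(set(words))
    p = bayesian_bigram(words, vocab)
    q = low_order_approximation(words, vocab, p)
    # The unigram table has |V| parameters versus |V|^2 for the bigram table.
    print({w: round(prob, 3) for w, prob in q.items()})
```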

Bibliographic data

  • Publication number: JPH11296515A
  • Patent type:
  • Publication date: 1999-10-29
  • Original document format: PDF
  • Applicant/Patentee: NIPPON TELEGR & TELEPH CORP NTT
  • Application number: JP19980099488
  • Inventor: MAEDA YASUNARI
  • Filing date: 1998-04-10
  • Classification (IPC): G06F17/27; G06F15/18; G10L3/00
  • Country: JP
  • Date added to database: 2022-08-22 02:37:06

