LANGUAGE MODEL TRAINING METHOD, APPARATUS, ELECTRONIC DEVICE AND READABLE STORAGE MEDIUM


Abstract

The present disclosure provides a method for training a language model, together with an associated apparatus, electronic device and readable storage medium, relating to the technical fields of deep learning and natural language processing. A specific implementation is as follows: sampling one paragraph of text from each article in a plurality of articles to obtain multiple paragraphs of text; concatenating the multiple paragraphs of text to obtain a concatenated text; inputting the concatenated text into a language model, which outputs a predicted number of articles; and training the language model on the actual number of articles in the plurality of articles and the predicted number of articles, until a preset training-completion condition is satisfied. By training the language model with texts sampled from a plurality of articles, the present disclosure enables the language model to classify whole paragraphs of text content and enhances the language model's ability to recognize text content.
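The training signal described in the abstract can be illustrated with a minimal data-preparation sketch: sample one paragraph from each of several articles, concatenate them, and use the number of source articles as the label the model must predict. This is only an interpretation of the abstract; the separator token, the range of article counts per example, and all function names are assumptions, not details from the patent.

```python
import random

def build_training_example(articles, min_k=2, max_k=5, sep=" [SEP] "):
    """Sample one paragraph from each of k randomly chosen articles and
    concatenate them. Returns (text, label), where label is the number
    of source articles -- the quantity the language model is trained to
    predict. The separator token and the k range are illustrative
    assumptions, not specified in the patent."""
    k = random.randint(min_k, min(max_k, len(articles)))
    chosen = random.sample(articles, k)
    # One sampled paragraph per chosen article, then concatenated.
    paragraphs = [random.choice(article) for article in chosen]
    return sep.join(paragraphs), k

# Toy corpus: each article is represented as a list of paragraphs.
articles = [
    ["Deep learning advances rapidly.", "Models keep growing larger."],
    ["NLP covers many tasks.", "Pretraining helps downstream tasks."],
    ["Patents describe inventions.", "Claims define the legal scope."],
]

text, num_articles = build_training_example(articles)
# `num_articles` is the supervision signal; training would minimize a
# loss between the model's predicted article count and this value.
```

A training loop would then feed `text` through the language model's article-count head and compare the prediction against `num_articles` until the preset training-completion condition is met.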

Bibliographic Data

  • Publication number: US2021397791A1

  • Patent type:

  • Publication date: 2021-12-23

  • Original format: PDF

  • Application number: US202117203680

  • Inventor: DANXIANG ZHU

  • Application date: 2021-03-16

  • Classification: G06F40/289; G06K9/62; G06F40/205; G06N5/02

  • Country: US

  • Date added to database: 2022-08-24 22:59:21
