Pacific-Asia Conference on Knowledge Discovery and Data Mining (PAKDD)

A Finetuned Language Model for Recommending cQA-QAs for Enriching Textbooks



Abstract

Textbooks play a vital role in any educational system, yet despite their clarity and coverage, students tend to use community question answering (cQA) forums to acquire additional knowledge. Because of the high data volume, the quality of question-answer (QA) pairs on cQA forums varies greatly, so it takes considerable effort to go through all possible QA pairs for better insight. This paper proposes a "sentence-level text enrichment system" in which a fine-tuned BERT (Bidirectional Encoder Representations from Transformers) summarizer understands the given text, picks out the important sentences, and then rearranges them to give an overall summary of the text document. For each important sentence, we recommend the relevant QA pairs from cQA to make learning more effective. In this work, we fine-tuned the pre-trained BERT model to extract the QA sets that are most relevant for enriching important sentences of the textbook. We observe that fine-tuning the BERT model significantly improves performance on QA selection and find that it outperforms existing RNN-based models on such tasks. We also investigate the effectiveness of our fine-tuned BERT Large model on three cQA datasets for the QA selection task and observe a maximum improvement of 19.72% over previous models. Experiments were carried out on NCERT (Grade IX and X) textbooks from India and the "Pattern Recognition and Machine Learning" textbook. Extensive evaluation demonstrates that the proposed model offers more precise and relevant recommendations than state-of-the-art methods.
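The central step described above is QA selection: scoring candidate cQA question-answer pairs against a textbook sentence with a fine-tuned BERT model. The sketch below is a minimal illustration of that idea using the Hugging Face transformers library, not the authors' code; the checkpoint name, the example sentence and QA pairs, and the assumption of a binary relevance classification head are all placeholders.

```python
# Minimal sketch: rank cQA question-answer pairs by relevance to one
# textbook sentence with a BERT cross-encoder. Assumes a sequence
# classifier fine-tuned so that label index 1 means "relevant".
import torch
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2  # placeholder; use a fine-tuned checkpoint
)
model.eval()

sentence = "The mitochondrion is the powerhouse of the cell."  # textbook sentence
qa_pairs = [  # hypothetical cQA candidates (question + answer concatenated)
    "Why are mitochondria called the powerhouse of the cell? They produce ATP.",
    "What is the capital of France? Paris.",
]

# Encode (sentence, QA pair) jointly so BERT attends across both segments.
inputs = tokenizer(
    [sentence] * len(qa_pairs), qa_pairs,
    padding=True, truncation=True, max_length=256, return_tensors="pt",
)
with torch.no_grad():
    logits = model(**inputs).logits

# Probability of the "relevant" class serves as the ranking score.
scores = torch.softmax(logits, dim=-1)[:, 1]
for qa, score in sorted(zip(qa_pairs, scores.tolist()), key=lambda x: -x[1]):
    print(f"{score:.3f}  {qa}")
```

In this cross-encoder setup the sentence and QA pair share one forward pass, which lets self-attention compare them token by token; that joint encoding is what an RNN-based pairwise matcher lacks, consistent with the improvements reported in the abstract.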