Workshop on Innovative Use of NLP for Building Educational Applications

Should You Fine-Tune BERT for Automated Essay Scoring?



Abstract

Most natural language processing research now recommends large Transformer-based models with fine-tuning for supervised classification tasks; older strategies like bag-of-words features and linear models have fallen out of favor. Here we investigate whether, in automated essay scoring (AES) research, deep neural models are an appropriate technological choice. We find that fine-tuning BERT produces similar performance to classical models at significant additional cost. We argue that while state-of-the-art strategies do match existing best results, they come with opportunity costs in computational resources. We conclude with a review of promising areas for research on student essays where the unique characteristics of Transformers may provide benefits over classical methods to justify the costs.
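To make the comparison concrete, the following is a minimal sketch of the kind of classical baseline the abstract refers to: bag-of-words features fed to a linear regression model that predicts an essay score. This is not the paper's implementation; the toy essays, scores, and training settings are invented for illustration, and it uses only the Python standard library.

```python
# A bag-of-words + linear-model baseline for essay scoring (illustrative
# sketch only; data and hyperparameters are made up, not from the paper).
from collections import Counter

def bow_vector(text, vocab):
    """Count occurrences of each vocabulary word in the text."""
    counts = Counter(text.lower().split())
    return [counts[w] for w in vocab]

# Toy training data: (essay, human score).
essays = [
    "the essay argues clearly with strong evidence",
    "essay is short",
    "a clear and strong argument with good evidence",
    "short and unclear",
]
scores = [5.0, 2.0, 5.0, 1.0]

# Build the vocabulary and the feature matrix.
vocab = sorted({w for e in essays for w in e.lower().split()})
X = [bow_vector(e, vocab) for e in essays]

# Fit a linear model with plain stochastic gradient descent
# on squared error (a stand-in for ridge/linear regression).
w = [0.0] * len(vocab)
b = 0.0
lr = 0.01
for _ in range(2000):
    for xi, yi in zip(X, scores):
        pred = b + sum(wj * xj for wj, xj in zip(w, xi))
        err = pred - yi
        b -= lr * err
        w = [wj - lr * err * xj for wj, xj in zip(w, xi)]

def predict(text):
    """Score a new essay with the fitted linear model."""
    x = bow_vector(text, vocab)
    return b + sum(wj * xj for wj, xj in zip(w, x))
```

A model like this trains in seconds on a CPU, which is exactly the computational-cost contrast the abstract draws against fine-tuning a Transformer with hundreds of millions of parameters on a GPU.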
