
A Report on the Automatic Evaluation of Scientific Writing Shared Task


Abstract

The Automated Evaluation of Scientific Writing, or AESW, is the task of identifying sentences in need of correction to ensure their appropriateness in scientific prose. The data set comes from a professional editing company, VTeX, with two aligned versions of the same text - before and after editing - and covers a variety of textual infelicities that proofreaders have edited. While previous shared tasks focused solely on grammatical errors (Dale and Kilgarriff, 2011; Dale et al., 2012; Ng et al., 2013; Ng et al., 2014), this time the edits cover other types of linguistic misfits as well, including those that almost certainly could be interpreted as style issues and similar "matters of opinion". The latter arise because of different language editing traditions, differing experience, and the absence of uniform agreement on what "good" scientific language should look like. In initiating this task, we expected the participating teams to help identify the characteristics of "good" scientific language and to help create a consensus on which language improvements are acceptable (or necessary). Six participating teams took on the challenge.

