Language Assessment Quarterly

Designing and Scaling Level-Specific Writing Tasks in Alignment With the CEFR: A Test-Centered Approach



Abstract

The Common European Framework of Reference (CEFR; Council of Europe, 2001) provides a competency model that is increasingly used as a point of reference for comparing language examinations. Nevertheless, aligning examinations with the CEFR proficiency levels remains a challenge. In this article, we propose a new, level-centered approach to designing and aligning writing tasks in line with the CEFR levels. Much work has been done on assessing writing via tasks spanning several levels of proficiency, but little research exists on a level-specific approach, in which one task targets one specific proficiency level. In our study, situated in a large-scale assessment project in which such a level-specific approach was employed, we investigate the influence of the design factors of tasks, assessment criteria, raters, and student proficiency on the variability of ratings, using descriptive statistics, generalizability theory, and multifaceted Rasch modeling. Results show that the level-specific approach yields plausible inferences about task difficulty, rater harshness, rating-criteria difficulty, and student distribution. Moreover, the Rasch analyses show a high level of consistency between a priori task classifications in terms of CEFR levels and empirical task difficulty estimates.
This allows for a test-centered approach to standard setting by suggesting empirically grounded cut-scores in line with the CEFR proficiency levels targeted by the tasks.

Permalink: http://dx.doi.org/10.1080/15434303.2010.535575
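For readers unfamiliar with multifaceted (many-facet) Rasch modeling, the general form of the model for a design with the four facets named in the abstract — students, tasks, raters, and rating criteria — can be sketched as follows. This is the standard many-facet formulation (after Linacre), not the article's exact specification:

```latex
\ln\!\left(\frac{P_{nijmk}}{P_{nijm(k-1)}}\right)
  = \theta_n - \delta_i - \alpha_j - \gamma_m - \tau_k
```

Here \(P_{nijmk}\) is the probability that student \(n\), on task \(i\), rated by rater \(j\) on criterion \(m\), receives rating category \(k\) rather than \(k-1\); \(\theta_n\) is student proficiency, \(\delta_i\) task difficulty, \(\alpha_j\) rater harshness, \(\gamma_m\) criterion difficulty, and \(\tau_k\) the threshold between adjacent categories. Because all facets are calibrated on a common logit scale, a priori CEFR classifications of tasks can be compared directly with the empirical task difficulty estimates \(\delta_i\), which is what makes the cut-score suggestions in the study possible.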
