Springer Open Choice

A validation of Amazon Mechanical Turk for the collection of acceptability judgments in linguistic theory



Abstract

Amazon’s Mechanical Turk (AMT) is a Web application that provides instant access to thousands of potential participants for survey-based psychology experiments, such as the acceptability judgment task used extensively in syntactic theory. Because AMT is a Web-based system, syntacticians may worry that the move out of the experimenter-controlled environment of the laboratory and onto the user-controlled environment of AMT could adversely affect the quality of the judgment data collected. This article reports a quantitative comparison of two identical acceptability judgment experiments, each with 176 participants (352 total): one conducted in the laboratory, and one conducted on AMT. Crucial indicators of data quality—such as participant rejection rates, statistical power, and the shape of the distributions of the judgments for each sentence type—are compared between the two samples. The results suggest that aside from slightly higher participant rejection rates, AMT data are almost indistinguishable from laboratory data.