
Comparing the Quality of Crowdsourced Data Contributed by Expert and Non-Experts



Abstract

There is currently a lack of in-situ environmental data for the calibration and validation of remotely sensed products and for the development and verification of models. Crowdsourcing is increasingly seen as a potentially powerful way of increasing the supply of in-situ data, but a number of concerns remain over the subsequent use of such data, in particular over its quality. This paper examined crowdsourced data from the Geo-Wiki crowdsourcing tool for land cover validation to determine whether there were significant differences in quality between the answers provided by experts and non-experts in the domain of remote sensing, and therefore the extent to which crowdsourced data describing human impact and land cover can be used in further scientific research. The results showed little difference between experts and non-experts in identifying human impact, although results varied by land cover type; experts were, however, better than non-experts at identifying the land cover type. This suggests the need to create training materials with more examples in those areas where identification proved difficult, and to offer contributors some means of reflecting on the information they provide, perhaps by feeding back evaluations of their contributed data or by making additional training materials available. Accuracies were also higher when volunteers were more consistent in their responses at a given location and when they indicated higher confidence, which suggests that these additional pieces of information could be used to develop robust measures of quality in the future.
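The comparison described in the abstract — accuracy by contributor group, and accuracy conditioned on self-reported confidence — can be sketched in a few lines. This is a minimal illustration only: the field names, labels, and records below are hypothetical and are not drawn from the Geo-Wiki dataset itself.

```python
from collections import defaultdict

# Hypothetical records: (contributor_group, reported_label, reference_label, confidence)
records = [
    ("expert",     "forest",    "forest",    "sure"),
    ("expert",     "cropland",  "cropland",  "sure"),
    ("expert",     "grassland", "cropland",  "unsure"),
    ("non-expert", "forest",    "forest",    "sure"),
    ("non-expert", "cropland",  "grassland", "unsure"),
    ("non-expert", "forest",    "forest",    "unsure"),
]

def accuracy_by(key_fn, rows):
    """Fraction of reported labels matching the reference, per group."""
    hits, totals = defaultdict(int), defaultdict(int)
    for row in rows:
        key = key_fn(row)
        totals[key] += 1
        hits[key] += int(row[1] == row[2])
    return {key: hits[key] / totals[key] for key in totals}

by_group = accuracy_by(lambda r: r[0], records)       # expert vs non-expert
by_confidence = accuracy_by(lambda r: r[3], records)  # sure vs unsure

print(by_group)
print(by_confidence)
```

With data like the above, splitting accuracy by the confidence field mirrors the paper's observation that contributions flagged as more confident tend to be more accurate, and the same grouping function works for any auxiliary field (e.g. land cover class or location) that a quality measure might condition on.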
