Conference: IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM)

Modeling Human Annotation Errors to Design Bias-Aware Systems for Social Stream Processing



Abstract

High-quality human annotations are necessary to create effective machine learning systems for social media. Low-quality human annotations indirectly contribute to the creation of inaccurate or biased learning systems. We show that human annotation quality depends on the ordering of instances shown to annotators (referred to as the 'annotation schedule') and can be improved by local changes in the instance ordering provided to the annotators, yielding a more accurate annotation of the data stream for efficient real-time social media analytics. We propose an error-mitigating active learning algorithm that, when deciding an annotation schedule, is robust to some classes of human errors. We validate the human error model and evaluate the proposed algorithm against strong baselines through experiments on classification tasks over crisis-related social media posts. These experiments indicate that accounting for the order in which data instances are presented to human annotators both increases machine learning accuracy and raises awareness of potential biases in human learning that may affect the automated classifier.
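The core idea of the abstract can be illustrated with a toy simulation. The sketch below assumes a hypothetical error model (not taken from the paper) in which an annotator's error rate rises whenever consecutive instances come from different classes, and compares a random annotation schedule against one that makes local reorderings to group similar instances; all function names and parameter values are illustrative assumptions.

```python
import random

def simulate_annotator(schedule, true_labels, base_error=0.05,
                       switch_error=0.4, seed=0):
    """Simulate a human annotator over a given annotation schedule.

    Hypothetical error model: the annotator errs with probability
    `base_error` when the current instance's class matches the previous
    one, and with the higher `switch_error` on a class switch.
    Returns the number of labeling errors made.
    """
    rng = random.Random(seed)
    errors = 0
    prev_class = None  # first instance counts as a switch
    for i in schedule:
        p = base_error if true_labels[i] == prev_class else switch_error
        if rng.random() < p:
            errors += 1
        prev_class = true_labels[i]
    return errors

def main():
    # Synthetic binary stream: a 1-d feature correlated with the class.
    rng = random.Random(42)
    features = [rng.random() for _ in range(200)]
    true_labels = [int(x > 0.5) for x in features]

    # Baseline: instances arrive in arbitrary (shuffled) order.
    random_schedule = list(range(len(features)))
    rng.shuffle(random_schedule)

    # Bias-aware schedule: reorder by the observed feature so that
    # similar instances are adjacent, minimizing class switches.
    grouped_schedule = sorted(range(len(features)), key=lambda i: features[i])

    err_random = simulate_annotator(random_schedule, true_labels)
    err_grouped = simulate_annotator(grouped_schedule, true_labels)
    print(f"errors, random schedule:  {err_random}")
    print(f"errors, grouped schedule: {err_grouped}")
    return err_random, err_grouped

if __name__ == "__main__":
    main()
```

Under this assumed error model, the grouped schedule produces far fewer simulated annotation errors because it incurs only one class switch; the paper's actual algorithm integrates such scheduling decisions into an active learning loop rather than sorting a static batch.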

