
Reasoning about Political Bias in Content Moderation

Abstract

Content moderation, the AI-human hybrid process of removing (toxic) content from social media to promote community health, has attracted increasing attention from lawmakers due to allegations of political bias. Hitherto, this allegation has been made based on anecdotes rather than logical reasoning and empirical evidence, which motivates us to audit its validity. In this paper, we first introduce two formal criteria to measure bias (i.e., independence and separation) and their contextual meanings in content moderation, and then use YouTube as a lens to investigate if the political leaning of a video plays a role in the moderation decision for its associated comments. Our results show that when justifiable target variables (e.g., hate speech and extremeness) are controlled with propensity scoring, the likelihood of comment moderation is equal across left- and right-leaning videos.
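For readers unfamiliar with the two criteria named above, they are standard group-fairness conditions. A minimal sketch of how they could be written in this setting follows; the notation (D for the comment-moderation decision, A for the political leaning of the host video, T for the justifiable target variables such as hate speech and extremeness) is illustrative rather than taken from the record:

  % Illustrative notation, not from the record: D = comment-moderation decision,
  % A = political leaning of the host video, T = justifiable target variables.
  \text{Independence: } D \perp A
    \iff P(D{=}1 \mid A{=}\text{left}) = P(D{=}1 \mid A{=}\text{right})
  \text{Separation: } D \perp A \mid T
    \iff P(D{=}1 \mid T{=}t, A{=}\text{left}) = P(D{=}1 \mid T{=}t, A{=}\text{right}) \quad \forall t
  % Propensity scoring, as mentioned in the abstract, can be read as estimating
  % e(t) = P(A{=}\text{right} \mid T{=}t) and comparing moderation rates across
  % comments matched or weighted on e(t), so that T is held fixed.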
