Big Data and Cognitive Computing
Deep Automation Bias: How to Tackle a Wicked Problem of AI?

Abstract

The increasing use of AI in different societal contexts has intensified the debate on risks, ethical problems and bias. Accordingly, promising research activities focus on debiasing to strengthen fairness, accountability and transparency in machine learning. There is, though, a tendency to fix societal and ethical issues with technical solutions that may cause additional, wicked problems. Alternative analytical approaches are thus needed to avoid this and to comprehend how societal and ethical issues occur in AI systems. Regardless of the specific form of bias, risks ultimately result from rule conflicts between AI system behavior, driven by feature complexity, and user practices that leave limited options for scrutiny. Hence, although different forms of bias can occur, automation is their common ground. The paper highlights the role of automation and explains why deep automation bias (DAB) is a metarisk of AI. Based on previous work, it elaborates the main influencing factors and develops a heuristic model for assessing DAB-related risks in AI systems. This model aims at raising problem awareness and supporting training on the sociotechnical risks resulting from AI-based automation, and it contributes to improving the general explicability of AI systems beyond technical issues.
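To make the idea of such a heuristic concrete, the sketch below is a purely hypothetical illustration in Python, not the model from the paper: it assumes a handful of made-up influencing factors (feature complexity, degree of automation, users' scrutiny options, context criticality) and combines them into a single DAB risk score.

```python
from dataclasses import dataclass


@dataclass
class DABFactors:
    """Hypothetical influencing factors, each rated from 0.0 (low) to 1.0 (high)."""
    feature_complexity: float   # opacity/complexity of the AI system's behavior
    automation_degree: float    # how much of the decision is delegated to the system
    scrutiny_options: float     # users' practical options to verify or contest outputs
    context_criticality: float  # severity of harm if system behavior and user practice conflict


def dab_risk_score(f: DABFactors) -> float:
    """Toy heuristic: risk grows with complexity, automation and criticality,
    and shrinks with the options users have to scrutinize system behavior."""
    exposure = (f.feature_complexity + f.automation_degree + f.context_criticality) / 3
    mitigation = f.scrutiny_options
    return round(exposure * (1 - mitigation), 2)


if __name__ == "__main__":
    # Example: a highly automated, hard-to-inspect system used in a critical context.
    triage_tool = DABFactors(feature_complexity=0.8, automation_degree=0.9,
                             scrutiny_options=0.2, context_criticality=0.9)
    print(dab_risk_score(triage_tool))  # 0.69 -> comparatively high DAB-related risk
```

The factor names, weighting and scale here are assumptions chosen for illustration; the point is only that a heuristic assessment combines system-side drivers of automation bias with the users' remaining options for scrutiny.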
