IEEE International Symposium on Technology and Society

AI Development for the Public Interest: From Abstraction Traps to Sociotechnical Risks



Abstract

Despite interest in communicating ethical problems and social contexts within the undergraduate curriculum to advance Public Interest Technology (PIT) goals, interventions at the graduate level remain largely unexplored. This may be due to the conflicting ways in which distinct Artificial Intelligence (AI) research tracks conceive of their interface with social contexts. In this paper we track the historical emergence of sociotechnical inquiry in three distinct subfields of AI research: AI Safety, Fair Machine Learning (Fair ML), and Human-in-the-Loop (HIL) Autonomy. We show that for each subfield, perceptions of PIT stem from the particular dangers faced by past integration of technical systems within a normative social order. We further interrogate how these histories dictate each subfield's response to conceptual traps, as defined in the Science and Technology Studies literature. Finally, through a comparative analysis of these currently siloed fields, we present a roadmap for a unified approach to sociotechnical graduate pedagogy in AI.

