Canadian Conference on Artificial Intelligence

Sensitivity to Risk Profiles of Users When Developing AI Systems



Abstract

The AI community today has renewed concern about the social implications of the models it designs, imagining the impact of deployed systems. One thrust has been to reflect on issues of fairness and explainability before the design process begins. There is also increasing awareness of the need to engender trust from users, examining the origins of mistrust as well as the value of multiagent trust modelling solutions. In this paper, we argue that social AI efforts to date often imagine a homogeneous user base, and that those models which do support differing solutions for users with different profiles have not yet examined one important consideration upon which trusted AI may depend: the risk profile of the user. We suggest how user risk attitudes can be integrated into approaches that reason about such dilemmas as sacrificing optimality for the sake of explainability. In the end, we reveal that it is challenging to satisfy the myriad needs of users in their desire to be more comfortable accepting AI solutions, and conclude that tradeoffs need to be examined and balanced. We advocate reasoning about these tradeoffs concerning user models and risk profiles as we design the decision-making algorithms of our systems.
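The tradeoff the abstract describes, between an optimal but opaque solution and an explainable but suboptimal one, weighted by the user's risk profile, can be pictured with a small scoring rule. This is an illustrative sketch only, not the paper's method: the candidate solutions, their scores, and the linear weighting by a `risk_aversion` parameter are all assumptions made for the example.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    optimality: float      # task performance, scaled to [0, 1]
    explainability: float  # how well the choice can be justified, in [0, 1]

def score(c: Candidate, risk_aversion: float) -> float:
    """Linearly trade optimality against explainability.

    risk_aversion in [0, 1]: 0 = risk-neutral user (pure optimality),
    1 = highly risk-averse user (explainability dominates).
    """
    return (1 - risk_aversion) * c.optimality + risk_aversion * c.explainability

def choose(candidates: list[Candidate], risk_aversion: float) -> Candidate:
    # Pick the candidate with the highest risk-weighted score.
    return max(candidates, key=lambda c: score(c, risk_aversion))

candidates = [
    Candidate("deep-model plan", optimality=0.95, explainability=0.30),
    Candidate("rule-based plan", optimality=0.75, explainability=0.90),
]

# A risk-tolerant user is served the higher-performing opaque plan;
# a risk-averse user is served the explainable one.
print(choose(candidates, risk_aversion=0.1).name)  # deep-model plan
print(choose(candidates, risk_aversion=0.8).name)  # rule-based plan
```

Under this toy model, the same system gives different users different solutions purely as a function of their risk profile, which is the kind of user-sensitive decision making the paper advocates reasoning about explicitly.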
