Journal: Technological forecasting and social change

How should the results of artificial intelligence be explained to users?- Research on consumer preferences in user-centered explainable artificial intelligence

Abstract

Artificial intelligence (AI) has become part of our everyday lives, and its presence and influence are expected to grow exponentially. Despite this expanding impact, the opaque algorithms and processes that drive AI's decisions and outputs can erode trust, and thus impede the adoption of future AI services. Explainable AI (XAI) in recommender systems has surfaced as a solution that can help users understand how and why an AI recommended a specific product or service. However, there is no standardized explanation method that satisfies users' preferences and needs. Therefore, the main objective of this study is to explore a unified explanation method centered on the human perspective. This study examines preferences for AI interfaces by investigating the components of user-centered explainability, including scope (global and local) and format (text and visualization). A mixed logit model is used to analyze data collected through a conjoint survey. Results show that local explanation and visualization are preferred, and that users dislike lengthy textual interfaces. The findings also include the monetary value extracted for each attribute.
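The monetary valuation step mentioned above typically uses the standard willingness-to-pay (WTP) ratio from discrete choice modeling: the value of an attribute is its utility coefficient divided by the negative of the price coefficient. Below is a minimal sketch of that calculation; the coefficient values and attribute names are hypothetical illustrations, not the study's actual estimates.

```python
# Sketch of extracting monetary value (willingness to pay) from
# (mixed) logit part-worth coefficients estimated on conjoint data.
# All numbers below are made up for illustration.

coefs = {
    "local_explanation": 0.85,   # hypothetical utility coefficients
    "visual_format": 0.62,
    "long_text": -0.47,
    "price": -0.0031,            # disutility per currency unit
}

def willingness_to_pay(attr_coef: float, price_coef: float) -> float:
    """Monetary value of an attribute: the price change that exactly
    offsets the attribute's utility change (WTP = -beta_attr / beta_price)."""
    return -attr_coef / price_coef

for name, beta in coefs.items():
    if name == "price":
        continue
    print(f"{name}: WTP = {willingness_to_pay(beta, coefs['price']):.1f}")
```

A positive WTP means users would pay extra for the attribute; a negative WTP (as for a lengthy textual interface here) means they would need to be compensated to accept it.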
