IEEE Spectrum

The opacity of artificial intelligence makes it hard to tell when decision-making is biased
Abstract

If you're on Facebook, click on "Why am I seeing this ad?" The answer will look something like "[Advertiser] wants to reach people who may be similar to their customers" or "[Advertiser] is trying to reach people ages 18 and older" or "[Advertiser] is trying to reach people whose primary location is the United States." Oh, you'll also see "There could also be more factors not listed here." Such explanations started appearing on Facebook in response to complaints about the platform's ad-placing artificial-intelligence (AI) system. For many people, it was their first encounter with the growing trend of explainable AI, or XAI.