
RESPONSIBLE AI



Abstract

Since the Cambridge Analytica case broke, the discussion on how to use Artificial Intelligence (AI) ethically has intensified. Whereas the discussion previously centred on fears of automation and possible job losses, it now concerns the uncontrolled use of personal data and the consequences of bias. We know our online shopping and browsing histories are used to recommend new products to us. The fact, however, that data about Facebook Likes, paired with a personality test, can lead to predictive models that can be used to micro-target voters has come as a shock to most. But there are ethical concerns in other areas too. Machine learning uses historical data. The resulting models, which are used to make predictions on new data, can only replicate the statistical distributions contained in that historical data. Bias in the original data will make its way into the model. Research has shown that today's facial recognition systems can be racially biased, suggesting that these systems have been trained with data that is biased with respect to race and gender.
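The abstract's point that a model can only replicate the statistical distributions in its training data can be illustrated with a minimal sketch. The hypothetical "hiring records" below are invented for illustration: a simple frequency-based predictor fitted to biased historical outcomes reproduces the historical disparity between groups exactly.

```python
from collections import defaultdict

# Hypothetical historical hiring records, as (group, hired) pairs.
# Group A was historically hired 80% of the time, group B only 30%.
history = ([("A", 1)] * 80 + [("A", 0)] * 20 +
           [("B", 1)] * 30 + [("B", 0)] * 70)

# "Training": estimate P(hired | group) from historical frequencies.
counts = defaultdict(lambda: [0, 0])  # group -> [hired count, total count]
for group, hired in history:
    counts[group][0] += hired
    counts[group][1] += 1

model = {group: hired / total for group, (hired, total) in counts.items()}

# The fitted model mirrors the disparity in the data it was given.
print(model["A"])  # 0.8
print(model["B"])  # 0.3
```

Nothing in the fitting step can distinguish a genuine difference between groups from a historical bias; a more sophisticated learner faces the same limitation whenever group membership (or a proxy for it) correlates with the outcome in the training data.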

