
Algorithmic Decision-Making Based on Machine Learning from Big Data: Can Transparency Restore Accountability?


Abstract

Decision-making assisted by algorithms developed through machine learning increasingly determines our lives. Unfortunately, full opacity about the process is the norm. Would transparency, as is often maintained, contribute to restoring accountability for such systems? Several objections to full transparency are examined: the loss of privacy when datasets become public, the perverse effects of disclosing the algorithms themselves ("gaming the system" in particular), the potential loss of companies' competitive edge, and the limited gains in answerability to be expected, since sophisticated algorithms are usually inherently opaque. It is concluded that, at least at present, full transparency for oversight bodies alone is the only feasible option; extending it to the public at large is normally not advisable. Moreover, it is argued that algorithmic decisions should preferably become more understandable; to that effect, the machine-learning models employed should either be interpreted ex post or be interpretable by design ex ante.
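The abstract's closing distinction can be made concrete. The following is a minimal sketch, not from the paper, contrasting the two routes it names: a random forest stands in for an inherently opaque model that is interpreted ex post (here via scikit-learn's permutation importance), while a shallow decision tree stands in for a model that is interpretable by design ex ante. The dataset, model choices, and hyperparameters are illustrative assumptions only.

```python
# Sketch: ex-post interpretation vs. ex-ante interpretability (assumed setup).
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0
)

# Ex post: train an opaque ensemble, then approximate its reasoning afterwards
# with a model-agnostic technique (permutation feature importance).
opaque = RandomForestClassifier(random_state=0).fit(X_train, y_train)
result = permutation_importance(
    opaque, X_test, y_test, n_repeats=10, random_state=0
)
for i in result.importances_mean.argsort()[::-1][:3]:
    print(f"{data.feature_names[i]}: {result.importances_mean[i]:.3f}")

# Ex ante: fit a model whose decision logic is readable as-is
# (a shallow decision tree), trading some accuracy for transparency.
glass_box = DecisionTreeClassifier(max_depth=3, random_state=0)
glass_box.fit(X_train, y_train)
print(export_text(glass_box, feature_names=list(data.feature_names)))
```

The trade-off the paper discusses shows up directly here: the forest's behaviour must be reconstructed indirectly after the fact, whereas the tree's printed rules are themselves the decision procedure.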