Journal: Philosophy & Technology

Algorithmic Decision-Making Based on Machine Learning from Big Data: Can Transparency Restore Accountability?



Abstract

Decision-making assisted by algorithms developed by machine learning is increasingly determining our lives. Unfortunately, full opacity about the process is the norm. Would transparency contribute to restoring accountability for such systems as is often maintained? Several objections to full transparency are examined: the loss of privacy when datasets become public, the perverse effects of disclosure of the very algorithms themselves ("gaming the system" in particular), the potential loss of companies' competitive edge, and the limited gains in answerability to be expected since sophisticated algorithms usually are inherently opaque. It is concluded that, at least presently, full transparency for oversight bodies alone is the only feasible option; extending it to the public at large is normally not advisable. Moreover, it is argued that algorithmic decisions preferably should become more understandable; to that effect, the models of machine learning to be employed should either be interpreted ex post or be interpretable by design ex ante.
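The abstract's closing distinction, between interpreting a model ex post and making it interpretable by design ex ante, can be made concrete with a minimal sketch. One common ex post technique is permutation importance: treat the trained model as a black box and measure how much predictive accuracy drops when each input feature is shuffled. The model and data below are invented for illustration (a stand-in black box rather than any system discussed in the article):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: feature 0 drives the label, feature 1 is pure noise.
X = rng.normal(size=(500, 2))
y = (X[:, 0] > 0).astype(int)

def black_box_predict(X):
    # Stand-in for an opaque learned model whose internals we cannot read.
    return (X[:, 0] > 0).astype(int)

def permutation_importance(predict, X, y, n_repeats=10, seed=1):
    """Ex post interpretation: accuracy drop when each feature is permuted."""
    rng = np.random.default_rng(seed)
    baseline = np.mean(predict(X) == y)
    importances = []
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            Xp = X.copy()
            Xp[:, j] = rng.permutation(Xp[:, j])  # break feature j's link to y
            drops.append(baseline - np.mean(predict(Xp) == y))
        importances.append(float(np.mean(drops)))
    return importances

imp = permutation_importance(black_box_predict, X, y)
# Permuting feature 0 should cost substantial accuracy; feature 1, almost none.
```

An interpretable-by-design (ex ante) alternative would instead restrict the model class itself, e.g. to a sparse linear model or a shallow decision tree, so that no post hoc probing is needed.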


