Computer law & security report

An evidence-based methodology for human rights impact assessment (HRIA) in the development of AI data-intensive systems

Abstract

Different approaches have been adopted in addressing the challenges of Artificial Intelligence (AI), some centred on personal data and others on ethics, respectively narrowing and broadening the scope of AI regulation. This contribution aims to demonstrate that a third way is possible, starting from the acknowledgement of the role that human rights can play in regulating the impact of data-intensive systems. The focus on human rights is neither a paradigm shift nor a mere theoretical exercise. Through the analysis of more than 700 decisions and documents of the data protection authorities of six countries, we show that human rights already underpin the decisions in the field of data use. Based on empirical analysis of this evidence, this work presents a methodology and a model for a Human Rights Impact Assessment (HRIA). The methodology and related assessment model are focused on AI applications, whose nature and scale require a proper contextualisation of HRIA methodology. Moreover, the proposed models provide a more measurable approach to risk assessment, consistent with the regulatory proposals centred on risk thresholds. The proposed methodology is tested in concrete case studies to prove its feasibility and effectiveness. The overall goal is to respond to the growing interest in HRIA, moving from a mere theoretical debate to a concrete and context-specific implementation in the field of data-intensive applications based on AI. (c) 2021 The Authors. Published by Elsevier Ltd. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).
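The "more measurable approach to risk assessment" consistent with threshold-based regulatory proposals can be pictured with a simple likelihood-severity scoring scheme. The sketch below is not the authors' HRIA model: the scales, the score combination, and the thresholds are invented purely to illustrate how a threshold-based classification of human rights risks might be computed in practice.

```python
# Hypothetical illustration only: NOT the HRIA model proposed in the paper,
# just a generic sketch of threshold-based risk scoring of the kind the
# abstract alludes to. Scales and thresholds are invented for the example.
from dataclasses import dataclass

@dataclass
class RightAtRisk:
    right: str        # e.g. "privacy", "non-discrimination"
    likelihood: int   # ordinal scale, 1 (remote) .. 4 (very likely)
    severity: int     # ordinal scale, 1 (negligible) .. 4 (severe)

def risk_score(r: RightAtRisk) -> int:
    """Combine likelihood and severity into a single ordinal score (1..16)."""
    return r.likelihood * r.severity

def risk_level(score: int) -> str:
    """Map a score onto illustrative regulatory-style thresholds."""
    if score >= 12:
        return "high - mitigation required before deployment"
    if score >= 6:
        return "medium - mitigation and monitoring recommended"
    return "low - document and keep under review"

if __name__ == "__main__":
    assessment = [
        RightAtRisk("privacy", likelihood=3, severity=4),
        RightAtRisk("freedom of expression", likelihood=2, severity=2),
    ]
    for r in assessment:
        s = risk_score(r)
        print(f"{r.right}: score {s} -> {risk_level(s)}")
```

An ordinal likelihood-by-severity matrix is only one possible design; as the abstract notes, the nature and scale of each AI application require a proper contextualisation of the assessment model rather than a one-size-fits-all metric.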
