International Joint Conference on Artificial Intelligence

From Automation to Autonomous Systems: A Legal Phenomenology with Problems of Accountability


Abstract

Over the past decades, a considerable amount of work has been devoted to the notion of autonomy and the intelligence of robots and of AI systems: depending on the application, several standards on the "levels of automation" have been proposed. Although current AI systems may have the intelligence of a fridge, or of a toaster, some such autonomous systems have already challenged basic pillars of society and the law, e.g. whether lethal force should ever be permitted to be "fully automated." The aim of this paper is to show that the normative challenges of AI entail different types of accountability that go hand-in-hand with choices of technological dependence, delegation of cognitive tasks, and trust. The stronger the social cohesion, the higher the risks that can be socially accepted through the normative assessment of the not fully predictable consequences of tasks and decisions entrusted to AI systems and artificial agents.
