Robo-Philosophy Conference

Ethical Issues Concerning Lethal Autonomous Robots in Warfare



Abstract

The massive introduction of advanced military technologies makes it important to address ethical issues related to the potential use of lethal autonomous robot systems (LARS) in warfare. Hence, this article sets out to:

1. Explore human-robot interaction in a military context. Philosophically speaking, artificial agents without inner states can be seen as an obstacle to the formation of relations between humans and robots; from a psychological perspective, however, soldiers bond with technologies and may in some situations even have good reasons for preferring robots over humans. Nevertheless, one may question whether this observation lends support to the idea of introducing LARS.

2. Establish a Moral Military Turing Test (MMTT) as a springboard for a discussion of programming approaches to machine morality. Here, a hybrid model, combining a top-down, theoretically driven implementation of a moral framework with a bottom-up adaptive architecture, represents a promising approach, although one may doubt whether phronesis is computationally tractable at all.

3. Discuss whether moral standing can be assigned to machines. In complex, technologically mediated contexts, relations of responsibility are hard to capture with reference to Kantian autonomy as a prerequisite for moral agency. Moving beyond the warfare context, in some settings it seems worthwhile to allow moral responsibility to be distributed between human and artificial agents. This solution, however, has little to offer in the warfare domain, since there one must be able to hold individuals responsible in order to acknowledge the Würde (dignity) of the victims of a war.
