Can Lethal Autonomous Robots Learn Ethics?

Abstract

When lethal autonomous robots (LARs) are used in warfare, the question of how to ensure they behave ethically, given military necessity, the rules of engagement, and the laws of war, becomes critically important. This paper describes a novel approach in which LARs acquire their own knowledge of ethics by generating data for a wide variety of simulated battlefield situations. The LAR applies unsupervised learning techniques to find naturally occurring clusters that correspond approximately to ethically justified and ethically unjustified lethal engagement. These cluster labels can then be used to learn moral rules for determining whether its autonomous actions are ethical in specific battlefield contexts. One major advantage of this approach is that it reduces the probability of the LAR picking up human biases and prejudices. Another advantage is that an LAR learning its own ethical code is more consistent with the idea of an intelligent autonomous agent.
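The two-stage pipeline the abstract describes (unsupervised clustering of simulated engagements, then learning rules from the cluster labels) can be sketched as follows. This is a minimal illustration, not the paper's actual method: the feature names, the synthetic data, and the choice of k-means plus a decision tree are all assumptions made for the example.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Hypothetical features for simulated engagements (illustrative only):
# [threat_level, civilian_proximity, target_certainty], each in [0, 1].
justified = rng.normal([0.9, 0.1, 0.9], 0.05, size=(200, 3))
unjustified = rng.normal([0.2, 0.8, 0.3], 0.05, size=(200, 3))
X = np.vstack([justified, unjustified])

# Unsupervised step: find the two naturally occurring clusters,
# which should roughly separate justified from unjustified engagement.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# Rule-learning step: distill the cluster labels into an interpretable
# model whose thresholds can be read off as candidate moral rules.
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, labels)
```

A shallow decision tree is used here because its learned thresholds (e.g. a split on `civilian_proximity`) are human-readable, which matches the abstract's goal of deriving explicit moral rules rather than an opaque classifier.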
