Artificial Intelligence and Law

Norms and value based reasoning: justifying compliance and violation



Abstract

There is an increasing need for norms to be embedded in technology as the widespread deployment of applications such as autonomous driving, warfare and big data analysis for crime fighting and counter-terrorism becomes ever closer. Current approaches to norms in multi-agent systems tend either to simply make prohibited actions unavailable, or to provide a set of rules (principles) which the agent is obliged to follow, either as part of its design or to avoid sanctions and punishments. In this paper we argue for the position that agents should be equipped with the ability to reason about a system's norms, by reasoning about the social and moral values that norms are designed to serve; that is, perform the sort of moral reasoning we expect of humans. In particular we highlight the need for such reasoning when circumstances are such that the rules should arguably be broken, so that the reasoning can guide agents in deciding whether to comply with the norms and, if violation is desirable, how best to violate them. One approach to enabling this is to make use of an argumentation scheme based on values and designed for practical reasoning: arguments for and against actions are generated using this scheme and agents choose between actions based on their preferences over these values. Moral reasoning then requires that agents have an acceptable set of values and an acceptable ordering on their values. We first discuss how this approach can be used to think about and justify norms in general, and then discuss how this reasoning can be used to think about when norms should be violated, and the form this violation should take. We illustrate how value based reasoning can be used to decide when and how to violate a norm using a road traffic example. We also briefly consider what makes an ordering on values acceptable, and how such an ordering might be determined.
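The mechanism described in the abstract, an argumentation scheme for practical reasoning in which arguments for and against actions appeal to the values they promote or demote, and the agent decides according to its ordering over those values, can be illustrated with a small sketch. The Python snippet below is a simplified illustration only, not the paper's scheme; the road-traffic scenario, the value names and the ranking are assumptions made for the example.

# A minimal sketch, not the paper's formalism: each candidate action carries
# arguments for and against it, grounded in the values it promotes or demotes,
# and the agent resolves the conflict with its own ordering over those values.
# The scenario, value names and ranking below are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    promotes: set   # values advanced by performing the action
    demotes: set    # values set back by performing the action

def choose(actions, value_rank):
    """Select an action using a preference ordering over values
    (lower rank = more preferred value)."""
    def best(values):
        # Rank of the most preferred value in the set; worst possible if empty.
        return min((value_rank[v] for v in values), default=len(value_rank))
    # An action survives if its strongest promoted value is at least as
    # preferred as the strongest value it demotes.
    survivors = [a for a in actions if best(a.promotes) <= best(a.demotes)]
    return min(survivors or actions, key=lambda a: best(a.promotes))

# Road-traffic style example: an ambulance is stuck behind the agent's car.
value_rank = {"life": 0, "legality": 1, "convenience": 2}

actions = [
    Action("comply: stay in lane", promotes={"legality"}, demotes={"life"}),
    Action("violate: mount the kerb to let the ambulance pass",
           promotes={"life"}, demotes={"legality", "convenience"}),
]

print(choose(actions, value_rank).name)
# -> the violating action is chosen because this agent ranks life above
#    legality; an agent with the reverse ordering would comply.

Swapping the ranking of "life" and "legality" flips the decision, which mirrors the abstract's point that moral reasoning of this kind depends on the agent having an acceptable set of values and an acceptable ordering over them.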


