AI & Society

Virtuous vs. utilitarian artificial moral agents

Abstract

Given that artificial moral agents—such as autonomous vehicles, lethal autonomous weapons, and automated trading systems—are now part of the socio-ethical equation, we should morally evaluate their behavior. How should artificial moral agents make decisions? Is one moral theory better suited than others for machine ethics? After briefly overviewing the dominant ethical approaches for building morality into machines, this paper discusses a recent proposal, put forward by Don Howard and Ioan Muntean (2016, 2017), for an artificial moral agent based on virtue theory. While the virtuous artificial moral agent has various strengths, this paper argues that a rule-based utilitarian approach (in contrast to a strict act utilitarian approach) is superior, because it can capture the most important features of the virtue-theoretic approach while realizing additional significant benefits. Specifically, a two-level utilitarian artificial moral agent incorporating both established moral rules and a utility calculator is especially well suited for machine ethics.
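To make the two-level structure concrete, the following is a minimal, hypothetical sketch (not the authors' implementation) of a decision procedure in which established moral rules filter candidate actions first, and an explicit utility calculation breaks ties or takes over when the rules are silent or forbid everything. The rule, the actions, and the utility values are all illustrative assumptions.

```python
from dataclasses import dataclass
from typing import Callable, List, Optional

# Illustrative two-level decision procedure: level 1 applies established
# moral rules, level 2 maximizes expected utility among the actions the
# rules permit. All names and numbers below are hypothetical placeholders.

@dataclass
class Action:
    name: str
    expected_utility: float  # aggregate expected well-being (assumed given)

# A moral rule returns True (permitted), False (forbidden), or None
# (the rule does not apply to this action).
MoralRule = Callable[[Action], Optional[bool]]

def no_harm_rule(action: Action) -> Optional[bool]:
    # Hypothetical rule: forbid actions tagged as harming persons.
    if "harm" in action.name:
        return False
    return None

def choose_action(actions: List[Action], rules: List[MoralRule]) -> Action:
    # Level 1: keep only actions that no rule forbids.
    permitted = []
    for action in actions:
        verdicts = [rule(action) for rule in rules]
        if False in verdicts:
            continue
        permitted.append(action)
    # Level 2: among rule-permitted actions (or all actions if every
    # option is forbidden), pick the one with the highest expected utility.
    candidates = permitted or actions
    return max(candidates, key=lambda a: a.expected_utility)

if __name__ == "__main__":
    options = [
        Action("swerve_harm_pedestrian", expected_utility=-10.0),
        Action("brake_hard", expected_utility=2.0),
        Action("maintain_course", expected_utility=1.0),
    ]
    best = choose_action(options, [no_harm_rule])
    print(best.name)  # -> brake_hard
```

In this sketch the rule layer plays the role the abstract assigns to established moral rules, while the fallback maximization stands in for the utility calculator; a real agent would, of course, need a far richer representation of rules, consequences, and uncertainty.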
