AI and Ethics

Ought we align the values of artificial moral agents?


Abstract

In the near future, the capabilities of commonly used artificial systems will reach a level at which we will be able to permit them to make moral decisions autonomously as part of their proper daily functioning. Autonomous cars, personal assistants, household robots, stock-trading bots, and autonomous weapons are examples of the types of systems that will deal with simple to complex moral situations requiring some level of moral judgment. In the research field of machine ethics, we distinguish several types of artificial moral agents, each of which has a different level of moral agency. In this paper, we focus on the moral agency of Explicit and Full-blown artificial moral agents. We form an opinion regarding their level of moral agency, and then examine the question of whether it is morally right to align the values of (artificial) moral agents. If we assume, or are able to determine, that certain types of artificial agents are indeed moral agents, then we ought to examine whether it is morally right to construct them in such a way that they are "committed" to human values. We discuss an analogy to human moral agents and the implications of granting or denying moral agency to artificial agents.
