Annual German Conference on Artificial Intelligence

Safety Constraints and Ethical Principles in Collective Decision Making Systems



Abstract

The future will see autonomous machines acting in the same environment as humans, in areas as diverse as driving, assistive technology, and health care. Think of self-driving cars, companion robots, and medical diagnosis support systems. We also believe that humans and machines will often need to work together and agree on common decisions, so hybrid collective decision-making systems will be greatly needed. In this scenario, both machines and collective decision-making systems should follow some form of moral values and ethical principles (appropriate to where they will act, but always aligned with humans'), as well as safety constraints. In fact, humans would more readily accept and trust machines that behave as ethically as other humans in the same environment. These principles would also make it easier for machines to determine their actions and to explain their behavior in terms understandable by humans. Moreover, machines and humans will often need to make decisions together, either through consensus or by reaching a compromise, and this would be facilitated by shared moral values and ethical principles.

