
Mechanizing modal psychology

Abstract

Machines are becoming more capable of substantively interacting with human beings, both as part of simple dyads and within the confines of our complex social structures. Thought must be given to how their behavior might be regulated with respect to the norms and conventions by which we live. This is certainly true for the military domain [1], but it is no less true for eldercare, health care, disaster relief, and law enforcement: all areas where robotic systems are poised to make a tremendous impact in the near future. But how should we inculcate sensitivity to normative considerations in the next generation of intelligent systems? I argue here for an approach to building moral machines grounded in cognitive architectural considerations, and specifically in the dynamics of how alternatives are represented and reasoned over. After examining some recent results in the empirical literature on human moral judgment, I suggest some desiderata for knowledge representation and reasoning tools that may offer the means to capture some of the foundations of human moral cognition.
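The abstract does not specify how a system might represent alternatives or screen them against norms; the toy sketch below is a purely illustrative assumption of one minimal scheme (all names and the norm-filtering logic are hypothetical, not the paper's architecture), showing how candidate actions could be enumerated and marked permissible only when they violate no norm.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

# Illustrative assumption only: one simple way a reasoner might represent
# alternatives and evaluate them against normative constraints.

@dataclass
class Alternative:
    """A candidate course of action together with its salient outcomes."""
    description: str
    outcomes: Dict[str, object]

@dataclass
class Norm:
    """A normative constraint an alternative must satisfy to be permissible."""
    label: str
    satisfied_by: Callable[[Alternative], bool]

def permissible(alt: Alternative, norms: List[Norm]) -> bool:
    """An alternative is permissible if it violates none of the norms."""
    return all(n.satisfied_by(alt) for n in norms)

if __name__ == "__main__":
    norms = [
        Norm("avoid harm", lambda a: a.outcomes.get("harm_caused", 0) == 0),
        Norm("fulfil duty of care", lambda a: a.outcomes.get("duty_fulfilled", False)),
    ]
    alternatives = [
        Alternative("wait for human assistance",
                    {"harm_caused": 0, "duty_fulfilled": False}),
        Alternative("administer medication now",
                    {"harm_caused": 0, "duty_fulfilled": True}),
    ]
    for alt in alternatives:
        verdict = "permissible" if permissible(alt, norms) else "impermissible"
        print(f"{alt.description}: {verdict}")
```

Running the script prints a verdict for each alternative; richer treatments of the dynamics the abstract alludes to (graded judgments, conflicting norms, modal operators) would require a more expressive representation than this filter.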
