As autonomous systems become increasingly pervasive, they often have to make decisions involving moral and ethical values. Many approaches to incorporating moral values into autonomous decision-making rest on some form of logical deduction. However, we argue that for such decision-making to be persuasive to humans, it needs to reflect human values and judgments. Drawing on insights from our ongoing research, which uses features of the blackboard architecture for a context-aware recommender system and for a legal decision-making system that incorporates supra-legal aspects, we aim to explore whether this architecture can also be adapted to implement a moral decision-making system that generates rationales persuasive to humans. Our vision is that such a system could serve as an advisory system that considers a situation from different moral perspectives and generates the ethical pros and cons of taking a particular course of action in a given context.