This paper begins the exploration of a new research paradigm for machine ethicists: a systematic focus on the mental representations and processes that produce commonsense moral judgments of the variety that all normally developed humans seem to be capable of. We assume that formally capturing the relevant conceptual repertoire, along with developing properly parameterized inference mechanisms, satisfies the necessary and sufficient conditions for building a machine equipped with something like robust moral commonsense. After discussing the various advantages and challenges of taking this particular tack on machine ethics, we explore a case study involving the interplay of intuitions about freedom, responsibility, and the self. Specifically, we examine recent results in experimental philosophy that provide a richer picture of the set of concepts involved in moral judgment, and speculate that some of the trends existing across the data are explicable in light of the cognitive architecture of mental state attribution, or mindreading, as we shall refer to it. We suggest that, alongside machine ethicists' work on implementing meta-ethical principles generated in the armchair, we ought to pursue the formalization of folk intuitions about freedom and agency to move us closer toward moral machines. So long as a robot has something like human folk beliefs about freedom and agency, and can deploy these believably in the service of moral evaluation, it looks as if we might avoid the dispute about the correct (meta)ethic to adopt in favor of outright trickery: a fitting strategy for this celebration of Alan Turing's life and work.