Everyday human communication relies on a variety of mechanisms, such as spoken language, facial expressions, body pose, and gestures. Facial expressions are one of the main communication channels and convey large amounts of information between human dialogue partners [22]. The analysis and synthesis of facial expressions are therefore important steps towards intuitive human-machine interaction and form valuable research targets. We present a system that tackles both challenges. It relies on a fully automated, model-based, real-time capable approach that distinguishes universal facial expressions and their intensities in camera images. Facial expression synthesis is conducted via the robot head EDDIE, a flexible low-cost emotion display with 23 degrees of freedom. The system supports static facial expressions at continuous intensities as well as smooth transitions based on the circumplex model of affect. Miniature off-the-shelf mechatronic components provide high functionality at low cost. Evaluations conducted in a user study show that emotions displayed by EDDIE are well recognized by humans. By combining facial expression recognition with display on the robot, we present a first demonstration in which the robot mirrors the human's emotions, as a basis for further research on emotional closed-loop systems.