Laughter is an important social signal with various communicative functions (Chapman 1983). Humans laugh at humorous stimuli or to mark their pleasure when receiving praise (Provine 2001); they also laugh to mask embarrassment (Huber and Ruch 2007) or to express cynicism. Laughter can act as a social indicator of in-group belonging (Adelsward 1989); it can work as a speech regulator during conversation (Provine 2001); and, being highly contagious, it can be used to elicit laughter in interlocutors (Provine 2001). Endowing machines with laughter capabilities is a crucial challenge in developing virtual agents and robots able to act as companions, coaches, or supporters in a more natural manner. So far, however, few attempts have been made to model and implement laughter for virtual agents and robots.

In our demo, LoL, a user interacts with a virtual agent able to copy and adapt its laughing and expressive behaviors on-the-fly. Our aim is to study how such copying capabilities contribute to enhancing the user's experience of the interaction. The user listens to funny audio stimuli in the presence of a laughing agent: as the funniness of the audio increases, the agent laughs, and the quality of its body movement (the direction and amplitude of its laughter movements) is modulated on-the-fly by the user's body features.

The architecture of LoL is shown in Figure 1 and comprises two main modules: the Detection Platform, implemented with EyesWeb XMI, a modular application that allows both experts (e.g., researchers in computer engineering) and non-experts (e.g., artists) to create multimodal applications in a visual way (Mancini et al. 2014); and the Virtual Agent, designed with the Greta agent platform (Niewiadomski et al. 2009). The two modules communicate via ActiveMQ messages.
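To make the modulation step concrete, the following is a minimal illustrative sketch (not the actual LoL implementation) of how the agent's laughter animation parameters could be driven by the funniness of the stimulus and by the user's body features. All function and parameter names, value ranges, and the mapping itself are assumptions introduced here for illustration only.

```python
def modulate_laughter(funniness, user_energy, user_lean):
    """Map stimulus funniness and user body features to hypothetical
    laughter animation parameters.

    funniness   -- perceived funniness of the audio stimulus, in [0, 1]
    user_energy -- overall quantity of the user's body motion, in [0, 1]
    user_lean   -- user's trunk lean direction, in [-1, 1] (left..right)

    Returns a dict with the agent's laughter intensity, movement
    amplitude, and movement direction (all hypothetical parameters).
    """
    # The agent laughs more intensely as funniness rises.
    intensity = max(0.0, min(1.0, funniness))
    # Movement amplitude copies the user's motion energy, scaled by intensity.
    amplitude = intensity * (0.5 + 0.5 * user_energy)
    # Movement direction mirrors the user's trunk lean.
    direction = user_lean
    return {"intensity": intensity, "amplitude": amplitude, "direction": direction}
```

For example, a highly funny stimulus (0.8) combined with an energetic user (1.0) leaning slightly to the right (0.2) yields high-intensity, large-amplitude laughter movements oriented toward the user's lean.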