While speech-driven animation for lip-synching and facial expression synthesis from speech has previously received much attention [1, 2], there is little or no prior work on automatically generating non-verbal actions such as laughing and crying from an audio signal. In this article, initial results from a system designed to address this issue are presented.