In this paper we propose a new HMM-based framework for exploring realtime gesture-to-gesture mapping strategies. The framework enables the realtime HMM-based recognition of a gesture sequence from a subset of its dimensions, the covariance-based mapping of the gesture stylistics from this subset onto the remaining dimensions, and the realtime synthesis of those remaining dimensions from their corresponding HMMs. This idea is embedded in a proof-of-concept prototype that "reconstructs" the lower-body dimensions of a walking sequence from the upper-body gestures in realtime. To achieve this reconstruction, we adapt several machine learning tools from speech processing research. Notably, we adapted the HTK toolkit to motion-capture data and modified MAGE, an HTS-based library for reactive speech synthesis, to accommodate our use case. We also adapted a covariance-based mapping strategy, originally used in the articulatory inversion stage of silent speech interfaces, to transfer stylistic information from the upper-body to the lower-body statistical models. The main achievement of this work is to show that, thanks to the mapping function applied at the state level, this reconstruction process transfers the inherent stylistics of the input gestures onto the synthesized motion.
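To make the covariance-based mapping concrete, the following is a minimal sketch of the standard conditional-Gaussian formulation such a state-level mapping typically relies on: given a joint Gaussian over observed (upper-body) and unobserved (lower-body) feature dimensions, the expected lower-body vector is obtained from the cross-covariance. The function name, dimensions, and toy parameters here are illustrative assumptions, not the paper's actual feature layout or implementation.

```python
import numpy as np

def conditional_gaussian_map(x, mu_x, mu_y, cov_xx, cov_yx):
    """Expected value of the unobserved sub-vector y given the
    observed sub-vector x under a joint Gaussian state model:
        E[y | x] = mu_y + cov_yx @ inv(cov_xx) @ (x - mu_x)
    """
    return mu_y + cov_yx @ np.linalg.solve(cov_xx, x - mu_x)

# Toy joint Gaussian over 2 "upper-body" and 2 "lower-body" dimensions.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
cov = A @ A.T + 4 * np.eye(4)          # positive-definite joint covariance
mu = np.array([0.5, -0.2, 1.0, 0.3])   # joint mean

mu_x, mu_y = mu[:2], mu[2:]            # split into observed / unobserved parts
cov_xx = cov[:2, :2]                   # covariance of observed dims
cov_yx = cov[2:, :2]                   # cross-covariance (unobserved vs observed)

x_obs = np.array([1.2, -0.5])          # observed upper-body feature vector
y_hat = conditional_gaussian_map(x_obs, mu_x, mu_y, cov_xx, cov_yx)
print(y_hat.shape)  # → (2,)
```

In an HMM setting, one such mapping would be applied per state, using that state's Gaussian parameters, which is how stylistic variation in the observed dimensions can modulate the synthesized ones.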