Modelling facial dynamics, as well as recovering the latent dimensions that correspond to them, is of paramount importance for many tasks in facial behaviour analysis. Currently, facial dynamics are analysed mainly by applying linear techniques to sparse facial tracks. In this paper we propose, to the best of our knowledge, the first methodology for extracting low-dimensional latent dimensions that correspond to facial dynamics (i.e., the motion of facial parts). To this end, we develop unsupervised and supervised deep autoencoder architectures that extract features corresponding to facial dynamics. We demonstrate the usefulness of the proposed approach on various facial behaviour datasets.
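As a rough illustration of the encode/decode idea underlying the approach, the following is a minimal sketch of an autoencoder compressing toy trajectory data into a low-dimensional latent space. All details here are assumptions for illustration: the data are random, the model is a single linear encoder/decoder pair trained by plain gradient descent, whereas the paper's actual architectures are deep and include a supervised variant.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for facial-track data: 200 frames, 20 features that
# truly live in a 3-dimensional subspace, plus a little noise.
n_frames, n_features, n_latent = 200, 20, 3
latent_true = rng.normal(size=(n_frames, n_latent))
mixing = rng.normal(size=(n_latent, n_features))
X = latent_true @ mixing + 0.01 * rng.normal(size=(n_frames, n_features))

# Linear encoder/decoder weights (no biases, for brevity).
W_enc = 0.1 * rng.normal(size=(n_features, n_latent))
W_dec = 0.1 * rng.normal(size=(n_latent, n_features))

lr = 0.01
for _ in range(2000):
    Z = X @ W_enc              # encode: low-dimensional latent features
    X_hat = Z @ W_dec          # decode: reconstruct the input
    err = X_hat - X
    # Gradients of the mean squared reconstruction error.
    grad_dec = Z.T @ err / n_frames
    grad_enc = X.T @ (err @ W_dec.T) / n_frames
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

mse = float(np.mean((X @ W_enc @ W_dec - X) ** 2))
print(f"latent shape: {Z.shape}, reconstruction MSE: {mse:.4f}")
```

After training, the rows of `Z` play the role of the recovered latent dimensions; in the paper's setting these would correspond to interpretable facial motions rather than the synthetic factors used here.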