The paper presents the results of recent experiments in audio-visual speech recognition for two widely spoken Slavonic languages: Russian and Czech. It describes the applied test tasks, the collection and pre-processing of multimodal databases, methods for visual feature extraction (geometric shape-based features as well as DCT- and PCA-based pixel parameterization), and models of audio-visual recognition (concatenation of feature vectors and multi-stream models). Prototype applications built on the audio-visual speech recognition engine are aimed mainly at the market of intelligent applications such as inquiry machines, video-conference communication, and the control of moving objects in noisy environments.
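The abstract gives no implementation details, but the pixel-based DCT and PCA parameterization it mentions can be illustrated with a minimal sketch. Everything below is an assumption for illustration only: the function names, the 32×32 mouth-region size, the number of retained DCT coefficients, and the PCA dimensionality are not taken from the paper.

```python
import numpy as np
from scipy.fftpack import dct

def dct_features(roi, keep=8):
    """2-D DCT of a grayscale mouth ROI; keep the low-frequency
    top-left keep x keep block of coefficients as the feature vector."""
    coeffs = dct(dct(roi, norm='ortho', axis=0), norm='ortho', axis=1)
    return coeffs[:keep, :keep].ravel()

def pca_reduce(features, n_components=12):
    """Project frame-wise feature vectors onto their top principal
    components (plain SVD-based PCA, no external ML library)."""
    mean = features.mean(axis=0)
    centered = features - mean
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:n_components].T

# Simulated sequence of 32x32 mouth ROIs over 50 video frames
# (random data standing in for real cropped lip images).
rng = np.random.default_rng(0)
frames = rng.random((50, 32, 32))
feats = np.stack([dct_features(f) for f in frames])  # shape (50, 64)
reduced = pca_reduce(feats)                          # shape (50, 12)
```

The resulting per-frame vectors would then be combined with the acoustic features, either by simple concatenation or within a multi-stream model, as the abstract describes.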