We investigate the use of iVector-based rapid adaptation for recognition in mobile speech applications. We show that on this task, the proposed approach has two merits over a linear-transform based approach. First, it provides larger error reductions (11% vs. 6%), as it is better suited to the short utterances and varied recording conditions. Second, it obviates the need for speaker data pooling and/or clustering and the substantial infrastructure complexity that accompanies it. Empirical results show that although the proposed utterance-based training algorithm leads to large data fragmentation, the resulting model re-estimation performs well. Our implementation within the MapReduce framework allows processing of the large volume of statistics that this approach gives rise to when applied to a database of thousands of hours.
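The utterance-level adaptation pattern the abstract refers to can be illustrated with a minimal sketch. The function names, dimensions, and the simple mean-based summary below are assumptions for illustration only: a real iVector is estimated from Baum-Welch statistics against a universal background model, not from a feature mean. The sketch shows only the surrounding pattern, computing one fixed-length vector per utterance and appending it to every frame so the acoustic model can condition on it:

```python
import numpy as np

def utterance_embedding(frames, dim=4):
    # Illustrative stand-in for an iVector: a fixed-length summary
    # of the whole utterance (here simply the mean of the first
    # `dim` feature coefficients). A real iVector would be inferred
    # from sufficient statistics collected against a background model.
    return frames[:, :dim].mean(axis=0)

def adapt_features(frames, dim=4):
    # Append the per-utterance embedding to every frame: the pattern
    # behind rapid, utterance-level adaptation, with no need to pool
    # or cluster data across speakers.
    emb = utterance_embedding(frames, dim)
    return np.hstack([frames, np.tile(emb, (frames.shape[0], 1))])

# Example: 100 frames of 13-dimensional features become 17-dimensional
# adapted features (13 original + 4 embedding components).
frames = np.random.randn(100, 13)
adapted = adapt_features(frames)
print(adapted.shape)  # (100, 17)
```

Because the embedding is computed per utterance, each utterance contributes its own training statistics; this is the data fragmentation the abstract notes, which a MapReduce accumulation step can aggregate at scale.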