Language Model Combination and Adaptation Using Weighted Finite State Transducers
Abstract
In speech recognition systems, language models (LMs) are often constructed by training and combining multiple n-gram models. These can be used either to represent different genres or tasks found in diverse text sources, or to capture the stochastic properties of different linguistic symbol sequences, for example syllables and words. Unsupervised LM adaptation may also be used to further improve robustness to varying styles or tasks. When using these techniques, extensive software changes are often required. In this paper, an alternative and more general approach based on weighted finite state transducers (WFSTs) is investigated for LM combination and adaptation. As it is entirely based on well-defined WFST operations, minimal changes to decoding tools are needed. A wide range of LM combination configurations can be flexibly supported. An efficient on-the-fly WFST decoding algorithm is also proposed. Significant error rate gains of 7.3% relative were obtained on a state-of-the-art broadcast audio recognition task using a history-dependently adapted multi-level LM modelling both syllable and word sequences.
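As an illustration of combining LMs through standard WFST operations, the following is a minimal sketch (not taken from the paper) of two-way LM interpolation by weighted union in the tropical semiring: a new start state branches into each component LM WFST over an epsilon arc weighted by the negative log of that LM's interpolation weight, so Viterbi decoding over the combined machine approximates linear interpolation of the two models. The Wfst class and union_interpolate function are hypothetical names introduced here purely for illustration.

import math

class Wfst:
    """Minimal WFST in the tropical semiring (arc weights are -log probabilities)."""

    def __init__(self):
        self.start = None
        self.arcs = {}    # state -> list of (label, weight, next_state); label None = epsilon
        self.finals = {}  # state -> final weight
        self._next_state = 0

    def add_state(self):
        s = self._next_state
        self._next_state += 1
        self.arcs[s] = []
        return s

    def add_arc(self, src, label, weight, dst):
        self.arcs[src].append((label, weight, dst))

    def set_final(self, state, weight=0.0):
        self.finals[state] = weight


def union_interpolate(lm1, lm2, lam):
    """Combine two LM WFSTs by weighted union: a fresh start state enters each
    component over an epsilon arc carrying -log(interpolation weight)."""
    out = Wfst()
    out.start = out.add_state()
    for lm, prior in ((lm1, lam), (lm2, 1.0 - lam)):
        offset = out._next_state
        for _ in lm.arcs:                       # copy states (assumed numbered 0..n-1)
            out.add_state()
        for src, arcs in lm.arcs.items():       # copy arcs with renumbered states
            for label, weight, dst in arcs:
                out.add_arc(src + offset, label, weight, dst + offset)
        for state, weight in lm.finals.items():
            out.set_final(state + offset, weight)
        # epsilon arc from the new start state into this component LM
        out.add_arc(out.start, None, -math.log(prior), lm.start + offset)
    return out


# Example: two unigram LMs over the single symbol "a", interpolated with weights 0.7/0.3.
g1, g2 = Wfst(), Wfst()
for g, p in ((g1, 0.8), (g2, 0.5)):
    s0, s1 = g.add_state(), g.add_state()
    g.start = s0
    g.add_arc(s0, "a", -math.log(p), s1)
    g.set_final(s1)
combined = union_interpolate(g1, g2, 0.7)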