Venue: IEEE International Conference on Acoustics, Speech, and Signal Processing

INTRODUCING ARTICULATORY ANCHOR-POINT TO ANN TRAINING FOR CORRECTIVE LEARNING OF PRONUNCIATION



Abstract

We describe a computer-assisted pronunciation training (CAPT) system that visualizes the articulatory gestures estimated from the learner's speech. Typical CAPT systems cannot indicate how the learner should correct his or her articulation. The proposed system lets the learner study how to correct pronunciation by comparing a wrongly pronounced gesture with the corresponding correctly pronounced gesture. In this system, a multi-layer neural network (MLN) converts the learner's speech into vocal-tract coordinates derived from Magnetic Resonance Imaging data, and an animation is then generated from these coordinate values. Moreover, we improved the animations by introducing a per-phoneme anchor-point into MLN training. In our experiment, the new system generated accurate CG animations even from English speech produced by Japanese speakers.
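The abstract describes a network that regresses from acoustic features to vocal-tract coordinates, with a fixed anchor-point per phoneme pulling the targets toward a canonical gesture. The sketch below illustrates that idea with a minimal one-hidden-layer network trained on synthetic data; all sizes, anchor values, and the training setup are assumptions for illustration, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 12-dim acoustic features -> 8 vocal-tract coordinates.
N_FEAT, N_HIDDEN, N_COORD = 12, 16, 8

# Anchor-points: one fixed articulatory coordinate vector per phoneme (assumed values).
anchors = {"a": rng.normal(size=N_COORD), "i": rng.normal(size=N_COORD)}

# Synthetic training frames, each labelled with its phoneme.
X = rng.normal(size=(200, N_FEAT))
labels = rng.choice(list(anchors), size=200)
# Target = the phoneme's anchor-point plus small frame-level variation,
# so training pulls the network output toward a canonical gesture per phoneme.
Y = np.stack([anchors[p] for p in labels]) + 0.05 * rng.normal(size=(200, N_COORD))

# One-hidden-layer network (tanh hidden units, linear output).
W1 = rng.normal(scale=0.1, size=(N_FEAT, N_HIDDEN)); b1 = np.zeros(N_HIDDEN)
W2 = rng.normal(scale=0.1, size=(N_HIDDEN, N_COORD)); b2 = np.zeros(N_COORD)

losses, lr = [], 0.05
for _ in range(300):
    H = np.tanh(X @ W1 + b1)           # hidden activations
    pred = H @ W2 + b2                 # predicted vocal-tract coordinates
    err = pred - Y
    losses.append(float(np.mean(err ** 2)))
    # Backpropagation of the mean-squared error.
    gW2 = H.T @ err / len(X); gb2 = err.mean(0)
    dH = (err @ W2.T) * (1 - H ** 2)   # derivative through tanh
    gW1 = X.T @ dH / len(X); gb1 = dH.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1

print(f"loss: {losses[0]:.4f} -> {losses[-1]:.4f}")
```

With anchor-point targets, frames of the same phoneme share one reference gesture, which smooths the predicted coordinate trajectories used to drive the animation.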
