IIAI International Congress on Advanced Applied Informatics

Lip-Movement Based Speaker Recognition Focused on the Distributed Structure of Lip-Movement Data



Abstract

A speaker recognition method is presented that exploits the distributed structure of lip-movement data. It addresses the accuracy degradation of a previous method based on the kernel mutual subspace method, which occurs when genuine samples lie close to those of other persons. This degradation results from the strong nonlinearity of the distribution and is reduced by increasing the weights of samples near the cluster centers. Evaluation demonstrated that the proposed method outperforms other lip-movement-based person authentication methods.
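The abstract only sketches the weighting idea at a high level, so the following is a minimal, hypothetical illustration rather than the authors' actual algorithm: it assumes lip-movement feature vectors, a Gaussian proximity weighting toward the cluster center, and an RBF kernel whose weighted Gram matrix is decomposed to obtain a kernel subspace basis. All function and parameter names (`center_proximity_weights`, `weighted_rbf_kernel`, `tau`, `gamma`) are illustrative assumptions, not from the paper.

```python
# Hypothetical sketch of "increase the weights of samples near the cluster
# centers" before kernel subspace extraction. Not the paper's method.
import numpy as np

def center_proximity_weights(X, center, tau=1.0):
    """Weight each sample by its proximity to the cluster center.

    X      : (n_samples, n_features) lip-movement feature vectors
    center : (n_features,) cluster center (e.g., the mean of X)
    tau    : decay parameter; smaller tau concentrates weight near the center
    """
    d = np.linalg.norm(X - center, axis=1)       # distance to the center
    return np.exp(-d**2 / (2.0 * tau**2))        # larger weight near the center

def weighted_rbf_kernel(X, weights, gamma=0.5):
    """RBF Gram matrix with per-sample weights applied symmetrically."""
    sq = np.sum(X**2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2.0 * X @ X.T))
    return (weights[:, None] * K) * weights[None, :]

# Toy usage: build a weighted kernel subspace basis for one speaker's samples.
rng = np.random.default_rng(0)
X = rng.normal(size=(30, 12))                    # placeholder lip-movement features
w = center_proximity_weights(X, X.mean(axis=0), tau=2.0)
K = weighted_rbf_kernel(X, w)
# Leading eigenvectors of the weighted Gram matrix span the kernel subspace.
eigvals, eigvecs = np.linalg.eigh(K)
basis = eigvecs[:, ::-1][:, :5]                  # top-5 components
```

Down-weighting samples far from the cluster center limits the influence of outlying genuine samples that overlap with other speakers, which is the intuition the abstract attributes to the proposed method.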
