Text-Independent Speaker Verification Based on Triplet Loss

Abstract

An improved end-to-end text-independent speaker verification model is proposed in this paper. LSTM networks are employed to extract speaker embeddings, and the triplet loss is used to optimize the training of the network, which makes the training of the speaker verification model more efficient while keeping the computational complexity relatively low. With the triplet loss, the proposed model achieves better EER (equal error rate) performance.
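The core idea described in the abstract can be sketched as follows. This is a minimal, hypothetical illustration of the triplet loss on speaker embeddings, not the paper's implementation: the function name, the margin value, and the use of squared Euclidean distance on fixed-length embedding vectors are all assumptions; in the paper the embeddings would come from an LSTM network, which is omitted here.

```python
import numpy as np


def triplet_loss(anchor, positive, negative, margin=0.2):
    """Hypothetical triplet loss sketch for speaker embeddings.

    anchor:   embedding of an utterance from the target speaker
    positive: embedding of another utterance from the same speaker
    negative: embedding of an utterance from a different speaker
    margin:   illustrative margin value (an assumption, not from the paper)

    The loss pushes the anchor-positive distance to be smaller than
    the anchor-negative distance by at least the margin.
    """
    d_ap = np.sum((anchor - positive) ** 2)  # squared distance, same speaker
    d_an = np.sum((anchor - negative) ** 2)  # squared distance, different speaker
    return max(0.0, d_ap - d_an + margin)


# Toy example: when the same-speaker pair is already much closer than the
# different-speaker pair, the loss is zero; otherwise it is positive.
a = np.array([1.0, 0.0])
p = np.array([1.0, 0.0])
n = np.array([0.0, 1.0])
print(triplet_loss(a, p, n))  # easy triplet: loss is 0.0
print(triplet_loss(a, n, p))  # hard triplet: loss is positive
```

Training would minimize this loss over many such triplets, so that embeddings of the same speaker cluster together while different speakers are pushed apart by at least the margin.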

