Journal: Neural computing & applications

A multi-stack RNN-based neural machine translation model for English to Pakistan sign language translation



Abstract

Sign languages are gesture-based languages used by the deaf communities of the world. Every country has a different sign language, and there are more than 200 sign languages worldwide. American Sign Language (ASL), British Sign Language (BSL), and German Sign Language (DGS) are well-studied examples. Because sign languages differ from natural languages in grammatical structure and word order, deaf people find it difficult to read and understand written text in natural languages. To improve the accessibility of written text for deaf readers, translation models have been developed that convert natural language text into the corresponding sign language gestures. Most earlier natural-to-sign-language translation models rely on rule-based approaches. More recently, neural machine translation models have been proposed for ASL, BSL, DGS, and Arabic sign language; however, most of these models achieve low accuracy scores. This research presents a novel multi-stack RNN-based neural machine translation model for natural-to-sign-language translation. The proposed model is based on an encoder-decoder architecture and incorporates an attention mechanism and word embeddings to improve translation quality. Rigorous experiments compare the proposed multi-stack RNN-based model against baseline models, using a sizeable translation corpus of nearly 50,000 sentences for Pakistan Sign Language (PSL). The performance of the proposed neural machine translation model for PSL has been evaluated with well-established measures, including the Bilingual Evaluation Understudy (BLEU) score and the Word Error Rate (WER). The results show that a multi-stack gated recurrent unit (GRU) RNN employing the Bahdanau attention mechanism and GloVe embeddings performs best, achieving a BLEU score of 0.83 and a WER of 0.17, outperforming existing translation models.
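The Bahdanau (additive) attention mechanism named above scores each encoder state against the current decoder state and mixes the encoder states into a context vector. A minimal numpy sketch of that scoring step, with all dimensions, weight matrices, and names purely hypothetical (this is not the authors' implementation):

```python
import numpy as np

def bahdanau_attention(query, keys, W_q, W_k, v):
    """Additive attention sketch.
    query: (d,) current decoder hidden state.
    keys:  (T, d) encoder hidden states over T source tokens.
    Returns (attention weights over T, context vector of size d)."""
    # Additive score: score_t = v . tanh(W_q q + W_k k_t)
    scores = np.tanh(query @ W_q + keys @ W_k) @ v      # (T,)
    weights = np.exp(scores - scores.max())             # numerically stable
    weights /= weights.sum()                            # softmax over time
    context = weights @ keys                            # weighted sum, (d,)
    return weights, context

# Toy dimensions and random parameters, for illustration only.
rng = np.random.default_rng(0)
d, T = 8, 5
query = rng.standard_normal(d)
keys = rng.standard_normal((T, d))
W_q, W_k = rng.standard_normal((d, d)), rng.standard_normal((d, d))
v = rng.standard_normal(d)

weights, context = bahdanau_attention(query, keys, W_q, W_k, v)
```

In a full model the context vector would be concatenated with the decoder input at each step; here only the attention arithmetic is shown.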
The proposed model is exposed through a software system that renders the translated sentences as PSL gestures using an avatar. A usability evaluation was also performed to assess how effectively the avatar-based output helps compensate for the hearing deficit of deaf users. The results show that it works well at different granularity levels.
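The WER figure reported in the abstract follows the standard definition: word-level edit distance between reference and hypothesis, divided by the reference length. A minimal sketch of that computation (illustrative only, not the paper's evaluation code):

```python
def wer(reference, hypothesis):
    """Word Error Rate: word-level Levenshtein distance / reference length."""
    r, h = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between r[:i] and h[:j]
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i
    for j in range(len(h) + 1):
        d[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            cost = 0 if r[i - 1] == h[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution
    return d[len(r)][len(h)] / max(len(r), 1)

print(wer("the cat sat", "the cat sat"))   # 0.0
print(wer("the cat sat", "the cat sits"))  # 1 substitution / 3 words ≈ 0.333
```

A lower WER is better; the reported WER of 0.17 means roughly one word-level error per six reference words on average.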
