International Conference on Computational Intelligence and Knowledge Economy

Deep-Hand: A Deep Inference Vision Approach of Recognizing a Hand Sign Language using American Alphabet



Abstract

Sign language helps people with hearing or speech disabilities who cannot communicate well through spoken language, and communicating with deaf individuals is a challenge for speakers who do not know sign language. This study proposes to assist people with such disabilities by using the American Sign Language alphabet and its corresponding hand gestures, so that deaf individuals can communicate and interact with others more conveniently. The study presents a hand gesture (hand sign language) detector trained with the YOLOv3 algorithm, which detects each hand gesture and recognizes its equivalent letter of the alphabet. Tools such as LabelImg were used to annotate the data set, categorizing each hand-gesture image by its equivalent letter. In this study, Model 18, with 95.1804% training accuracy, 90.8242% validation accuracy, and an mAP of 0.8275, was used for the final testing. When a video containing different hand gestures is presented, every detected hand gesture is recognized with a confidence above 90%.
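
For readers who want a concrete picture of the video-testing step described above, the sketch below shows how a trained YOLOv3 detector can be run over video frames with OpenCV's DNN module, reporting the highest-scoring letter per frame. The file names (deep_hand.cfg, deep_hand.weights, classes.txt, hand_signs.mp4) and the 0.5 confidence threshold are illustrative assumptions, not artifacts released with the paper.

import cv2
import numpy as np

# Placeholder file names -- the authors' trained weights and config are not provided in the abstract.
CFG, WEIGHTS, CLASSES = "deep_hand.cfg", "deep_hand.weights", "classes.txt"
CONF_THRESHOLD = 0.5

net = cv2.dnn.readNetFromDarknet(CFG, WEIGHTS)
out_layers = net.getUnconnectedOutLayersNames()
classes = open(CLASSES).read().splitlines()   # one ASL letter per line, "A" .. "Z"

cap = cv2.VideoCapture("hand_signs.mp4")      # placeholder test video
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # YOLOv3 expects a 416x416 RGB blob scaled to [0, 1]
    blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416), swapRB=True, crop=False)
    net.setInput(blob)

    best_score, best_letter = 0.0, None
    for output in net.forward(out_layers):
        for det in output:                     # det = [cx, cy, w, h, objectness, class scores...]
            scores = det[5:]
            class_id = int(np.argmax(scores))
            score = float(scores[class_id])
            if score > CONF_THRESHOLD and score > best_score:
                best_score, best_letter = score, classes[class_id]

    if best_letter is not None:
        print(f"Detected letter {best_letter} with confidence {best_score:.1%}")

cap.release()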
