IEEE International Conference on Imaging Systems and Techniques

A Deep Learning Approach for Analyzing Video and Skeletal Features in Sign Language Recognition


Abstract

Sign language recognition (SLR) refers to the classification of signs with a specific meaning performed by deaf and/or hearing-impaired people in their everyday communication. In this work, we propose a deep learning based framework in which we examine and analyze the contribution of video (image and optical flow) and skeletal (body, hand and face) features to the challenging task of isolated SLR, in which each signed video corresponds to a single word. Moreover, we employ various fusion schemes in order to identify the optimal way to combine the information obtained from the various feature representations and propose a robust SLR methodology. Our experiments on two sign language datasets and comparisons with state-of-the-art SLR methods reveal the superiority of optimally combining skeletal and video features for SLR tasks.
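
The abstract refers to fusion schemes for combining video (image, optical flow) and skeletal (body, hand, face) feature representations. The sketch below is not the paper's implementation; it is a minimal late-fusion example assuming a PyTorch setup in which each modality has already been encoded into a fixed-size feature vector, with hypothetical feature dimensions, class count, and module names chosen only for illustration.

```python
# Minimal late-fusion sketch for isolated SLR (illustrative only, assumes
# per-modality features have already been extracted as fixed-size vectors).
import torch
import torch.nn as nn

class LateFusionSLR(nn.Module):
    """Project each modality, concatenate, and classify the isolated sign."""

    def __init__(self, feat_dims, num_classes, hidden_dim=256):
        super().__init__()
        # One small projection head per modality (dimensions are hypothetical).
        self.heads = nn.ModuleList(
            [nn.Sequential(nn.Linear(d, hidden_dim), nn.ReLU()) for d in feat_dims]
        )
        # Classifier over the concatenated (fused) representation.
        self.classifier = nn.Linear(hidden_dim * len(feat_dims), num_classes)

    def forward(self, feats):
        # feats: list of tensors, one per modality, each of shape (batch, dim)
        fused = torch.cat([head(f) for head, f in zip(self.heads, feats)], dim=-1)
        return self.classifier(fused)

# Example: RGB, optical-flow, and skeletal features for a 100-sign vocabulary.
model = LateFusionSLR(feat_dims=[512, 512, 128], num_classes=100)
logits = model([torch.randn(4, 512), torch.randn(4, 512), torch.randn(4, 128)])
print(logits.shape)  # torch.Size([4, 100])
```

Other fusion choices (e.g., averaging per-modality class scores instead of concatenating features) fit the same interface by changing only the forward pass.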
