Journal of visual communication & image representation

A novel dynamic gesture understanding algorithm fusing convolutional neural networks with hand-crafted features



Abstract

Dynamic gestures have attracted much attention in recent years due to their user-friendly interactive characteristics. However, accurate and efficient dynamic gesture understanding remains challenging due to complex scenes and motion information. Conventional handcrafted features are computationally cheap but can only extract low-level image features, which degrades performance in complex scenes. In contrast, deep learning-based methods have a stronger feature expression ability and can therefore capture more abstract, high-level image features; however, they critically rely on large amounts of training data. To address these issues, a novel dynamic gesture understanding algorithm based on feature fusion is proposed for accurate dynamic gesture prediction. It leverages the advantages of both handcrafted features and transfer learning. For small-scale dynamic gesture data, transfer learning is introduced to capture effective feature expressions. To precisely model the critical temporal information associated with dynamic gestures, a novel feature descriptor, AlexNet(2), is proposed for effective feature expression of dynamic gestures in the spatial and temporal domains. On this basis, a decision-level feature fusion framework based on a support vector machine (SVM) and Dempster-Shafer (DS) evidence theory is constructed to combine the handcrafted features and AlexNet(2) for high-precision dynamic gesture understanding. To verify the effectiveness and robustness of the proposed recognition algorithm, analysis and comparison experiments are performed on the public Cambridge gesture dataset and the Northwestern University hand gesture dataset. The proposed gesture recognition algorithm achieves prediction accuracies of 99.50% and 96.97% on these two datasets, respectively. Experimental results show that the proposed recognition framework exhibits better recognition performance than related prediction algorithms.
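The abstract does not spell out how the SVM scores and AlexNet(2) scores are combined at the decision level. As a minimal illustration of DS evidence fusion, the sketch below applies Dempster's rule of combination over singleton gesture-class hypotheses (a closed-world simplification; the class names, scores, and function name are hypothetical, not taken from the paper):

```python
def dempster_combine(m1, m2):
    """Fuse two mass functions over singleton class hypotheses
    using Dempster's rule (closed world, no compound hypotheses)."""
    classes = set(m1) | set(m2)
    # With singleton hypotheses only, intersections are nonempty
    # exactly when both sources assign mass to the same class.
    joint = {c: m1.get(c, 0.0) * m2.get(c, 0.0) for c in classes}
    conflict = 1.0 - sum(joint.values())  # mass on the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: sources are incompatible")
    # Normalize by 1 - K to redistribute the conflicting mass.
    return {c: v / (1.0 - conflict) for c, v in joint.items()}

# Hypothetical per-class scores from the two classifiers,
# e.g. SVM on handcrafted features vs. CNN softmax outputs.
m_handcrafted = {"flat": 0.6, "spread": 0.3, "v_shape": 0.1}
m_cnn = {"flat": 0.5, "spread": 0.4, "v_shape": 0.1}

fused = dempster_combine(m_handcrafted, m_cnn)
decision = max(fused, key=fused.get)  # "flat"
```

Because Dempster's rule multiplies agreeing masses and renormalizes away the conflict, classes on which both classifiers agree are reinforced, which is the usual motivation for decision-level DS fusion of complementary feature streams.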

