IEEE Transactions on Human-Machine Systems

Improving Human Action Recognition Using Fusion of Depth Camera and Inertial Sensors



Abstract

This paper presents a fusion approach for improving human action recognition based on two differing modality sensors consisting of a depth camera and an inertial body sensor. Computationally efficient action features are extracted from depth images provided by the depth camera and from accelerometer signals provided by the inertial body sensor. These features consist of depth motion maps and statistical signal attributes. For action recognition, both feature-level fusion and decision-level fusion are examined using a collaborative representation classifier. In feature-level fusion, features generated from the two differing modality sensors are merged before classification, while in decision-level fusion, the Dempster–Shafer theory is used to combine the classification outcomes from two classifiers, each corresponding to one sensor. The introduced fusion framework is evaluated using the Berkeley multimodal human action database. The results indicate that, because of the complementary nature of the data from these sensors, the introduced fusion approaches lead to recognition rate improvements of 2% to 23%, depending on the action, over using each sensor individually.
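The pipeline described in the abstract can be summarized in a short sketch. The Python code below is an illustrative outline only, not the authors' implementation: the depth motion map here uses a single (front) projection view rather than the three views typically used, the statistical attributes are a generic mean/std/min/max set, and the mapping from collaborative-representation residuals to Dempster–Shafer masses is an assumption chosen for simplicity. All function and variable names are hypothetical.

```python
import numpy as np

def depth_motion_map(depth_frames):
    """Sum absolute frame-to-frame differences of a depth clip (front view only)."""
    diffs = np.abs(np.diff(depth_frames.astype(np.float32), axis=0))
    return diffs.sum(axis=0).ravel()

def inertial_stats(accel):
    """Per-axis mean, std, min, max of an accelerometer clip (T x 3 array)."""
    return np.concatenate([accel.mean(0), accel.std(0), accel.min(0), accel.max(0)])

def fused_feature(depth_frames, accel):
    """Feature-level fusion: l2-normalize each modality's vector, then concatenate."""
    f_d = depth_motion_map(depth_frames)
    f_i = inertial_stats(accel)
    return np.concatenate([f_d / (np.linalg.norm(f_d) + 1e-12),
                           f_i / (np.linalg.norm(f_i) + 1e-12)])

def crc_residuals(X_train, y_train, x_test, lam=0.01):
    """Collaborative representation: regularized least squares over the training
    dictionary (rows of X_train), then per-class reconstruction residuals
    (smaller residual = better match)."""
    G = X_train @ X_train.T + lam * np.eye(X_train.shape[0])
    alpha = np.linalg.solve(G, X_train @ x_test)
    return {c: np.linalg.norm(x_test - X_train[y_train == c].T @ alpha[y_train == c])
            for c in np.unique(y_train)}

def residuals_to_masses(res):
    """Assumed mapping: turn residuals into a basic probability assignment on
    singleton classes via normalized inverse residuals."""
    scores = {c: 1.0 / (r + 1e-12) for c, r in res.items()}
    total = sum(scores.values())
    return {c: s / total for c, s in scores.items()}

def dempster_combine(m1, m2):
    """Dempster's rule for two mass functions restricted to singleton classes."""
    joint = {c: m1[c] * m2.get(c, 0.0) for c in m1}
    agreement = sum(joint.values())  # 1 - conflict
    return {c: v / (agreement + 1e-12) for c, v in joint.items()}

# Decision-level fusion: run one classifier per sensor, convert residuals to
# masses, combine with Dempster's rule, and pick the largest combined mass, e.g.
# best = max(dempster_combine(m_depth, m_inertial).items(), key=lambda kv: kv[1])[0]
```

In the feature-level path, the concatenated vectors from fused_feature would be fed to a single crc_residuals call and the class with the smallest residual selected; in the decision-level path, one classifier per sensor produces residuals that are converted to masses and combined with dempster_combine.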

