International Joint Conference on Neural Networks

A Multimodal Deep Learning Network for Group Activity Recognition



Abstract

Several studies have focused on single-person activity recognition, while the classification of group activities remains under-investigated. In this paper, we present an approach for classifying the activity performed by a group of people during daily-life tasks at work. We address the problem hierarchically: we first examine individual actions, reconstructed from data coming from wearable and ambient sensors, and then observe whether common temporal/spatial dynamics exist at the level of the group activity. We deploy a Multimodal Deep Learning Network, where the term multimodal does not refer to separately processing the different input modalities, but to extracting activity-related features for each group member and then merging them through shared layers. We evaluated the proposed approach in a laboratory environment, where employees are monitored during their normal activities. The experimental results demonstrate the effectiveness of the proposed model with respect to an SVM benchmark.
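The abstract's merging scheme (one feature-extraction branch per group member, joined by shared layers that classify the group activity) can be sketched as follows. This is an illustrative reconstruction, not the paper's actual network: all layer sizes, the number of members, and the use of plain feed-forward branches are assumptions for the sake of the example.

```python
import torch
import torch.nn as nn

class GroupActivityNet(nn.Module):
    """Sketch of the described idea: a per-member branch extracts
    activity-related features from that member's sensor stream; the
    features are concatenated and merged through shared layers that
    output group-activity class scores. Dimensions are illustrative."""

    def __init__(self, n_members=3, sensor_dim=16, feat_dim=32, n_classes=5):
        super().__init__()
        # one feature extractor per group member (wearable + ambient data)
        self.branches = nn.ModuleList(
            nn.Sequential(nn.Linear(sensor_dim, feat_dim), nn.ReLU())
            for _ in range(n_members)
        )
        # shared levels that merge the per-member features
        self.shared = nn.Sequential(
            nn.Linear(n_members * feat_dim, 64), nn.ReLU(),
            nn.Linear(64, n_classes),
        )

    def forward(self, xs):
        # xs: list of (batch, sensor_dim) tensors, one per member
        feats = [branch(x) for branch, x in zip(self.branches, xs)]
        return self.shared(torch.cat(feats, dim=1))

model = GroupActivityNet()
batch = [torch.randn(4, 16) for _ in range(3)]  # 3 members, batch of 4
logits = model(batch)
print(logits.shape)  # one class-score vector per group sample
```

A real model for this task would likely replace the feed-forward branches with temporal encoders (e.g. recurrent or convolutional layers over sensor sequences), since the abstract emphasizes temporal/spatial dynamics.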


