International Joint Conference on Neural Networks

A Multimodal Deep Learning Network for Group Activity Recognition

Abstract

Several studies have focused on single-person activity recognition, while the classification of group activities remains under-investigated. In this paper, we present an approach for classifying the activity performed by a group of people during daily life tasks at work. We address the problem hierarchically: we first examine individual actions, reconstructed from data coming from wearable and ambient sensors, and then observe whether common temporal/spatial dynamics exist at the level of the group activity. We deploy a Multimodal Deep Learning Network, where the term multimodal does not refer to processing the different input modalities separately, but to extracting activity-related features for each group member and then merging them through shared layers. We evaluated the proposed approach in a laboratory environment where employees are monitored during their normal activities. The experimental results demonstrate the effectiveness of the proposed model with respect to an SVM benchmark.
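As a rough illustration of the merging idea described in the abstract, the sketch below (PyTorch) builds one feature-extraction branch per group member over that member's sensor window and fuses the branch outputs through shared layers that predict the group activity. The branch type (an LSTM), the layer sizes, the number of members and sensor channels, and all names are assumptions made for illustration only; they are not taken from the paper.

```python
import torch
import torch.nn as nn


class GroupActivityNet(nn.Module):
    """Sketch of the described idea: one feature-extraction branch per group
    member over that member's sensor window, merged through shared layers
    that predict the group activity. All sizes are illustrative assumptions."""

    def __init__(self, n_members: int, n_channels: int, hidden_dim: int, n_classes: int):
        super().__init__()
        # One branch per group member (wearable + ambient sensor channels).
        self.branches = nn.ModuleList(
            nn.LSTM(input_size=n_channels, hidden_size=hidden_dim, batch_first=True)
            for _ in range(n_members)
        )
        # Shared levels that merge the per-member features into a group label.
        self.shared = nn.Sequential(
            nn.Linear(n_members * hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, n_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n_members, time_steps, n_channels)
        member_feats = []
        for i, branch in enumerate(self.branches):
            _, (h_n, _) = branch(x[:, i])        # per-member sequence -> last hidden state
            member_feats.append(h_n[-1])         # (batch, hidden_dim)
        merged = torch.cat(member_feats, dim=1)  # concatenate member features
        return self.shared(merged)               # group-activity logits


# Toy forward pass: 8 windows, 3 members, 100 time steps, 6 sensor channels, 5 activities.
model = GroupActivityNet(n_members=3, n_channels=6, hidden_dim=64, n_classes=5)
logits = model(torch.randn(8, 3, 100, 6))
print(logits.shape)  # torch.Size([8, 5])
```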