International Joint Conference on Neural Networks

Interpretable Parallel Recurrent Neural Networks with Convolutional Attentions for Multi-Modality Activity Modeling



Abstract

Multimodal features play a key role in wearable-sensor-based human activity recognition (HAR). Adaptively selecting the most salient features is a promising way to maximize the effectiveness of multimodal sensor data. In this regard, we propose a "collect fully and select wisely" principle together with an interpretable parallel recurrent model with convolutional attentions to improve recognition performance. We first collect modality features and the relations between each pair of features to generate activity frames, and then introduce an attention mechanism to precisely select the most prominent regions from the activity frames. The selected frames not only maximize the utilization of valid features but also reduce the number of features that must be computed. We further analyze the accuracy and interpretability of the proposed model through extensive experiments. The results show that our model achieves competitive performance on two benchmark datasets and works well in real-life scenarios.
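The two-step idea in the abstract — build an "activity frame" from pairwise relations between modality features, then use a convolutional attention to score and select the most salient region — can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: the outer-product frame construction, the fixed 2×2 scoring kernel, and the toy feature values are all assumptions introduced here.

```python
import numpy as np

def activity_frame(features):
    """Build an activity frame whose (i, j) entry encodes the relation
    between modality features i and j (here: a simple outer product)."""
    f = np.asarray(features, dtype=float)
    return np.outer(f, f)

def conv_attention(frame, kernel):
    """Score each region of the frame with a small convolution kernel,
    then softmax the scores into attention weights over regions."""
    kh, kw = kernel.shape
    h, w = frame.shape
    scores = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(scores.shape[0]):
        for j in range(scores.shape[1]):
            scores[i, j] = np.sum(frame[i:i + kh, j:j + kw] * kernel)
    # softmax over all regions (numerically stabilized)
    weights = np.exp(scores - scores.max())
    return weights / weights.sum()

# Toy features from four modalities (hypothetical values).
features = [0.2, 1.5, 0.3, 0.9]
frame = activity_frame(features)
attn = conv_attention(frame, np.ones((2, 2)))

# The highest-weighted region indexes the most salient pair of features.
i, j = np.unravel_index(np.argmax(attn), attn.shape)
```

In this toy run the attention concentrates on the region covering the two largest feature responses, which is the behavior the abstract's "select wisely" step relies on: downstream layers then only need to process the selected region rather than the full frame.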
