Fusion of Multiple Representations Extracted from a Single Sensor’s Data for Activity Recognition Using CNNs



Abstract

With the emergence of ubiquitous sensing, it has become possible to build assistive technologies that provide personalized feedback and services to people during their daily life activities. For instance, an individual's behavioral information (e.g. physical activity, location, and mood) can be detected using sensors embedded in smartwatches and smartphones. To detect humans' daily life activities, accelerometers have been widely used in wearable devices. In existing research, usually a single data representation is used, i.e., either an image or a feature-vector representation. In this paper, a novel method is proposed to address two key aspects for the future development of robust deep learning methods for Human Activity Recognition (HAR): (1) multiple representations of a single sensor's data and (2) fusion of these multiple representations. The presented method utilizes deep Convolutional Neural Networks (CNNs) and was evaluated on a publicly available HAR dataset. The proposed method showed promising performance, with the best result reaching an overall accuracy of 0.97, which outperforms current conventional approaches.
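
The abstract only outlines the fusion idea, so the sketch below (not the authors' code) illustrates one plausible way to combine two representations of a single accelerometer window: a 2D image-like view processed by a small CNN branch and a hand-crafted statistical feature vector processed by a dense branch, with both branch outputs concatenated before classification. The window length, feature set, class count, and layer sizes (WINDOW, feature_vector, FusionCNN) are illustrative assumptions, not the architecture from the paper.

```python
# Minimal sketch of multi-representation fusion for HAR with a CNN (PyTorch).
# All sizes are assumptions for illustration only.
import torch
import torch.nn as nn

WINDOW = 128        # samples per window (assumed)
CHANNELS = 3        # tri-axial accelerometer: x, y, z
NUM_CLASSES = 6     # e.g. walking, sitting, standing, ... (assumed)


def feature_vector(window: torch.Tensor) -> torch.Tensor:
    """Simple per-axis statistics: mean, std, min, max -> 12 values."""
    # window: (CHANNELS, WINDOW)
    return torch.cat([window.mean(dim=1), window.std(dim=1),
                      window.min(dim=1).values, window.max(dim=1).values])


class FusionCNN(nn.Module):
    def __init__(self):
        super().__init__()
        # Branch 1: treat the (3 x 128) window as a single-channel image.
        self.image_branch = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=(3, 5), padding=(1, 2)), nn.ReLU(),
            nn.MaxPool2d((1, 2)),
            nn.Conv2d(16, 32, kernel_size=(3, 5), padding=(1, 2)), nn.ReLU(),
            nn.AdaptiveAvgPool2d((1, 8)),
            nn.Flatten(),                        # -> 32 * 1 * 8 = 256
        )
        # Branch 2: hand-crafted feature vector (12 statistics).
        self.feature_branch = nn.Sequential(nn.Linear(12, 32), nn.ReLU())
        # Fusion: concatenate both branch outputs, then classify.
        self.classifier = nn.Sequential(
            nn.Linear(256 + 32, 64), nn.ReLU(),
            nn.Linear(64, NUM_CLASSES),
        )

    def forward(self, image_rep, feature_rep):
        fused = torch.cat([self.image_branch(image_rep),
                           self.feature_branch(feature_rep)], dim=1)
        return self.classifier(fused)


if __name__ == "__main__":
    window = torch.randn(CHANNELS, WINDOW)              # one raw sensor window
    image_rep = window.unsqueeze(0).unsqueeze(0)        # (1, 1, 3, 128)
    feature_rep = feature_vector(window).unsqueeze(0)   # (1, 12)
    logits = FusionCNN()(image_rep, feature_rep)
    print(logits.shape)                                 # torch.Size([1, 6])
```

Fusing at the feature level (concatenation before the classifier) is only one option; the same two branches could also be trained separately and fused at the decision level, which is another common variant in HAR work.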
