
Target-Specific Action Classification for Automated Assessment of Human Motor Behavior from Video

Abstract

Objective monitoring and assessment of human motor behavior can improve the diagnosis and management of several medical conditions. Over the past decade, significant advances have been made in the use of wearable technology for continuously monitoring human motor behavior in free-living conditions. However, wearable technology remains ill-suited for applications that require monitoring and interpretation of complex motor behaviors (e.g., those involving interactions with the environment). Recent advances in computer vision and deep learning have opened up new possibilities for extracting information from video recordings. In this paper, we present a hierarchical vision-based behavior phenotyping method for classifying basic human actions in video recordings made with a single RGB camera. Our method addresses the challenges of tracking multiple human actors and classifying actions in videos recorded in changing environments with different fields of view. We implement a cascaded pose tracker that uses temporal relationships between detections for short-term tracking and appearance-based tracklet fusion for long-term tracking. Furthermore, for action classification, we use pose evolution maps derived from the cascaded pose tracker as low-dimensional, interpretable representations of the movement sequences for training a convolutional neural network. The cascaded pose tracker achieves an average accuracy of 88% in tracking the target human actor in our video recordings, and the overall system achieves an average test accuracy of 84% for target-specific action classification in untrimmed video recordings.
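
The abstract outlines a three-stage pipeline: short-term linking of per-frame pose detections into tracklets, appearance-based fusion of tracklets into long-term tracks, and rasterization of each track into a pose evolution map for CNN-based classification. The sketch below illustrates that pipeline in a simplified form; the data structures and thresholds (the `box`/`appearance`/`keypoints` fields, the IoU and cosine-similarity cutoffs, the map resolution) are assumptions for illustration, not the paper's actual implementation, and the CNN classifier itself is omitted.

```python
# Simplified sketch of the tracking-and-representation pipeline described
# in the abstract. All helper names and thresholds are hypothetical.
import numpy as np

def iou(box_a, box_b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def short_term_track(frames, iou_thresh=0.3):
    """Greedily link per-frame pose detections into short tracklets
    using spatial overlap between consecutive frames."""
    tracklets = []   # each tracklet is a list of detections
    active = {}      # tracklet index -> box seen in the previous frame
    for dets in frames:
        new_active, used = {}, set()
        for tid, last_box in active.items():
            best, best_iou = None, iou_thresh
            for i, det in enumerate(dets):
                if i in used:
                    continue
                ov = iou(det['box'], last_box)
                if ov > best_iou:
                    best, best_iou = i, ov
            if best is not None:
                used.add(best)
                tracklets[tid].append(dets[best])
                new_active[tid] = dets[best]['box']
        for i, det in enumerate(dets):          # unmatched detections
            if i not in used:                   # start new tracklets
                tracklets.append([det])
                new_active[len(tracklets) - 1] = det['box']
        active = new_active
    return tracklets

def fuse_tracklets(tracklets, sim_thresh=0.8):
    """Merge tracklets whose mean appearance features are close in
    cosine similarity, recovering identity across gaps and occlusions."""
    groups, group_feats = [], []
    for t in tracklets:
        f = np.mean([d['appearance'] for d in t], axis=0)
        f = f / (np.linalg.norm(f) + 1e-9)
        for g, gf in zip(groups, group_feats):
            if float(f @ gf) > sim_thresh:
                g.extend(t)
                break
        else:
            groups.append(list(t))
            group_feats.append(f)
    return groups

def pose_evolution_map(track, n_joints=17, size=64):
    """Rasterize a track's keypoint trajectories into a per-joint image
    whose pixel intensity encodes time, giving the CNN a compact and
    interpretable view of how the pose evolved over the clip."""
    canvas = np.zeros((n_joints, size, size), dtype=np.float32)
    for t, det in enumerate(track):
        for j, (x, y) in enumerate(det['keypoints']):  # coords in [0, 1]
            u, v = int(x * (size - 1)), int(y * (size - 1))
            canvas[j, v, u] = (t + 1) / len(track)     # later = brighter
    return canvas

# Toy usage with random detections standing in for a real pose detector.
rng = np.random.default_rng(0)
frames = [[{'box': np.array([0.1, 0.1, 0.4, 0.9]) + rng.normal(0, 0.01, 4),
            'appearance': rng.normal(0, 1, 128),
            'keypoints': rng.random((17, 2))}] for _ in range(30)]
tracks = fuse_tracklets(short_term_track(frames))
pem = pose_evolution_map(tracks[0])   # input tensor for a CNN classifier
```

The two-stage design reflects the abstract's reasoning: frame-to-frame spatial overlap is reliable over short horizons but breaks under occlusion or field-of-view changes, so a slower appearance-based fusion step re-links the resulting tracklets into long-term, target-specific tracks before classification.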