2018 13th IEEE International Conference on Automatic Face & Gesture Recognition

Toward Marker-Free 3D Pose Estimation in Lifting: A Deep Multi-View Solution

Abstract

Lifting is a common manual material handling task performed in the workplace. It is considered one of the main risk factors for Work-related Musculoskeletal Disorders. To improve workplace safety, it is necessary to assess the musculoskeletal and biomechanical risk exposures associated with these tasks, which requires highly accurate 3D pose. Existing approaches mainly rely on marker-based sensors to collect 3D information. However, these methods are usually expensive to set up, time-consuming to run, and sensitive to the surrounding environment. In this study, we propose a multi-view deep perceptron approach to address the aforementioned limitations. Our approach consists of two modules: a "view-specific perceptron" network extracts rich information independently from the image of each view, including both 2D shape and hierarchical texture information, while a "multi-view integration" network synthesizes the information from all available views to predict an accurate 3D pose. To fully evaluate our approach, we carried out comprehensive experiments comparing different variants of our design. The results show that our approach achieves performance comparable with earlier marker-based methods, i.e., an average error of 14.72 ± 2.96 mm on the lifting dataset. The results are also compared with state-of-the-art methods on the HumanEva-I dataset [1], demonstrating the superior performance of our approach.
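As a rough illustration of the two-module design described above, here is a minimal PyTorch sketch. The class names, layer sizes, feature dimension, joint count, and number of camera views are illustrative assumptions; the abstract does not specify the actual architecture.

```python
import torch
import torch.nn as nn

class ViewSpecificPerceptron(nn.Module):
    """Extracts a feature vector from a single camera view.
    Hypothetical backbone; stands in for the paper's network that
    captures 2D shape and hierarchical texture information."""
    def __init__(self, feat_dim=256):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # -> (B, 128, 1, 1)
        )
        self.fc = nn.Linear(128, feat_dim)

    def forward(self, img):  # img: (B, 3, H, W)
        return self.fc(self.backbone(img).flatten(1))  # (B, feat_dim)

class MultiViewIntegration(nn.Module):
    """Fuses per-view features and regresses 3D joint positions."""
    def __init__(self, n_views=4, feat_dim=256, n_joints=17):
        super().__init__()
        self.n_joints = n_joints
        self.head = nn.Sequential(
            nn.Linear(n_views * feat_dim, 512), nn.ReLU(),
            nn.Linear(512, n_joints * 3),
        )

    def forward(self, feats):  # feats: list of (B, feat_dim) tensors
        pose = self.head(torch.cat(feats, dim=1))
        return pose.view(-1, self.n_joints, 3)  # (B, n_joints, 3)

# One view-specific network per camera, fused into a single 3D pose.
n_views = 4
extractors = [ViewSpecificPerceptron() for _ in range(n_views)]
fusion = MultiViewIntegration(n_views=n_views)
images = [torch.randn(2, 3, 224, 224) for _ in range(n_views)]  # dummy batch
pose3d = fusion([net(img) for net, img in zip(extractors, images)])
print(pose3d.shape)  # torch.Size([2, 17, 3])
```

Feature concatenation is the simplest fusion choice for the sketch; the paper's integration network may combine the views differently.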