International Conference on Intelligent Autonomous Systems

Real-Time Marker-Less Multi-person 3D Pose Estimation in RGB-Depth Camera Networks

Abstract

This paper proposes a novel system to estimate and track the 3D poses of multiple persons in calibrated RGB-Depth camera networks. The multi-view 3D pose of each person is computed by a central node, which receives the single-view outcomes from each camera in the network. Each single-view outcome is computed by using a CNN for 2D pose estimation and extending the resulting skeletons to 3D by means of the sensor depth. The proposed system is marker-less, multi-person, and independent of the background, and it makes no assumptions about people's appearance or initial pose. The system provides real-time outcomes, making it well suited for applications requiring user interaction. Experimental results show the effectiveness of this work with respect to a baseline multi-view approach in different scenarios. To foster research and applications based on this work, we released the source code in OpenPTrack, an open source project for RGB-D people tracking.
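The step of "extending the resulting skeletons to 3D by means of the sensor depth" amounts to back-projecting each 2D joint through the pinhole camera model using the depth reading at that pixel. The sketch below illustrates the idea; the function names, the zero-depth validity check, and the example intrinsics are illustrative assumptions, not details taken from the paper or from OpenPTrack.

```python
def backproject_joint(u, v, z, fx, fy, cx, cy):
    """Back-project a 2D joint (u, v) with depth z (metres) into the
    camera frame using pinhole intrinsics (fx, fy, cx, cy)."""
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return (x, y, z)

def lift_skeleton(joints_2d, depth_lookup, fx, fy, cx, cy):
    """Lift every detected 2D joint to 3D; joints whose depth reading is
    invalid (reported as 0 by the sensor) are skipped."""
    skeleton_3d = {}
    for name, (u, v) in joints_2d.items():
        z = depth_lookup(u, v)
        if z > 0:
            skeleton_3d[name] = backproject_joint(u, v, z, fx, fy, cx, cy)
    return skeleton_3d

# Example with made-up VGA-style intrinsics and a constant 2 m depth map.
skeleton = lift_skeleton(
    {"head": (320, 200), "neck": (320, 240)},
    depth_lookup=lambda u, v: 2.0,
    fx=525.0, fy=525.0, cx=319.5, cy=239.5,
)
```

Each camera in the network would run this lifting locally and send the resulting 3D skeleton to the central node, which fuses the single-view skeletons into one multi-view estimate per person.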
