Expert Systems with Applications

Robust people detection using depth information from an overhead Time-of-Flight camera


Abstract

In this paper we describe a system for the automatic detection of multiple people in a scene, using only depth information provided by a Time of Flight (ToF) camera placed in an overhead position. The main contribution of this work lies in the proposed methodology for determining the Regions of Interest (ROIs) and for feature extraction, which results in robust discrimination between people (with or without accessories) and objects (either static or dynamic), even when people and objects are close together. Since only depth information is used, the developed system guarantees users' privacy. The designed algorithm includes two stages: an offline stage and an online one. In the offline stage, a new depth image dataset has been recorded and labeled, and the labeled images have been used to train a classifier. The online stage is based on robustly detecting local maxima in the depth image (candidates to correspond to the heads of the people present in the scene), around each of which an ROI is carefully defined. For each ROI, a feature vector is extracted, providing information on the top view of people and objects, including information related to the expected overhead morphology of the head and shoulders. The online stage also includes a pre-filtering process to reduce noise in the depth images. Finally, there is a classification process based on Principal Component Analysis (PCA). The online stage works in real time at an average of 150 fps. In order to evaluate the proposal, a wide experimental validation has been carried out, including different numbers of people simultaneously present in the scene, as well as people with different heights, builds, and accessories. The obtained results are very satisfactory, with a 3.1% average error rate. (C) 2016 Elsevier Ltd. All rights reserved.
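The online stage described above (pre-filtering, local-maximum head candidates, one ROI per candidate) can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the parameter names and values (`ceiling_h`, `min_h`, `nbhd`) are assumptions, and plateau grouping via connected components is a simplification of the paper's candidate selection.

```python
import numpy as np
from scipy.ndimage import median_filter, maximum_filter, label, center_of_mass

def detect_head_candidates(depth, ceiling_h=3.0, min_h=1.2, nbhd=15):
    """Candidate-head detection from an overhead depth image (sketch).

    Parameters ceiling_h (camera-to-floor distance, m), min_h (minimum
    plausible head height, m) and nbhd (neighbourhood size, px) are
    illustrative assumptions, not the authors' settings.
    """
    # Pre-filtering: a median filter suppresses ToF shot noise.
    depth = median_filter(depth, size=3)
    # With an overhead camera, height above floor = ceiling - depth,
    # so heads appear as local maxima of the height map.
    height = ceiling_h - depth
    # Keep pixels that equal their neighbourhood maximum and are tall
    # enough to plausibly be a person's head.
    mask = (height == maximum_filter(height, size=nbhd)) & (height > min_h)
    # Group plateau maxima into one candidate per connected region.
    labels, n = label(mask)
    return center_of_mass(height, labels, range(1, n + 1))
```

Each returned (row, col) centre would then seed an ROI from which the head-and-shoulders feature vector is extracted and passed to the PCA-based classifier.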
