IEEE Transactions on Information Forensics and Security
Recognizing Gaits on Spatio-Temporal Feature Domain



Abstract

Gait is known to be an effective biometric feature for identifying a person at a distance, e.g., in video surveillance applications. Many methods have been proposed for gait recognition from a variety of perspectives. These methods rely on appearance-based analyses (e.g., shape contour, silhouette), which require a foreground-background (FG/BG) segmentation preprocessing step. This step not only adds computational cost but also degrades the performance of gait analysis, because existing FG/BG methods are imperfect. Moreover, appearance-based gait recognition is sensitive to several variations and partial occlusions, e.g., those caused by carrying a bag or changing clothing type. To avoid these limitations, this paper proposes a new framework that constructs a gait feature directly from a raw video. The proposed gait feature extraction is performed in the spatio-temporal domain. Space-time interest points (STIPs) are detected by finding large variations along both the spatial and temporal directions in local spatio-temporal volumes of a raw gait video sequence; STIPs are thus located where there is significant movement of the human body in both space and time. A histogram of oriented gradients (HOG) and a histogram of optical flow (HOF) are computed on a 3D video patch in the neighborhood of each detected STIP to form a STIP descriptor. The bag-of-words model is then applied to each set of STIP descriptors to construct a gait feature for representing and recognizing an individual gait. Compared with other existing methods in the literature, the proposed method performs promisingly in the case of normal walking, and outstandingly in the cases of partial occlusion caused by walking while carrying a bag and walking with a different clothing type.
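The bag-of-words step described above can be sketched in a few lines: cluster the STIP descriptors of the training videos into a visual vocabulary, then represent each gait sequence as a normalized histogram of word occurrences. The sketch below is a minimal illustration, not the authors' implementation; the plain k-means vocabulary builder and the 162-dimensional descriptor size used in the test (72 HOG + 90 HOF bins, as in Laptev's STIP descriptor) are assumptions for demonstration.

```python
import numpy as np

def build_vocabulary(descriptors, k, iters=20, seed=0):
    """Plain k-means over pooled STIP descriptors to build a
    visual vocabulary (codebook) of k words."""
    rng = np.random.default_rng(seed)
    centers = descriptors[rng.choice(len(descriptors), k, replace=False)]
    for _ in range(iters):
        # assign each descriptor to its nearest center (Euclidean distance)
        dists = np.linalg.norm(descriptors[:, None] - centers[None], axis=2)
        labels = dists.argmin(axis=1)
        # move each center to the mean of its assigned descriptors
        for j in range(k):
            members = descriptors[labels == j]
            if len(members):
                centers[j] = members.mean(axis=0)
    return centers

def bow_gait_feature(stip_descriptors, codebook):
    """Quantize one sequence's STIP descriptors against the codebook
    and return a normalized word histogram -- the gait feature."""
    dists = np.linalg.norm(stip_descriptors[:, None] - codebook[None], axis=2)
    words = dists.argmin(axis=1)
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / hist.sum()
```

Two sequences of the same walker should then yield similar histograms, so recognition reduces to comparing these fixed-length vectors with any standard classifier or distance measure.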
