《计算机技术与发展》 (Computer Technology and Development)

Human Action Recognition Based on Hybrid Spatio-Temporal Feature Descriptors


Abstract

To address key problems in action recognition based on local spatio-temporal features, namely obtaining effective interest points, describing them appropriately, and representing motion, this paper proposes a new action recognition framework based on hybrid spatio-temporal features and a SOM network. First, multi-scale Dollar spatio-temporal interest points are extracted from the input video, and video blocks describing the local motion regions are extracted around these interest points. Next, a multi-directional projection histogram of optical flow (DPHOF) is proposed and combined with the histogram of 3D gradient orientations (HOG3D) to describe the video blocks, and a SOM is used to construct a global video descriptor. Finally, a K-nearest-neighbor (KNN) classifier performs recognition. The method is validated on the KTH and UCF-YT datasets, where it achieves good recognition results. The experiments show that the proposed DPHOF descriptor represents spatio-temporal interest points efficiently and is more discriminative than HOG3D and HOF, that the global video descriptor built by the SOM represents video features efficiently, and that the overall method yields better recognition results.
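
The pipeline outlined above (local spatio-temporal interest points → flow-based block descriptors → SOM-pooled global descriptor → KNN classification) can be sketched roughly as follows. This is a minimal illustrative sketch, not the authors' implementation: it stands in a plain per-frame optical-flow orientation histogram for the Dollar detector and the DPHOF/HOG3D descriptors, and the function names, bin counts, SOM size, and k value (flow_histogram, TinySOM, bins=8, 16 units, k=5) are all assumptions made for the example.

```python
# Hypothetical sketch of the described pipeline; parameter choices are illustrative.
import numpy as np
import cv2
from sklearn.neighbors import KNeighborsClassifier

def flow_histogram(prev_gray, next_gray, bins=8):
    """HOF-like histogram of optical-flow orientations for one frame pair."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])  # ang in [0, 2*pi)
    hist, _ = np.histogram(ang, bins=bins, range=(0, 2 * np.pi), weights=mag)
    return hist / (hist.sum() + 1e-8)

def video_block_descriptors(frames, bins=8):
    """Stack per-frame flow histograms for a list of grayscale frames."""
    return np.array([flow_histogram(frames[i], frames[i + 1], bins)
                     for i in range(len(frames) - 1)])

class TinySOM:
    """Small self-organizing map used to pool local descriptors into a
    fixed-length global video descriptor (histogram of winning units)."""
    def __init__(self, n_units=16, dim=8, lr=0.5, iters=200, seed=0):
        self.rng = np.random.default_rng(seed)
        self.w = self.rng.random((n_units, dim))
        self.lr, self.iters = lr, iters

    def fit(self, X):
        for t in range(self.iters):
            x = X[self.rng.integers(len(X))]
            bmu = np.argmin(((self.w - x) ** 2).sum(axis=1))
            self.w[bmu] += self.lr * (1 - t / self.iters) * (x - self.w[bmu])
        return self

    def global_descriptor(self, X):
        bmus = np.argmin(((X[:, None, :] - self.w[None]) ** 2).sum(-1), axis=1)
        hist = np.bincount(bmus, minlength=len(self.w)).astype(float)
        return hist / (hist.sum() + 1e-8)

# Usage (train is a list of (grayscale_frames, label) pairs):
#   som = TinySOM().fit(np.vstack([video_block_descriptors(f) for f, _ in train]))
#   X = [som.global_descriptor(video_block_descriptors(f)) for f, _ in train]
#   clf = KNeighborsClassifier(n_neighbors=5).fit(X, [y for _, y in train])
```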
