《计算机技术与发展》 (Computer Technology and Development)

Human Action Recognition Method Combining Codebook Optimization and Feature Fusion


Abstract

In order to improve the accuracy of human action recognition in video sequences, we present a recognition method that combines two-level K-means clustering with video-level feature fusion of descriptors. First, the space-time interest points extracted from the training videos are described with histograms of oriented gradients (HOG) and histograms of optical flow (HOF), and the descriptors belonging to different videos and to different action classes are clustered by two-level K-means to form their own representative visual vocabularies, which improves the expressive power of the codebook. The HOG and HOF descriptors of each video are then fed separately into the bag-of-words model built on the optimized codebook, yielding two global representations of the video, which are fused at the feature level. Because the HOG and HOF descriptors are highly correlated when forming the video-level representation, the fused features are more discriminative and more robust for classification. Finally, a support vector machine (SVM) is applied to the fused features for classification and recognition. Experiments show that the proposed method effectively improves the recognition accuracy.
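The following is a minimal sketch of the pipeline described in the abstract, using scikit-learn. It assumes that the first K-means level clusters descriptors within each video and the second level clusters the resulting centres into the final codebook; the paper's exact grouping, cluster counts (`k1`, `k2`), descriptor dimensions, and the synthetic data generator `fake_video` are all illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic stand-ins for HOG/HOF descriptors of space-time interest points:
# each video yields a (num_points, dim) array per descriptor type.
def fake_video(dim, n_points=60):
    return rng.normal(size=(n_points, dim))

n_classes, videos_per_class = 3, 4
hog_dim, hof_dim = 72, 90
dataset = [{"hog": fake_video(hog_dim), "hof": fake_video(hof_dim), "label": c}
           for c in range(n_classes) for _ in range(videos_per_class)]

def two_level_codebook(videos, key, k1=10, k2=20):
    """Level 1: cluster descriptors within each video.
    Level 2: cluster all level-1 centres into the final codebook."""
    level1_centres = []
    for v in videos:
        km = KMeans(n_clusters=min(k1, len(v[key])), n_init=10,
                    random_state=0).fit(v[key])
        level1_centres.append(km.cluster_centers_)
    stacked = np.vstack(level1_centres)
    return KMeans(n_clusters=min(k2, len(stacked)), n_init=10,
                  random_state=0).fit(stacked)

hog_codebook = two_level_codebook(dataset, "hog")
hof_codebook = two_level_codebook(dataset, "hof")

def bow_histogram(desc, codebook):
    """Quantise descriptors against the codebook and return a normalised histogram."""
    words = codebook.predict(desc)
    hist = np.bincount(words, minlength=codebook.n_clusters).astype(float)
    return hist / (hist.sum() + 1e-8)

# Video-level feature fusion: concatenate the HOG and HOF bag-of-words histograms.
X = np.array([np.concatenate([bow_histogram(v["hog"], hog_codebook),
                              bow_histogram(v["hof"], hof_codebook)])
              for v in dataset])
y = np.array([v["label"] for v in dataset])

clf = SVC(kernel="rbf").fit(X, y)  # SVM on the fused video-level features
print("training accuracy:", clf.score(X, y))
```

In practice the fake descriptors would be replaced by HOG/HOF descriptors computed at detected space-time interest points, and the SVM would be evaluated on a held-out test split rather than on the training set.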
