IEEE International Conference on Multimedia and Expo

Time-ordered spatial-temporal interest points for human action classification



Abstract

Human action classification, which is vital for content-based video retrieval and human-machine interaction, still struggles to distinguish similar actions. Previous works typically detect spatial-temporal interest points (STIPs) from action sequences and then adopt the bag-of-visual-words (BoVW) model to describe actions as numerical statistics of STIPs. Despite its robustness, the BoVW model ignores the spatial-temporal layout of STIPs, leading to misclassification among different types of actions that share similar numerical statistics of STIPs. Motivated by this, a time-ordered feature is designed to describe the temporal distribution of STIPs, providing structural information complementary to the traditional BoVW model. Moreover, a temporal refinement method is used to eliminate intra-class variations among time-ordered features caused by performers' habits. A time-ordered BoVW model is then built to represent actions, encoding both the numerical statistics and the temporal distribution of STIPs. Extensive experiments on three challenging datasets, i.e., KTH, Rochester and UT-Interaction, validate the effectiveness of our method in distinguishing similar actions.
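To make the idea of encoding both numerical statistics and temporal distribution of STIPs concrete, the following is a minimal sketch, not the authors' implementation: it builds a standard BoVW histogram from a visual-word codebook and concatenates per-segment histograms over normalized time. The function name, the number of temporal segments, and the simple min-max time normalization (standing in loosely for the paper's temporal refinement step) are all assumptions for illustration.

```python
import numpy as np

def time_ordered_bovw(stip_descriptors, stip_timestamps, codebook, n_segments=4):
    """Illustrative time-ordered BoVW representation (hypothetical sketch).

    stip_descriptors: (N, D) array of STIP descriptors (e.g., HOG/HOF).
    stip_timestamps:  (N,) array of frame indices of the STIPs.
    codebook:         (K, D) array of visual-word centers (e.g., from k-means).
    n_segments:       assumed number of temporal segments.
    """
    # Assign each STIP to its nearest visual word.
    dists = np.linalg.norm(stip_descriptors[:, None, :] - codebook[None, :, :], axis=2)
    words = dists.argmin(axis=1)
    k = codebook.shape[0]

    # Global histogram: the usual BoVW numerical statistics.
    global_hist = np.bincount(words, minlength=k).astype(float)
    global_hist /= max(global_hist.sum(), 1.0)

    # Normalize timestamps to [0, 1) so clips of different length are
    # roughly comparable; a crude stand-in for temporal refinement.
    t = stip_timestamps.astype(float)
    t = (t - t.min()) / max(t.max() - t.min(), 1e-6)
    seg_ids = np.minimum((t * n_segments).astype(int), n_segments - 1)

    # Per-segment histograms: the temporal distribution of STIPs.
    seg_hists = []
    for s in range(n_segments):
        h = np.bincount(words[seg_ids == s], minlength=k).astype(float)
        seg_hists.append(h / max(h.sum(), 1.0))

    # Concatenation encodes both statistics and temporal ordering.
    return np.concatenate([global_hist] + seg_hists)
```

A classifier (e.g., an SVM) could then be trained on these concatenated vectors, so that two actions with identical word counts but different temporal orderings of STIPs map to different feature vectors.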
