Human action recognition from a single clip per action


Abstract

Learning-based approaches for human action recognition often rely on large training sets. Most of these approaches do not perform well when only a few training samples are available. In this paper, we consider the problem of human action recognition from a single clip per action. Each clip contains at most 25 frames. Using a patch based motion descriptor and matching scheme, we can achieve promising results on three different action datasets with a single clip as the template. Our results are comparable to previously published results using much larger training sets. We also present a method for learning a transferable distance function for these patches. The transferable distance function learning extracts generic knowledge of patch weighting from previous training sets, and can be applied to videos of new actions without further learning. Our experimental results show that the transferable distance function learning not only improves the recognition accuracy of the single clip action recognition, but also significantly enhances the efficiency of the matching scheme.
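The matching scheme described above can be sketched as a weighted nearest-patch comparison: each action is represented by a single template clip of patch motion descriptors, each template patch carries a weight (learned via the transferable distance function), and a query clip is assigned the action whose template it matches most closely. The function names, the descriptor representation, and the toy data below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def clip_distance(query_patches, template_patches, patch_weights):
    """Weighted distance from a query clip to a single template clip.

    query_patches / template_patches: (num_patches, descriptor_dim) arrays
    of patch motion descriptors. patch_weights: one learned weight per
    template patch (a stand-in for the transferable distance function).
    """
    total = 0.0
    for w, t in zip(patch_weights, template_patches):
        # Match each template patch to its nearest query patch,
        # discounting uninformative patches via their learned weight.
        dists = np.linalg.norm(query_patches - t, axis=1)
        total += w * dists.min()
    return total

def classify(query_patches, templates):
    """templates: {action_name: (patches, weights)}, one clip per action."""
    return min(
        templates,
        key=lambda action: clip_distance(query_patches, *templates[action]),
    )
```

Because each action needs only one template clip, classification reduces to a handful of weighted patch comparisons; down-weighting patches toward zero also lets them be skipped entirely, which is consistent with the efficiency gain the abstract reports.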
