International Conference on Augmented Reality, Virtual Reality and Computer Graphics

Generation of Action Recognition Training Data Through Rotoscoping and Augmentation of Synthetic Animations



Abstract

In this paper, we present a method to synthetically generate the training material needed by machine learning algorithms to perform human action recognition from 2D videos. As a baseline pipeline, we consider a 2D video stream passing through a skeleton extractor (OpenPose), whose 2D joint coordinates are analyzed by a random forest. This pipeline is trained and tested using real live videos. As an alternative approach, we propose to train the random forest on automatically generated 3D synthetic videos. For each action, given a single reference live video, we edit a 3D animation (in Blender) using the rotoscoping technique. This reference animation is then used to produce a full training set of synthetic videos by perturbing the original animation curves. Our tests, performed on live videos, show that the alternative pipeline achieves comparable accuracy, with the advantage of drastically reducing both the human effort and the computing power needed to produce the live training material.
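The augmentation step starts from a single rotoscoped animation in Blender and perturbs its animation curves to generate many synthetic training clips. Below is a minimal sketch of such a perturbation using Blender's Python API (bpy); the action and armature names, noise amplitude, and output paths are hypothetical, and the paper's exact perturbation scheme is not reproduced here.

```python
# Minimal sketch: copy a rotoscoped Blender action, jitter its F-curve keyframes,
# and render each perturbed variant as a synthetic training video.
# Run inside Blender:  blender scene.blend --background --python augment.py
import random
import bpy

SOURCE_ACTION = "wave_rotoscoped"   # assumed name of the hand-edited action
ARMATURE_NAME = "Armature"          # assumed name of the animated armature
N_VARIANTS = 50
NOISE = 0.05                        # assumed perturbation amplitude

source = bpy.data.actions[SOURCE_ACTION]
armature = bpy.data.objects[ARMATURE_NAME]

for i in range(N_VARIANTS):
    variant = source.copy()
    variant.name = f"{SOURCE_ACTION}_var{i:03d}"
    # Jitter every keyframe value on every animation curve (F-curve).
    for fcurve in variant.fcurves:
        for kp in fcurve.keyframe_points:
            kp.co.y += random.uniform(-NOISE, NOISE)
        fcurve.update()
    # Render the perturbed animation to its own synthetic clip.
    armature.animation_data.action = variant
    bpy.context.scene.render.filepath = f"//synthetic/{variant.name}/"
    bpy.ops.render.render(animation=True)
```

Rendering each variant from the same scene keeps the synthetic clips visually consistent while varying the motion, which is the property the perturbation is meant to exploit.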
