European Conference on Computer Vision (ECCV)

Direct-from-Video: Unsupervised NRSfM



Abstract

In this work we describe a novel approach to online dense non-rigid structure from motion. The problem is reformulated, incorporating ideas from visual object tracking, to provide a more general and unified technique, with feedback between the reconstruction and point-tracking algorithms. The resulting algorithm overcomes the limitations of many conventional techniques, such as the need for a reference image/template or precomputed trajectories. The technique can also be applied in traditionally challenging scenarios, such as modelling objects with strong self-occlusions or from an extreme range of viewpoints. The proposed algorithm needs no offline pre-learning and does not assume the modelled object stays rigid at the beginning of the video sequence. Our experiments show that in traditional scenarios, the proposed method can achieve better accuracy than the current state of the art while using less supervision. Additionally, we perform reconstructions in challenging new scenarios where state-of-the-art approaches break down and where our method improves performance by up to an order of magnitude.
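The abstract does not describe the authors' algorithm in detail, but the classical premise underlying most NRSfM methods is that each frame's deforming shape is a linear combination of a small number K of shape bases, so the matrix of stacked 2D point tracks has rank at most 3K under orthographic projection. A minimal synthetic sketch of this low-rank property (an illustration of the standard factorization setting, not the paper's method; all variable names and parameters here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
F, P, K = 20, 40, 2  # frames, tracked points, shape bases

# Non-rigid shapes: each frame is a linear combination of K 3D bases.
bases = rng.standard_normal((K, 3, P))
coeffs = rng.standard_normal((F, K))
shapes = np.einsum('fk,kip->fip', coeffs, bases)  # (F, 3, P)

def random_rotation(rng):
    # Random proper rotation via QR decomposition.
    q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
    return q * np.sign(np.linalg.det(q))

# Orthographic projection: keep the first two rows of each rotation,
# stack the resulting 2D tracks into the measurement matrix W (2F x P).
W = np.vstack([random_rotation(rng)[:2] @ shapes[f] for f in range(F)])

# Center each row (removes any translation), then inspect the rank.
W0 = W - W.mean(axis=1, keepdims=True)
s = np.linalg.svd(W0, compute_uv=False)
rank = int(np.sum(s > 1e-8 * s[0]))
print(rank)  # at most 3K = 6
```

Batch methods exploit this by factorizing W into camera and shape-basis factors after all tracks are collected; the online, feedback-driven formulation described in the abstract avoids exactly this need for precomputed full-length trajectories.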
