Fast and accurate global motion compensation

Abstract

Video understanding has attracted significant research attention in recent years, motivated by interest in video surveillance, rich media retrieval, and vision-based gesture interfaces. Typical methods focus on analyzing both the appearance and motion of objects in video. However, the apparent motion induced by a moving camera can dominate the observed motion, requiring sophisticated methods to compensate for camera motion without a priori knowledge of scene characteristics. This paper introduces two new methods for global motion compensation that are both significantly faster and more accurate than state-of-the-art approaches. The first employs RANSAC to robustly estimate global scene motion even when the scene contains significant object motion. Unlike typical RANSAC-based motion estimation work, we apply RANSAC not to the motion of tracked features but rather to a number of segments of image projections. The key insight of the second method is to reliably classify salient points as foreground or background based on the entropy of a motion inconsistency measure. Extensive experiments on established datasets demonstrate that the second approach removes camera-induced observed motion almost completely while still preserving foreground motion.
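For orientation, the sketch below shows the conventional feature-based global motion compensation pipeline that the abstract contrasts its first method against: track sparse features between consecutive frames, fit a global homography with RANSAC so correspondences on moving objects are rejected as outliers, and warp the previous frame to cancel camera motion. This is only an illustrative baseline under assumed OpenCV usage, not the paper's projection-segment variant or its entropy-based classifier; the function name and parameter values are placeholders.

```python
# Illustrative baseline only: feature-based RANSAC global motion compensation.
# The paper applies RANSAC to segments of image projections rather than to
# tracked features; this sketch shows the conventional pipeline it improves on.
import cv2
import numpy as np

def compensate_camera_motion(prev_gray, curr_gray):
    """Estimate a global homography between two grayscale frames with RANSAC
    and warp the previous frame so that camera-induced motion is cancelled."""
    # Detect salient points in the previous frame.
    prev_pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500,
                                       qualityLevel=0.01, minDistance=7)
    if prev_pts is None:
        return prev_gray, None

    # Track the points into the current frame with pyramidal Lucas-Kanade flow.
    curr_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray,
                                                   prev_pts, None)
    good_prev = prev_pts[status.flatten() == 1]
    good_curr = curr_pts[status.flatten() == 1]
    if len(good_prev) < 4:  # a homography needs at least 4 correspondences
        return prev_gray, None

    # RANSAC rejects correspondences on independently moving objects, so the
    # fitted homography reflects the dominant (camera-induced) motion.
    H, inliers = cv2.findHomography(good_prev, good_curr, cv2.RANSAC,
                                    ransacReprojThreshold=3.0)
    if H is None:
        return prev_gray, None

    # Warp the previous frame into the current frame's coordinates.
    h, w = curr_gray.shape
    stabilized_prev = cv2.warpPerspective(prev_gray, H, (w, h))
    return stabilized_prev, H
```

Differencing `stabilized_prev` against the current frame then leaves mostly foreground motion, which is the effect the abstract describes: camera-based observed motion is removed while object motion is preserved.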
