Online non-rigid motion and scene layer segmentation.

Abstract

In the past, different kinds of methods were devised to detect objects in videos. Based on the assumption of a stationary camera, the now ubiquitous background subtraction learns the appearance of the background and then subtracts it to segment the scene. In practice this assumption is highly restrictive, and other methods were devised to handle moving cameras. For instance, motion segmentation targets the segmentation of different rigid motions in the video, while scene layer segmentation attempts to decompose the scene into layers that are consistent in space and time. Yet such methods still suffer from other limitations, such as the requirement that point trajectories span the entire frame sequence. On a different front, recent years have witnessed a large increase in the proportion of videos coming from streaming sources such as TV broadcast, Internet video streaming, and streaming from mobile devices. Unfortunately, most methods that process videos are offline and have high computational complexity, rendering them ineffective for processing videos from streaming sources. This highlights the need for novel techniques that are both online and efficient. In this dissertation, we first generalize motion segmentation by showing that, under a general perspective camera, trajectories belonging to one moving object form a low-dimensional manifold. Based on this, we devise two methods for online non-rigid motion segmentation. The first method explicitly reconstructs the low-dimensional manifolds and then clusters them. The second method attempts to separate the manifolds directly. We then show how motion segmentation and scene layer segmentation can be combined in a single online framework that draws on the strengths of both approaches. Finally, we propose two methods that assign figure-ground labels to layers by combining several cues. Results show that our framework is effective in detecting moving objects in videos captured by a moving camera.
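The central idea, that point trajectories of one moving object lie near a low-dimensional manifold, can be made concrete with a generic clustering sketch. The following is a minimal illustration and not the dissertation's algorithm: it assumes trajectories observed over a short sliding window of F frames, a known number of motions k_motions, and uses off-the-shelf spectral clustering as a stand-in for the manifold reconstruction and separation methods described above. The function and variable names are hypothetical.

```python
# Minimal sketch: group point trajectories by clustering them in trajectory
# space, treating each motion as a low-dimensional manifold. Not the
# dissertation's method; a generic spectral-clustering stand-in.
import numpy as np
from sklearn.cluster import SpectralClustering
from sklearn.neighbors import kneighbors_graph


def segment_trajectories(tracks, k_motions, n_neighbors=8):
    """Group point trajectories into k_motions clusters.

    tracks: (N, 2*F) array; each row stacks the (x, y) positions of one
            tracked point over the F frames of the current window.
    Returns an (N,) array of integer motion labels.
    """
    # k-nearest-neighbor distances in trajectory space; trajectories on the
    # same low-dimensional manifold tend to have close neighbors there.
    knn = kneighbors_graph(tracks, n_neighbors=n_neighbors, mode="distance")

    # Turn distances into a Gaussian affinity and symmetrize the graph.
    sigma = knn.data.mean() + 1e-8
    affinity = knn.copy()
    affinity.data = np.exp(-(affinity.data ** 2) / (2.0 * sigma ** 2))
    affinity = 0.5 * (affinity + affinity.T)

    # Spectral clustering on the affinity graph separates the manifolds.
    labels = SpectralClustering(
        n_clusters=k_motions,
        affinity="precomputed",
        assign_labels="kmeans",
        random_state=0,
    ).fit_predict(affinity.toarray())
    return labels


if __name__ == "__main__":
    # Toy example: 40 trajectories over 10 frames, two synthetic motions.
    rng = np.random.default_rng(0)
    t = np.linspace(0, 1, 10)
    motion_a = np.concatenate([5 * t, 0 * t]) + rng.normal(0, 0.05, (20, 20))
    motion_b = np.concatenate([0 * t, 5 * t]) + rng.normal(0, 0.05, (20, 20))
    tracks = np.vstack([motion_a, motion_b])
    print(segment_trajectories(tracks, k_motions=2))
```

In the online setting described by the abstract, such a step would be re-run per sliding window as trajectories appear and disappear; the sketch only illustrates the trajectories-as-manifolds view, not the dissertation's explicit reconstruction or direct separation methods.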

Bibliographic record

  • Author: Elqursh, Ali E.
  • Affiliation: Rutgers The State University of New Jersey - New Brunswick
  • Degree-granting institution: Rutgers The State University of New Jersey - New Brunswick
  • Subject: Computer Science
  • Degree: Ph.D.
  • Year: 2014
  • Pages: 90 p.
  • Total pages: 90
  • Format: PDF
  • Language: eng

