IEEE International Conference on Multimedia and Expo

Learning sharable models for robust background subtraction


Abstract

Background modeling and subtraction is a classical topic in computer vision. Gaussian mixture modeling (GMM) is a popular choice because it adapts to background variations, and many improvements have been proposed to enhance its robustness by exploiting spatial consistency and temporal correlation. In this paper, we propose a sharable-GMM-based background subtraction approach. First, a sharing mechanism is presented to model the many-to-one relationship between pixels and models: each pixel dynamically searches its neighborhood for the best-matched model. This space-sharing scheme is robust to camera jitter, dynamic backgrounds, and similar disturbances. Second, sharable models are built for both the background and the foreground. Noise caused by small local movements is effectively eliminated by the background sharable models, while the foreground sharable models preserve the integrity of moving objects, especially small ones. Finally, each sharable model is updated by randomly selecting a pixel that matches it, and a flexible mechanism allows switching between background and foreground models. Experiments on the ChangeDetection benchmark dataset demonstrate the effectiveness of our approach.
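To make the space-sharing idea concrete, below is a minimal Python sketch (an illustrative assumption, not the authors' implementation): a pixel is labeled background if any Gaussian stored at a location within a small neighborhood matches its intensity. The mixture size K, the 3x3 search window, the 2.5-sigma match rule, the weight threshold, and the function names init_models/classify are all hypothetical choices for illustration; the foreground sharable models, the random-pixel update, and the background/foreground switching described in the abstract are omitted.

import numpy as np

K = 3              # Gaussians kept per pixel location (assumed)
RADIUS = 1         # search a 3x3 neighborhood (assumed)
MATCH_SIGMA = 2.5  # match within 2.5 standard deviations (common GMM rule)

def init_models(first_frame):
    # Seed every location's mixture from the first grayscale frame.
    h, w = first_frame.shape
    means = np.repeat(first_frame[..., None].astype(np.float64), K, axis=2)
    variances = np.full((h, w, K), 15.0 ** 2)  # assumed initial variance
    weights = np.full((h, w, K), 1.0 / K)
    return means, variances, weights

def classify(frame, means, variances, weights):
    # A pixel is background if ANY sufficiently weighted Gaussian at ANY
    # neighboring location matches it; np.roll aligns each neighbor's
    # models with the pixel grid (wrap-around at borders, for brevity).
    h, w = frame.shape
    fg = np.ones((h, w), dtype=bool)
    for dy in range(-RADIUS, RADIUS + 1):
        for dx in range(-RADIUS, RADIUS + 1):
            m = np.roll(means, (dy, dx), axis=(0, 1))
            v = np.roll(variances, (dy, dx), axis=(0, 1))
            wt = np.roll(weights, (dy, dx), axis=(0, 1))
            diff2 = (frame[..., None] - m) ** 2
            match = (diff2 < MATCH_SIGMA ** 2 * v) & (wt > 0.1)
            fg &= ~match.any(axis=2)
    return fg

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    bg = rng.normal(120.0, 5.0, size=(60, 80))        # synthetic background
    means, variances, weights = init_models(bg)
    frame = bg + rng.normal(0.0, 5.0, size=bg.shape)  # background + noise
    frame[20:30, 30:40] += 80.0                       # a 10x10 "object"
    mask = classify(frame, means, variances, weights)
    print("foreground pixels:", int(mask.sum()))      # ~ the object interior

In the full method, a matched model would also be updated from a randomly chosen matching pixel, unmatched pixels would seed foreground sharable models, and models could switch between the background and foreground sets; the sketch covers only the neighborhood-matching classification step.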
