Home > Foreign Conference Papers > IEEE Winter Conference on Applications of Computer Vision > BSUV-Net: A Fully-Convolutional Neural Network for Background Subtraction of Unseen Videos

BSUV-Net: A Fully-Convolutional Neural Network for Background Subtraction of Unseen Videos



Abstract

Background subtraction is a basic task in computer vision and video processing, often applied as a pre-processing step for object tracking, people recognition, etc. Recently, a number of successful background-subtraction algorithms have been proposed; however, nearly all of the top-performing ones are supervised. Crucially, their success relies upon the availability of some annotated frames of the test video during training. Consequently, their performance on completely "unseen" videos is undocumented in the literature. In this work, we propose a new supervised background-subtraction algorithm for unseen videos (BSUV-Net) based on a fully-convolutional neural network. The input to our network consists of the current frame and two background frames captured at different time scales, along with their semantic segmentation maps. In order to reduce the chance of overfitting, we also introduce a new data-augmentation technique which mitigates the impact of illumination differences between the background frames and the current frame. On the CDNet-2014 dataset, BSUV-Net outperforms state-of-the-art algorithms evaluated on unseen videos in terms of several metrics, including F-measure, recall and precision.
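As a rough illustration of the input construction and augmentation described above (a sketch, not the authors' implementation): the three frames (current frame plus two background frames at different time scales), each paired with a semantic segmentation map, can be stacked along the channel axis, and illumination robustness can be encouraged by jittering each frame's brightness independently. The single-channel segmentation maps and the jitter magnitude here are assumptions for illustration.

```python
import numpy as np

def make_bsuv_input(empty_bg, recent_bg, current,
                    seg_empty, seg_recent, seg_current):
    """Stack the network input: two background frames captured at
    different time scales plus the current frame, each with its
    semantic segmentation map. Frames are HxWx3 floats in [0, 1];
    segmentation maps are HxW in [0, 1]. Returns an HxWx12 tensor
    (channel layout is an assumption for illustration)."""
    planes = []
    for frame, seg in ((empty_bg, seg_empty),
                       (recent_bg, seg_recent),
                       (current, seg_current)):
        planes.append(frame)              # 3 RGB channels
        planes.append(seg[..., None])     # 1 segmentation channel
    return np.concatenate(planes, axis=-1)

def illumination_jitter(frames, rng, scale=0.1):
    """Toy version of an illumination augmentation: scale the global
    brightness of each frame independently, so the network cannot
    assume the background and current frames share illumination."""
    return [np.clip(f * (1.0 + rng.uniform(-scale, scale)), 0.0, 1.0)
            for f in frames]
```

A fully-convolutional network taking this 12-channel tensor as input would then predict a per-pixel foreground probability map of the same spatial size.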

