Published in: Sensors (Basel, Switzerland)

Spatio-Temporal Attention Model for Foreground Detection in Cross-Scene Surveillance Videos



Abstract

Foreground detection is an important theme in video surveillance. Conventional background-modeling approaches build sophisticated temporal statistical models to detect foreground from low-level features, while modern semantic/instance-segmentation approaches generate high-level foreground annotations but ignore the temporal relevance among consecutive frames. In this paper, we propose a Spatio-Temporal Attention Model (STAM) for cross-scene foreground detection. To fill the semantic gap between low- and high-level features, appearance and optical-flow features are synthesized by attention modules during feature learning. Experimental results on the CDnet 2014 benchmark validate the model: it outperforms many state-of-the-art methods on seven evaluation metrics, and the attention modules and optical flow each raise its F-measure. Without any tuning, the model generalizes across scenes on the Wallflower and PETS datasets. The processing speed is 10.8 fps at a frame size of 256 × 256.
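The abstract says attention modules synthesize appearance and optical-flow features but does not give the exact formulation. A common scheme for this kind of fusion is a per-pixel soft attention weighting over the two feature streams; the sketch below illustrates that general idea only, with all names and the 1×1-projection weighting being hypothetical, not the paper's actual architecture.

```python
import numpy as np

def softmax(x, axis=0):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_fuse(appearance, flow, w_a, w_f):
    """Fuse two (C, H, W) feature maps with per-pixel soft attention.

    Attention logits come from channel-wise weighted sums (a stand-in
    for learned 1x1 convolutions); the two score maps are normalized
    with a softmax so the weights sum to 1 at every pixel.
    """
    score_a = np.tensordot(w_a, appearance, axes=([0], [0]))  # (H, W)
    score_f = np.tensordot(w_f, flow, axes=([0], [0]))        # (H, W)
    alpha = softmax(np.stack([score_a, score_f]), axis=0)     # (2, H, W)
    return alpha[0] * appearance + alpha[1] * flow            # (C, H, W)

# Toy example: C=4 channels on an 8x8 spatial grid.
rng = np.random.default_rng(0)
app = rng.standard_normal((4, 8, 8))   # appearance features
flo = rng.standard_normal((4, 8, 8))   # optical-flow features
fused = attention_fuse(app, flo, rng.standard_normal(4), rng.standard_normal(4))
print(fused.shape)  # (4, 8, 8)
```

Because the softmax weights sum to one at each pixel, the fused map is a per-pixel convex combination of the two streams, letting the model lean on motion cues where appearance is ambiguous and vice versa.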
