Journal: Neurocomputing — Video anomaly detection and localization by local motion based joint video representation and OCELM

Video anomaly detection and localization by local motion based joint video representation and OCELM



Abstract

Nowadays, human-based video analysis is becoming increasingly exhausting due to the ubiquitous use of surveillance cameras and the explosive growth of video data. This paper proposes a novel approach to detect and localize video anomalies automatically. For video feature extraction, video volumes are jointly represented by two novel local-motion-based video descriptors, SL-HOF and ULGP-OF. The SL-HOF descriptor captures the spatial distribution of 3D local regions' motion within the spatio-temporal cuboid extracted from video, which implicitly reflects the structural information of the foreground and depicts foreground motion more precisely than the standard HOF descriptor. To locate the video foreground more accurately, we propose a new Robust-PCA-based foreground localization scheme. The ULGP-OF descriptor, which seamlessly combines the classic 2D texture descriptor LGP with optical flow, is proposed to describe the motion statistics of local region texture in the areas located by the foreground localization scheme. Both SL-HOF and ULGP-OF are shown to be more discriminative than existing video descriptors for anomaly detection. To model the features of normal video events, we introduce the newly emergent one-class Extreme Learning Machine (OCELM) as the data description algorithm. With a tremendous reduction in training time, OCELM yields comparable or better performance than existing algorithms such as the classic OCSVM, which makes our approach easier to update and better suited to fast learning from rapidly generated surveillance data. The proposed approach is tested on the UCSD Ped1, Ped2, and UMN datasets, and experimental results show that it achieves state-of-the-art results in both video anomaly detection and localization.
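The SL-HOF and ULGP-OF descriptors both build on histogram-of-optical-flow (HOF) statistics. As background for readers unfamiliar with HOF, the following is a minimal sketch of a plain magnitude-weighted HOF over one local region; it is not the paper's SL-HOF formulation (which additionally encodes the spatial distribution of 3D sub-regions), and the function name and parameters are illustrative.

```python
import numpy as np

def hof_descriptor(flow, n_bins=8):
    """Magnitude-weighted histogram of optical-flow orientations.

    flow: (H, W, 2) dense optical-flow field, channels (dx, dy).
    Returns an L1-normalized histogram over n_bins orientation bins.
    """
    mag = np.hypot(flow[..., 0], flow[..., 1])           # flow magnitude
    ang = np.arctan2(flow[..., 1], flow[..., 0]) % (2 * np.pi)  # [0, 2*pi)
    bins = (ang / (2 * np.pi) * n_bins).astype(int) % n_bins
    hist = np.zeros(n_bins)
    np.add.at(hist, bins.ravel(), mag.ravel())           # weight votes by magnitude
    s = hist.sum()
    return hist / s if s > 0 else hist                   # L1-normalize
```

For example, a patch whose flow vectors all point right (dx > 0, dy = 0) puts all of its mass in the first orientation bin.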
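The OCELM data-description idea can be sketched compactly: a fixed random hidden layer maps inputs to features, and the output weights are solved in closed form (regularized least squares) so that normal training samples map to a constant target; the deviation from that target then serves as an anomaly score. This is a simplified sketch under those assumptions, not the paper's exact model; hyperparameters (`n_hidden`, `C`) and function names are illustrative.

```python
import numpy as np

def ocelm_train(X, n_hidden=50, C=1.0, seed=0):
    """Fit a one-class ELM: map normal data X (n_samples, n_features) to target 1."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))   # random input weights (fixed)
    b = rng.standard_normal(n_hidden)                 # random biases (fixed)
    H = np.tanh(X @ W + b)                            # hidden-layer activations
    t = np.ones((X.shape[0], 1))                      # constant target for normal data
    # Closed-form ridge solution: beta = (H'H + I/C)^-1 H't
    beta = np.linalg.solve(H.T @ H + np.eye(n_hidden) / C, H.T @ t)
    return W, b, beta

def ocelm_score(X, W, b, beta):
    """Anomaly score: deviation of the network output from the target value 1."""
    H = np.tanh(X @ W + b)
    return np.abs(H @ beta - 1.0).ravel()
```

The closed-form solve is what gives the training-time advantage over OCSVM that the abstract highlights: no iterative optimization is needed, only one regularized linear system.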


