International Joint Conference on Neural Networks

Two-stage Unsupervised Video Anomaly Detection using Low-rank based Unsupervised One-class Learning with Ridge Regression


Abstract

Video anomaly detection is a valuable but challenging task, especially in the field of surveillance video for public safety. Almost all existing methods tackle the problem under a supervised setting, and only a few attempts have been made at unsupervised learning. To avoid the cost of labeling training videos, this paper proposes to discriminate anomalies with a novel two-stage framework in a fully unsupervised manner. Unlike previous unsupervised approaches that use local change detection to discover abnormality, our method exploits global information from the video context by considering the pair-wise similarity of all video events. In this way, our method formulates video anomaly detection as an extension of unsupervised one-class learning, which has not been explored in the video anomaly detection literature. Specifically, our method consists of two stages. The first stage, a kernel-based method named Low-rank based Unsupervised One-class Learning with Ridge Regression (LR-UOCL-RR), reformulates the optimization goal of UOCL with ridge regression to avoid expensive computation, which enables our method to handle the massive amount of unlabeled data in videos. In the second stage, the normal video events estimated in the first stage are fed into a one-class support vector machine to refine the profile around normal events and enhance performance. Experimental results on two challenging video benchmarks indicate that our method is considerably superior, by up to a 15.7% AUC gain, to state-of-the-art methods on the unsupervised anomaly detection task, and is even better than several supervised approaches.
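For readers who want a concrete picture of the two-stage flow described above, the sketch below is a minimal, simplified illustration rather than the paper's actual LR-UOCL-RR algorithm: stage 1 is approximated by a plain pair-wise kernel-similarity score over all video events (standing in for the low-rank UOCL objective with ridge regression), and stage 2 reuses scikit-learn's OneClassSVM to refine the profile around the estimated normal events. The function name `two_stage_anomaly_scores`, the `event_features` array, and the `keep_ratio`, `gamma`, and `nu` parameters are all illustrative assumptions; feature extraction for video events is assumed to happen elsewhere.

```python
# Minimal two-stage sketch (NOT the paper's LR-UOCL-RR formulation).
# Stage 1 is a simple pair-wise kernel-similarity normality score;
# stage 2 refines the normal profile with a one-class SVM.
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import OneClassSVM


def two_stage_anomaly_scores(event_features, keep_ratio=0.7, gamma=0.1, nu=0.1):
    """Return an anomaly score per video event (higher = more anomalous).

    event_features: hypothetical (n_events, n_dims) array of pre-extracted
    features, one row per video event.
    """
    # Stage 1 (stand-in for LR-UOCL-RR): score each event by its average
    # pair-wise kernel similarity to all other events, so events that are
    # globally consistent with the video context look "normal".
    K = rbf_kernel(event_features, gamma=gamma)
    np.fill_diagonal(K, 0.0)
    normality = K.mean(axis=1)

    # Keep the most "normal" events as the estimated normal set.
    n_keep = max(1, int(keep_ratio * len(event_features)))
    normal_idx = np.argsort(normality)[-n_keep:]

    # Stage 2: fit a one-class SVM on the estimated normal events to refine
    # the boundary, then score every event against that refined profile.
    ocsvm = OneClassSVM(kernel="rbf", gamma=gamma, nu=nu)
    ocsvm.fit(event_features[normal_idx])

    # Negate the decision values so that larger scores mean more anomalous.
    return -ocsvm.decision_function(event_features)
```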
