Conference proceedings: Electro-optical Remote Sensing, Photonic Technologies, and Applications VI

Consistency in Multi-modal Automated Target Detection using Temporally Filtered Reporting


Abstract

Autonomous target detection is an important goal in the wide-scale deployment of unattended sensor networks. Current approaches are often sample-centric, with an emphasis on achieving maximal detection on any given isolated target signature received. Given the required trade-off between detection sensitivity and false positive target detection, this can often lead to both high false alarm rates and the frequent re-reporting of detected targets. Here, by assuming that the number of samples on a true target will be both high and temporally consistent, we can treat our given detection approach as an ensemble classifier distributed over time, with the classification from each sample, at each time-step, contributing to an overall detection threshold. Following this approach, we develop a mechanism whereby the temporal consistency of a given target must be statistically strong, over a given temporal window, for an onward detection to be reported. If the sensor sample frequency and throughput are high relative to target motion through the field of view (e.g. a 25 fps camera), then such a temporal window can validly be set above the occurrence level of spurious false positive detections. This approach is illustrated using the example of automated real-time vehicle and people detection, in multi-modal visible (EO) and thermal (IR) imagery, deployed on an unattended dual-sensor pod. A sensitive target detection approach, based on a codebook mapping of visual features, classifies target regions initially extracted from the scene using an adaptive background model. The use of temporal filtering provides a consistent, fused onward information feed of targets detected from either or both sensors, whilst minimizing the onward transmission of false positive detections and facilitating the use of an otherwise sensitive detection approach within the robust target reporting context of a deployed sensor network.
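To make the temporal-filtering idea in the abstract concrete, the following is a minimal sketch (not the paper's implementation) of a sliding-window consistency filter: each frame's per-target classification contributes one vote, and a target is reported onward only once the positive votes within the window reach a threshold. The class name, window length, and vote threshold below are illustrative assumptions, not values from the paper.

    from collections import defaultdict, deque

    class TemporalConsistencyFilter:
        """Sliding-window vote filter over per-frame detections.

        Each frame's classification of a candidate target contributes one
        vote; the target is reported onward only once the number of positive
        votes inside the window reaches the required threshold.
        """

        def __init__(self, window_frames=25, min_votes=15):
            # Illustrative values: at 25 fps, a one-second window with a
            # ~60% vote requirement suppresses short-lived false positives.
            self.window_frames = window_frames
            self.min_votes = min_votes
            self._votes = defaultdict(lambda: deque(maxlen=window_frames))

        def update(self, target_id, detected):
            """Record this frame's verdict for target_id; return True if the
            target is temporally consistent enough to be reported onward."""
            votes = self._votes[target_id]
            votes.append(1 if detected else 0)
            return sum(votes) >= self.min_votes

    # Usage: feed per-frame detector verdicts (EO and/or IR) for each candidate.
    filt = TemporalConsistencyFilter(window_frames=25, min_votes=15)
    for frame_idx in range(100):
        hit = frame_idx % 3 != 0  # stand-in for a real per-frame detector verdict
        if filt.update("vehicle_7", hit):
            print(f"frame {frame_idx}: report target vehicle_7")

In a dual-sensor setting, one such filter could be fed by detections from either sensor, so that a target confirmed consistently in EO, IR, or both is passed onward while isolated spurious hits from either modality are withheld.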
