Conference on Algorithms for Synthetic Aperture Radar Imagery

MSTAR's extensible search engine and model-based inferencing toolkit



Abstract

The DARPA/AFRL 'Moving and Stationary Target Acquisition and Recognition' (MSTAR) program is developing a model-based vision approach to Synthetic Aperture Radar (SAR) Automatic Target Recognition (ATR). The motivation for this work is to develop a high performance ATR capability that can identify ground targets in highly unconstrained imaging scenarios that include variable image acquisition geometry, arbitrary target pose and configuration state, differences in target deployment situation, and strong intra-class variations. The MSTAR approach utilizes radar scattering models in an on-line hypothesize-and-test operation that compares predicted target signature statistics with features extracted from image data in an attempt to determine a 'best fit' explanation of the observed image. Central to this processing paradigm is the Search algorithm, which provides intelligent control in selecting features to measure and hypotheses to test, as well as in making the decision about when to stop processing and report a specific target type or clutter. Intelligent management of computation performed by the Search module is a key enabler to scaling the model-based approach to the large hypothesis spaces typical of realistic ATR problems. In this paper, we describe the present state of design and implementation of the MSTAR Search engine, as it has matured over the last three years of the MSTAR program. The evolution has been driven by a continually expanding problem domain that now includes 30 target types, viewed under arbitrary squint/depression, with articulations, reconfigurations, revetments, variable background, and up to 30% blocking occlusion. We believe that the research directions that have been inspired by MSTAR's challenging problem domain are leading to broadly applicable search methodologies that are relevant to computer vision systems in many areas.
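The hypothesize-and-test control loop described in the abstract can be illustrated with a short sketch. The code below is not the MSTAR implementation; it is a minimal Python illustration, under assumed names (`Hypothesis`, `predict_feature`, `measure_feature`, `accept_margin`, `clutter_floor` are all hypothetical), of the generic pattern: predict a feature under each surviving hypothesis, compare the prediction with the measured feature, prune weak hypotheses, and stop either when one hypothesis is decisively ahead or when nothing matches well enough and the region is reported as clutter.

```python
import random
from dataclasses import dataclass
from typing import Optional

@dataclass
class Hypothesis:
    target_type: str
    pose_deg: float
    score: float = 0.0          # accumulated feature-match score (higher is better)

def predict_feature(hyp: Hypothesis, feature_id: int) -> float:
    """Stand-in for a model-driven signature predictor: the value the
    scattering model would expect for this feature under `hyp`."""
    rng = random.Random(hash((hyp.target_type, round(hyp.pose_deg), feature_id)))
    return rng.uniform(0.0, 1.0)

def measure_feature(image: dict, feature_id: int) -> float:
    """Stand-in for feature extraction from the SAR image chip."""
    return image.get(feature_id, 0.0)

def search(image: dict,
           hypotheses: list,
           feature_ids: list,
           accept_margin: float = 0.5,
           clutter_floor: float = -2.0) -> Optional[Hypothesis]:
    """Greedy hypothesize-and-test loop: score every surviving hypothesis on
    one feature at a time, prune the weakest half, and stop early once the
    leader is `accept_margin` ahead of the runner-up.  Returns the winning
    hypothesis, or None to report clutter."""
    active = list(hypotheses)
    for k, fid in enumerate(feature_ids):
        observed = measure_feature(image, fid)
        for hyp in active:
            hyp.score += -abs(observed - predict_feature(hyp, fid))
        active.sort(key=lambda h: h.score, reverse=True)
        if k >= 1 and len(active) > 2:            # prune after two rounds of evidence
            active = active[: max(2, len(active) // 2)]
        if len(active) > 1 and active[0].score - active[1].score > accept_margin:
            break                                  # decisive winner: stop measuring
    best = active[0]
    return best if best.score > clutter_floor else None

# Toy usage: three candidate hypotheses, five features "measured" from a fake chip.
if __name__ == "__main__":
    chip = {fid: 0.5 for fid in range(5)}
    candidates = [Hypothesis("T72", 30.0), Hypothesis("BMP2", 30.0), Hypothesis("BTR70", 120.0)]
    result = search(chip, candidates, list(range(5)))
    print("clutter" if result is None else f"{result.target_type} @ {result.pose_deg} deg")
```

In the actual MSTAR system, prediction comes from physics-based radar scattering models and the Search module chooses which features to measure and when to stop; the placeholders above merely stand in for those components to show the shape of the control flow.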
