Feature Extraction and Object Recognition in Multi-Modal Forward Looking Imagery

Abstract

The U.S. Army Night Vision and Electronic Sensors Directorate (NVESD) recently tested an explosive-hazard detection vehicle that combines a pulsed forward-looking ground-penetrating radar (FLGPR) with a visible-spectrum color camera. Additionally, NVESD tested a human-in-the-loop multi-camera system with the same goal in mind. It contains wide field-of-view color and infrared cameras as well as zoomable narrow field-of-view versions of those modalities. Even though they are separate vehicles, having information from both systems offers great potential for information fusion. Based on previous work at the University of Missouri, we are able not only to register the UTM-based positions from the FLGPR to the color image sequences on the first system, but also to register these locations to the corresponding image frames of all sensors on the human-in-the-loop platform. This paper presents our approach, which first generates libraries of multi-sensor information across these platforms. Subsequently, research is performed on feature extraction and recognition algorithms based on the multi-sensor signatures. Our goal is to tailor specific algorithms to recognize and eliminate different categories of clutter and to identify particular explosive hazards. We demonstrate our library creation, feature extraction, and object recognition results on a large data collection from a U.S. Army test site.
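
The registration described above amounts to projecting a UTM position through each camera's calibrated geometry into the time-matched image frame. The report does not give its registration equations; the sketch below is only a minimal pinhole-camera illustration of that idea, and every function name, parameter name, and numeric value in it is invented for the example rather than taken from the report.

```python
import numpy as np

def project_utm_to_pixel(p_utm, cam_pos_utm, R_world_to_cam, K):
    """Project a UTM point (easting, northing, altitude) into pixel coordinates.

    cam_pos_utm    -- camera position in the same UTM frame (hypothetical pose)
    R_world_to_cam -- 3x3 rotation taking UTM axes into the camera frame
    K              -- 3x3 intrinsic matrix from a camera calibration
    Returns (u, v), or None if the point lies behind the image plane.
    """
    p_cam = R_world_to_cam @ (np.asarray(p_utm, float) - np.asarray(cam_pos_utm, float))
    if p_cam[2] <= 0:
        # Behind the camera: the location cannot appear in this frame.
        return None
    uv = K @ (p_cam / p_cam[2])   # perspective division, then apply intrinsics
    return float(uv[0]), float(uv[1])

# Made-up example: a camera looking along +northing with x = easting and y = down.
K = np.array([[800.0,   0.0, 500.0],
              [  0.0, 800.0, 500.0],
              [  0.0,   0.0,   1.0]])
R_world_to_cam = np.array([[1.0, 0.0,  0.0],
                           [0.0, 0.0, -1.0],
                           [0.0, 1.0,  0.0]])
cam_pos_utm = np.array([565000.0, 4205000.0, 2.0])   # fabricated vehicle pose
alarm_utm   = np.array([565001.0, 4205020.0, 0.0])   # fabricated FLGPR location
print(project_utm_to_pixel(alarm_utm, cam_pos_utm, R_world_to_cam, K))
```

In this sketch, repeating the projection for each calibrated camera on either platform would yield the per-sensor image chips around an alarm location, which is the kind of multi-sensor signature the library-building step collects.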
