SPIE Conference on Photonic Applications for Aerospace, Transportation, and Harsh Environments

Content-dependent on-the-fly visual information fusion for battlefield scenarios



Abstract

We report on a cooperative research program between the Army Research Laboratory (ARL), the Night Vision and Electronic Sensors Directorate (NVESD), and the University of Maryland (UMD). The program aims to develop advanced on-the-fly atmospheric image processing techniques based on local information fusion from single or multiple monochrome and color live video streams captured by imaging sensors in combat or reconnaissance situations. Local information fusion can be based on various local metrics, including local image quality, local image-area motion, and spatio-temporal characteristics of image content. Tools developed in this program are used to identify and fuse critical information to enhance target identification and situational understanding under conditions of severe atmospheric turbulence.
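The abstract describes fusion driven by local metrics such as local image quality. As a purely illustrative sketch, and not the ARL/NVESD/UMD algorithm, the Python snippet below fuses two co-registered grayscale frames block by block, keeping whichever source scores higher on a simple gradient-energy sharpness proxy in each block. The block size, the metric, and the names `fuse_frames` and `local_sharpness` are assumptions made for demonstration only.

```python
# Illustrative sketch of local-quality-based fusion of two co-registered frames.
# Not the program's actual pipeline; block size and metric are assumptions.
import numpy as np

def local_sharpness(patch: np.ndarray) -> float:
    """Gradient-energy proxy for local image quality (higher = sharper)."""
    gy, gx = np.gradient(patch.astype(np.float64))
    return float(np.mean(gx * gx + gy * gy))

def fuse_frames(frame_a: np.ndarray, frame_b: np.ndarray, block: int = 16) -> np.ndarray:
    """Fuse two same-shape grayscale frames by keeping the sharper source per block."""
    assert frame_a.shape == frame_b.shape
    h, w = frame_a.shape
    fused = np.empty_like(frame_a)
    for y in range(0, h, block):
        for x in range(0, w, block):
            pa = frame_a[y:y + block, x:x + block]
            pb = frame_b[y:y + block, x:x + block]
            fused[y:y + block, x:x + block] = (
                pa if local_sharpness(pa) >= local_sharpness(pb) else pb
            )
    return fused

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    a = rng.random((128, 128))
    b = rng.random((128, 128))
    print(fuse_frames(a, b).shape)  # (128, 128)
```

In a live-video setting, one would likely blend sources with per-block weights (and add motion- or temporal-consistency terms) rather than hard-selecting a single source, but that refinement is beyond this sketch.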

