Semi-automated DIRSIG scene modeling from three-dimensional lidar and passive imagery.


Abstract

The Digital Imaging and Remote Sensing Image Generation (DIRSIG) model is an established, first-principles-based scene simulation tool that produces synthetic multispectral and hyperspectral images from the visible to the longwave infrared (0.4 to 20 microns). Over the last few years, significant enhancements such as spectral polarimetric and active Light Detection and Ranging (lidar) models have also been incorporated into the software, providing an extremely powerful tool for multi-sensor algorithm testing and sensor evaluation. However, the extensive time required to create large-scale scenes has limited DIRSIG's ability to generate scenes "on demand." To date, scene generation has been a laborious, time-intensive process, as the terrain model, CAD objects, and background maps have to be created and attributed manually.

To shorten this process, this research developed an approach that reduces the man-in-the-loop requirements for several aspects of synthetic scene construction. Through a fusion of 3D lidar data with passive imagery, we were able to semi-automate several of the required tasks in the DIRSIG scene creation process. Additionally, many of the remaining tasks realized a shortened implementation time through this application of multi-modal imagery.

Lidar data are exploited to identify ground and object features as well as to define initial tree locations and building parameter estimates. These estimates are then refined by analyzing high-resolution frame-array imagery using the concepts of projective geometry in lieu of the more common Euclidean approach found in most traditional photogrammetric references. Spectral imagery is also used to assign material characteristics to the modeled geometric objects; this is achieved through a modified atmospheric compensation applied to raw hyperspectral imagery.

These techniques have been successfully applied to imagery collected over the RIT campus and the greater Rochester area.
The data used include multiple-return point information provided by an Optech lidar linescanning sensor, multispectral frame array imagery from the Wildfire Airborne Sensor Program (WASP) and WASP-lite sensors, and hyperspectral data from the Modular Imaging Spectrometer Instrument (MISI) and the COMPact Airborne Spectral Sensor (COMPASS). Information from these image sources was fused and processed using the semi-automated approach to provide the DIRSIG input files used to define a synthetic scene. When compared to the standard manual process for creating these files, we achieved approximately a tenfold increase in speed, as well as a significant increase in geometric accuracy.
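The projective-geometry refinement mentioned in the abstract rests on the standard pinhole relationship between 3D world points and image pixels, x = P X with P = K [R | t] acting on homogeneous coordinates. A minimal sketch of that mapping (illustrative only, not taken from the dissertation; the camera parameters below are assumed values, not those of the WASP sensors):

```python
import numpy as np

def projection_matrix(K, R, t):
    """Build the 3x4 projective camera matrix P = K [R | t]."""
    return K @ np.hstack([R, t.reshape(3, 1)])

def project_point(P, X_world):
    """Map a 3D world point to 2D pixel coordinates via the perspective divide."""
    X_h = np.append(X_world, 1.0)   # homogeneous 3D point [X, Y, Z, 1]
    x_h = P @ X_h                   # homogeneous image point [u*w, v*w, w]
    return x_h[:2] / x_h[2]         # divide out the homogeneous scale w

# Assumed intrinsics: 1000 px focal length, principal point at (640, 512);
# camera at the origin looking down +Z (identity rotation, zero translation).
K = np.array([[1000.0,    0.0, 640.0],
              [   0.0, 1000.0, 512.0],
              [   0.0,    0.0,   1.0]])
R = np.eye(3)
t = np.zeros(3)

P = projection_matrix(K, R, t)
# A lidar-derived point 10 m right, 5 m up, 100 m ahead of the camera:
uv = project_point(P, np.array([10.0, 5.0, 100.0]))
print(uv)  # → [740. 562.]
```

Working in homogeneous coordinates is what distinguishes this projective formulation from the Euclidean treatment in traditional photogrammetric references: the same 3x4 matrix composes intrinsics and pose in one linear operation, with perspective handled by a single division at the end.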

Bibliographic record

  • Author: Lach, Stephen R.
  • Affiliation: Rochester Institute of Technology
  • Degree-granting institution: Rochester Institute of Technology
  • Subject: Remote Sensing
  • Degree: Ph.D.
  • Year: 2008
  • Pages: 267 p.
  • Total pages: 267
  • Format: PDF
  • Language: English
