
True Color Correction of Autonomous Underwater Vehicle Imagery



Abstract

This paper presents an automated approach to recovering the true color of objects on the seafloor in images collected from multiple perspectives by an autonomous underwater vehicle (AUV) during the construction of three-dimensional (3D) seafloor models and image mosaics. When capturing images underwater, the water column induces several effects on light that are typically negligible in air, such as color-dependent attenuation and backscatter. AUVs must typically carry artificial lighting when operating at depths below 20-30 m; the lighting pattern generated is usually not spatially consistent. These effects cause problems for human interpretation of images, limit the ability to use color to identify benthic biota or quantify changes over multiple dives, and confound computer-based techniques for clustering and classification. Our approach exploits the 3D structure of the scene, generated using structure-from-motion and photogrammetry techniques, to provide basic spatial data to an underwater image formation model. Parameters that depend on the properties of the water column are estimated from the image data itself, rather than from fixed in situ infrastructure such as reflectance panels or detailed data on water constituents. The model accounts for distance-based attenuation and backscatter, camera vignetting, and the artificial lighting pattern, recovering measurements of the true color (reflectance) and thus allowing us to approximate the appearance of the scene as if imaged in air and illuminated from above. Our method is validated against known color targets using imagery collected in different underwater environments by two AUVs that are routinely used as part of a benthic habitat monitoring program.
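The image formation model the abstract describes can be illustrated with a simplified per-channel version. This is a hedged sketch, not the authors' implementation: the symbols `c` (attenuation coefficient) and `B` (backscatter/veiling light) and the omission of the vignetting and lighting-pattern terms are assumptions for illustration; the per-pixel distance `d` stands in for the range derived from the 3D scene model.

```python
import numpy as np

# Simplified per-channel underwater image formation model (a common form;
# the paper's full model also accounts for vignetting and the artificial
# lighting pattern):
#   I = R * exp(-c * d) + B * (1 - exp(-c * d))
# where R is true reflectance, d is camera-to-scene distance, c is the
# color-dependent attenuation coefficient, and B is the backscatter term.

def forward_model(R, d, c, B):
    """Simulate the observed intensity I for reflectance R at distance d."""
    t = np.exp(-c * d)           # transmission through the water column
    return R * t + B * (1.0 - t)

def recover_reflectance(I, d, c, B):
    """Invert the model to estimate true color from an observed image."""
    t = np.exp(-c * d)
    return (I - B * (1.0 - t)) / t
```

Because attenuation is color-dependent (red channels attenuate fastest in seawater), `c` and `B` are vectors over color channels; given the distance map from the photogrammetric reconstruction, the inversion can be applied per pixel.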

Bibliographic Record

  • Source
    Journal of Field Robotics | 2016, Issue 6 | pp. 853-874 | 22 pages
  • Author Affiliations

    Australian Centre for Field Robotics, The University of Sydney, NSW, Australia;

    Department of Naval Architecture and Marine Engineering, University of Michigan, Ann Arbor, MI, USA;

    Australian Centre for Field Robotics, The University of Sydney, NSW, Australia;

    Australian Centre for Field Robotics, The University of Sydney, NSW, Australia;

  • Indexing Information
  • Original Format: PDF
  • Language: eng
  • CLC Classification
  • Keywords

