ISPRS Journal of Photogrammetry and Remote Sensing

A multi-modal garden dataset and hybrid 3D dense reconstruction framework based on panoramic stereo images for a trimming robot



Abstract

© 2023 International Society for Photogrammetry and Remote Sensing, Inc. (ISPRS). Recovering an outdoor environment's surface mesh is vital for an agricultural robot during task planning and remote visualization. Image-based dense 3D reconstruction is sensitive to large movements between adjacent frames and to the quality of the estimated depth maps. Our proposed solution to these problems is based on a newly designed panoramic stereo camera together with a novel hybrid software framework consisting of three fusion modules: disparity fusion, pose fusion, and volumetric fusion. The pentagon-shaped panoramic stereo camera comprises five stereo camera pairs that stream synchronized panoramic stereo images to the three fusion modules. In the disparity fusion module, rectified stereo images are processed by multiple stereo vision algorithms to produce initial disparity maps. These initial disparity maps, together with the intensity images, are then fed into a disparity fusion network that produces refined disparity maps. The refined disparity maps are converted into full-view (360°) point clouds or single-view (72°) point clouds for the pose fusion module. The pose fusion module adopts a two-stage, global-coarse-to-local-fine strategy. In the first stage, each pair of full-view point clouds is registered by a global point cloud matching algorithm to estimate the transformation for an edge of a global pose graph, which effectively implements loop closure. In the second stage, a local point cloud matching algorithm is used to match single-view point clouds across different nodes. The poses of all corresponding edges in the global pose graph are then locally refined using three proposed rules, yielding a refined pose graph. The refined pose graph is optimized to produce a global pose trajectory for volumetric fusion. In the volumetric fusion module, the global poses of all the nodes are used to integrate the single-view point clouds into a volume and produce a mesh of the whole garden. The proposed framework and its three fusion modules are tested on a real outdoor garden dataset to demonstrate their superior performance. The whole pipeline takes about 4 min on a desktop computer to process the real garden dataset, which is available at: https://github.com/Canpu999/Trimbot-Wageningen-SLAM-Dataset.
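
To make the disparity fusion module's input stage concrete, the following is a minimal sketch (not the authors' code) that computes one initial disparity map with OpenCV's semi-global matching and back-projects it to a single-view point cloud. The file names, the reprojection matrix path, and all matcher parameters are illustrative assumptions; the paper additionally runs several stereo algorithms and refines their outputs with a learned disparity fusion network, which is not reproduced here.

```python
import cv2
import numpy as np

# Hypothetical rectified image pair from one of the rig's five stereo pairs.
left = cv2.imread("left_rect.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right_rect.png", cv2.IMREAD_GRAYSCALE)

# Semi-global matching: one classical algorithm that could supply one of the
# "initial disparity maps" the fusion network consumes. Parameters are guesses.
block = 5
sgbm = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=128,            # must be a multiple of 16
    blockSize=block,
    P1=8 * block * block,          # penalty for small disparity changes
    P2=32 * block * block,         # penalty for large disparity changes
    uniquenessRatio=10,
    speckleWindowSize=100,
    speckleRange=2,
)
disp = sgbm.compute(left, right).astype(np.float32) / 16.0  # fixed point -> pixels

# Back-project to a single-view (72°) point cloud using the 4x4 reprojection
# matrix Q from cv2.stereoRectify (assumed precomputed and saved to disk).
Q = np.load("Q.npy")
points = cv2.reprojectImageTo3D(disp, Q)
cloud = points[disp > 0]           # keep only pixels with a valid disparity
```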
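The pose fusion module's two-stage, global-coarse-to-local-fine matching can be sketched with off-the-shelf Open3D registration primitives. This is an assumed stand-in, not the paper's algorithms: RANSAC over FPFH features plays the role of the global matcher for full-view clouds, and point-to-plane ICP plays the role of the local matcher for single-view clouds. Voxel sizes and thresholds are illustrative.

```python
import open3d as o3d

def preprocess(pcd, voxel=0.05):
    """Downsample and attach normals and FPFH features (parameters are guesses)."""
    down = pcd.voxel_down_sample(voxel)
    down.estimate_normals(
        o3d.geometry.KDTreeSearchParamHybrid(radius=voxel * 2, max_nn=30))
    fpfh = o3d.pipelines.registration.compute_fpfh_feature(
        down, o3d.geometry.KDTreeSearchParamHybrid(radius=voxel * 5, max_nn=100))
    return down, fpfh

def global_register(src, dst, voxel=0.05):
    """Stage 1: coarse global matching of two full-view (360°) clouds,
    e.g. to create a loop-closure edge in the global pose graph."""
    s, sf = preprocess(src, voxel)
    d, df = preprocess(dst, voxel)
    result = o3d.pipelines.registration.registration_ransac_based_on_feature_matching(
        s, d, sf, df, True, voxel * 1.5,
        o3d.pipelines.registration.TransformationEstimationPointToPoint(False),
        3, [],
        o3d.pipelines.registration.RANSACConvergenceCriteria(100000, 0.999))
    return result.transformation

def local_refine(src, dst, init, voxel=0.05):
    """Stage 2: fine local matching of single-view (72°) clouds,
    initialized with the coarse estimate from stage 1."""
    dst.estimate_normals(
        o3d.geometry.KDTreeSearchParamHybrid(radius=voxel * 2, max_nn=30))
    result = o3d.pipelines.registration.registration_icp(
        src, dst, voxel, init,
        o3d.pipelines.registration.TransformationEstimationPointToPlane())
    return result.transformation
```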
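The refined pose graph can likewise be built and optimized with Open3D's pose graph tools. The abstract's three edge-refinement rules are specific to the paper and are not reproduced; the loop below simply chains odometry edges between adjacent nodes and marks non-adjacent matches as uncertain loop closures. The inputs `clouds` and `transforms` are hypothetical, standing for the point clouds and the relative poses estimated by the registration sketch above.

```python
import numpy as np
import open3d as o3d
reg = o3d.pipelines.registration

# clouds: list of point clouds, one per node; transforms[(i, j)]: relative
# pose mapping node i into node j (both hypothetical inputs).
graph = reg.PoseGraph()
odometry = np.eye(4)
graph.nodes.append(reg.PoseGraphNode(odometry))

for (i, j), T in sorted(transforms.items()):
    info = reg.get_information_matrix_from_point_clouds(
        clouds[i], clouds[j], 0.07, T)
    if j == i + 1:                       # odometry edge between adjacent nodes
        odometry = T @ odometry
        graph.nodes.append(reg.PoseGraphNode(np.linalg.inv(odometry)))
        graph.edges.append(reg.PoseGraphEdge(i, j, T, info, uncertain=False))
    else:                                # loop-closure edge (non-adjacent nodes)
        graph.edges.append(reg.PoseGraphEdge(i, j, T, info, uncertain=True))

reg.global_optimization(
    graph,
    reg.GlobalOptimizationLevenbergMarquardt(),
    reg.GlobalOptimizationConvergenceCriteria(),
    reg.GlobalOptimizationOption(max_correspondence_distance=0.07,
                                 edge_prune_threshold=0.25,
                                 reference_node=0))
poses = [node.pose for node in graph.nodes]   # global trajectory for fusion
```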
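Finally, the volumetric fusion module's behavior, integrating depth from every node under its optimized global pose and extracting a single garden mesh, is what a TSDF volume provides; here is a minimal sketch using Open3D's ScalableTSDFVolume. The intrinsics, file layout, voxel size, and depth scale are all assumptions, and `poses` refers to the optimized trajectory from the pose graph sketch above.

```python
import numpy as np
import open3d as o3d

# Illustrative pinhole intrinsics for one camera of the rig (assumed values):
# width, height, fx, fy, cx, cy.
intrinsic = o3d.camera.PinholeCameraIntrinsic(752, 480, 420.0, 420.0, 376.0, 240.0)

volume = o3d.pipelines.integration.ScalableTSDFVolume(
    voxel_length=0.02,               # 2 cm voxels (illustrative)
    sdf_trunc=0.08,                  # truncation distance (illustrative)
    color_type=o3d.pipelines.integration.TSDFVolumeColorType.RGB8)

for i, pose in enumerate(poses):     # optimized camera-to-world node poses
    color = o3d.io.read_image(f"color_{i:04d}.png")   # assumed file layout
    depth = o3d.io.read_image(f"depth_{i:04d}.png")   # depth in millimetres
    rgbd = o3d.geometry.RGBDImage.create_from_color_and_depth(
        color, depth, depth_scale=1000.0, depth_trunc=10.0,
        convert_rgb_to_intensity=False)
    # integrate() expects the world-to-camera extrinsic, i.e. the inverse pose.
    volume.integrate(rgbd, intrinsic, np.linalg.inv(pose))

mesh = volume.extract_triangle_mesh()
mesh.compute_vertex_normals()
o3d.io.write_triangle_mesh("garden_mesh.ply", mesh)
```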
