ISPRS Journal of Photogrammetry and Remote Sensing

GraNet: Global relation-aware attentional network for semantic segmentation of ALS point clouds


Abstract

Semantic labeling is an essential but challenging task when interpreting point clouds of 3D scenes. As a core step of scene interpretation, semantic labeling is the task of annotating every point in a point cloud with a label of semantic meaning, and it plays a significant role in many point-cloud-related applications. For airborne laser scanning (ALS) point clouds, precise annotations can considerably broaden their use in various applications. However, accurate and efficient semantic labeling remains challenging due to sensor noise, complex object structures, incomplete data, and uneven point densities. In this work, we propose a novel neural network for semantic labeling of ALS point clouds that investigates the importance of long-range spatial and channel-wise relations, termed the global relation-aware attentional network (GraNet). GraNet first learns local geometric descriptions and local dependencies using a local spatial discrepancy attention convolution module (LoSDA). In LoSDA, orientation, spatial distribution, and elevation information are fully considered by stacking several local spatial geometric learning modules, and local dependencies are learned with an attention pooling module. Then, a global relation-aware attention module (GRA), consisting of a spatial relation-aware attention module (SRA) and a channel relation-aware attention module (CRA), is presented to learn attention from global structural information encoded in these relations and to enhance high-level features with long-range dependencies. These two modules are aggregated in a multi-scale network architecture to further account for scale changes in large urban areas. We conducted comprehensive experiments on three ALS point cloud datasets to evaluate the performance of the proposed framework. The results show that our method achieves higher classification accuracy than other commonly used advanced classification methods. On the ISPRS benchmark dataset, our method improves the overall accuracy (OA) to 84.5% and the average F1 measure (AvgF1) to 73.6%, outperforming the other baselines. In addition, experiments were conducted on a new ALS point cloud dataset covering highly dense urban areas and on a newly published large-scale dataset.
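To make the global relation-aware attention idea concrete, the sketch below shows a minimal PyTorch version of the two branches named in the abstract: a spatial relation-aware attention (SRA) that lets every point attend to all other points, and a channel relation-aware attention (CRA) that re-weights feature channels by their pairwise relations. Module names, channel sizes, and the residual fusion with a learnable scalar are illustrative assumptions, not the authors' exact GraNet implementation.

```python
# Hedged sketch of SRA/CRA-style global attention for point features.
# Assumes point features shaped (batch, channels, num_points); details are illustrative.
import torch
import torch.nn as nn


class SpatialRelationAttention(nn.Module):
    """Non-local style attention over points: every point attends to all others."""

    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        inner = channels // reduction
        self.query = nn.Conv1d(channels, inner, 1)
        self.key = nn.Conv1d(channels, inner, 1)
        self.value = nn.Conv1d(channels, channels, 1)
        self.softmax = nn.Softmax(dim=-1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learnable residual weight

    def forward(self, x):                        # x: (B, C, N) point features
        q = self.query(x).permute(0, 2, 1)        # (B, N, C')
        k = self.key(x)                           # (B, C', N)
        attn = self.softmax(torch.bmm(q, k))      # (B, N, N) point-to-point relations
        v = self.value(x)                         # (B, C, N)
        out = torch.bmm(v, attn.permute(0, 2, 1))  # aggregate features over all points
        return self.gamma * out + x               # residual connection


class ChannelRelationAttention(nn.Module):
    """Attention over feature channels: captures long-range inter-channel dependencies."""

    def __init__(self):
        super().__init__()
        self.softmax = nn.Softmax(dim=-1)
        self.gamma = nn.Parameter(torch.zeros(1))

    def forward(self, x):                         # x: (B, C, N)
        attn = self.softmax(torch.bmm(x, x.permute(0, 2, 1)))  # (B, C, C) channel relations
        out = torch.bmm(attn, x)                  # re-weight channels by their relations
        return self.gamma * out + x


if __name__ == "__main__":
    feats = torch.randn(2, 64, 1024)              # 2 blocks, 64 channels, 1024 points
    feats = SpatialRelationAttention(64)(feats)
    feats = ChannelRelationAttention()(feats)
    print(feats.shape)                            # torch.Size([2, 64, 1024])
```

Because both branches compute dense pairwise relations, the attention maps grow quadratically with the number of points or channels, which is why such modules are typically applied to per-block features at a reduced resolution rather than to a full ALS tile.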
