Sensors (Basel, Switzerland)

ALS Point Cloud Classification by Integrating an Improved Fully Convolutional Network into Transfer Learning with Multi-Scale and Multi-View Deep Features



Abstract

Airborne laser scanning (ALS) point clouds have been widely used in various fields because they can acquire three-dimensional data with high accuracy at a large scale. However, because ALS data are discrete, irregularly distributed, and contain noise, accurately identifying typical surface objects from 3D point clouds remains a challenge. In recent years, many researchers have achieved better results in classifying 3D point clouds by using different deep learning methods. However, most of these methods require a large number of training samples and cannot be widely applied in complex scenarios. In this paper, we propose an ALS point cloud classification method that integrates an improved fully convolutional network into transfer learning with multi-scale and multi-view deep features. First, shallow features of the ALS point cloud, such as height, intensity, and change of curvature, are extracted to generate feature maps through multi-scale voxelization and multi-view projection. Second, these feature maps are fed into a pre-trained DenseNet201 model to derive deep features, which serve as input to a fully convolutional neural network with convolutional and pooling layers. This network integrates local and global features to classify the ALS point cloud. Finally, a graph-cuts algorithm that considers context information is used to refine the classification results. We tested our method on the semantic 3D labeling dataset of the International Society for Photogrammetry and Remote Sensing (ISPRS). Experimental results show that the overall accuracy and average F1 score obtained by the proposed method are 89.84% and 83.62%, respectively, when only 16,000 points of the original data are used for training.
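As a rough illustration of the feature-map generation step, the sketch below rasterizes the height and intensity of a point neighborhood into a top-down image. The grid size, cell size, channel choice, and the project_top_view helper are assumptions made for illustration; the paper's exact multi-scale voxel and multi-view projection scheme is not reproduced here.

```python
import numpy as np

def project_top_view(points, intensities, grid_size=64, cell=0.5):
    """Rasterize a point neighborhood into a top-down feature map (sketch).

    points      : (N, 3) array of x, y, z coordinates
    intensities : (N,) array of return intensities
    grid_size   : output map is grid_size x grid_size cells (assumed value)
    cell        : cell edge length in metres (assumed value)

    Returns a (grid_size, grid_size, 2) map holding, per cell,
    the maximum relative height and the mean intensity.
    """
    # Shift to a local origin so cell indices start at zero.
    xy = points[:, :2] - points[:, :2].min(axis=0)
    z = points[:, 2] - points[:, 2].min()

    cols = np.clip((xy[:, 0] / cell).astype(int), 0, grid_size - 1)
    rows = np.clip((xy[:, 1] / cell).astype(int), 0, grid_size - 1)

    height_map = np.zeros((grid_size, grid_size))
    inten_sum = np.zeros((grid_size, grid_size))
    counts = np.zeros((grid_size, grid_size))

    for r, c, h, i in zip(rows, cols, z, intensities):
        height_map[r, c] = max(height_map[r, c], h)  # keep highest point per cell
        inten_sum[r, c] += i
        counts[r, c] += 1

    # Mean intensity per occupied cell; empty cells stay zero.
    inten_map = np.divide(inten_sum, counts, out=np.zeros_like(inten_sum),
                          where=counts > 0)
    return np.stack([height_map, inten_map], axis=-1)
```

Repeating such a projection at several cell sizes and from several viewing directions would yield the kind of multi-scale, multi-view feature maps described in the abstract.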
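The transfer-learning step can be sketched with torchvision's ImageNet-pretrained DenseNet201, using its convolutional trunk as a frozen deep-feature extractor. The 3-channel packing (e.g., height, intensity, change of curvature) and the 224x224 input size below are assumptions for illustration, not the paper's exact configuration.

```python
import torch
from torchvision import models, transforms

# Load DenseNet201 pre-trained on ImageNet and keep only its convolutional trunk.
densenet = models.densenet201(weights=models.DenseNet201_Weights.DEFAULT)
feature_extractor = densenet.features.eval()
for p in feature_extractor.parameters():
    p.requires_grad = False  # frozen: used purely as a deep-feature extractor

# Normalization expected by ImageNet-pretrained torchvision models.
preprocess = transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                  std=[0.229, 0.224, 0.225])

def deep_features(feature_map):
    """feature_map: (H, W, 3) float tensor in [0, 1], e.g. height, intensity and
    change-of-curvature channels stacked as an RGB-like image (assumed packing)."""
    x = feature_map.permute(2, 0, 1).unsqueeze(0)   # -> (1, 3, H, W)
    x = preprocess(x)
    with torch.no_grad():
        f = feature_extractor(x)                    # -> (1, 1920, H/32, W/32)
    return f

# Example with a random 224x224 map; real maps come from the projection step.
maps = torch.rand(224, 224, 3)
print(deep_features(maps).shape)  # torch.Size([1, 1920, 7, 7])
```

These deep feature tensors would then be fed to the fully convolutional classification network, with a graph-cuts step refining the per-point labels afterwards.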
