Wireless communications & mobile computing

A Deep Multiscale Fusion Method via Low-Rank Sparse Decomposition for Object Saliency Detection Based on Urban Data in Optical Remote Sensing Images



Abstract

Urban data provide a wealth of information that can support people's lives and work. In this work, we study object saliency detection in optical remote sensing images, which is conducive to the interpretation of urban scenes. Saliency detection selects the regions carrying important information in remote sensing images, closely imitating the human visual system, and plays a powerful role in other image processing tasks. It has achieved notable success in change detection, object tracking, temperature reversal, and other tasks. Traditional methods suffer from drawbacks such as poor robustness and high computational complexity. Therefore, this paper proposes a deep multiscale fusion method via low-rank sparse decomposition for object saliency detection in optical remote sensing images. First, we perform multiscale segmentation of the remote sensing images. Then, we calculate the saliency values and generate proposal regions. The superpixel blocks of the remaining proposal regions in the segmentation map are fed into a convolutional neural network. By extracting deep features, the saliency values are recalculated and the proposal regions are updated. The feature transformation matrix is learned by gradient descent, and high-level semantic prior knowledge is obtained with the convolutional neural network. This process is iterated to obtain the saliency map at each scale. Low-rank sparse decomposition of the transformed feature matrix is then performed by robust principal component analysis. Finally, a weighted cellular automata method fuses the multiscale saliency maps with the saliency map computed from the sparse component obtained by the decomposition. Meanwhile, the object prior knowledge filters out most of the background information, reduces unnecessary deep feature extraction, and meaningfully improves the saliency detection rate. Experimental results show that the proposed method improves the detection performance compared with other deep learning methods.
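The abstract describes a robust principal component analysis step that splits the transformed feature matrix into a low-rank part and a sparse part, with the sparse component then used to build a saliency map. The snippet below is a minimal sketch of such a decomposition, not the authors' implementation: it uses the standard inexact augmented Lagrange multiplier scheme, and the feature matrix `D` and the parameters `lam` and `mu` are illustrative defaults rather than values taken from the paper.

```python
import numpy as np

def robust_pca(D, lam=None, mu=None, tol=1e-7, max_iter=500):
    """Decompose D into a low-rank part L and a sparse part S (D ≈ L + S)
    by minimizing ||L||_* + lam * ||S||_1 with an inexact ALM scheme."""
    m, n = D.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(m, n))          # common default weight on the sparse term
    if mu is None:
        mu = 0.25 * m * n / (np.abs(D).sum() + 1e-12)
    Y = D / max(np.linalg.norm(D, 2), np.abs(D).max() / lam)  # scaled dual variable
    L = np.zeros_like(D, dtype=float)
    S = np.zeros_like(D, dtype=float)
    for _ in range(max_iter):
        # Singular value thresholding updates the low-rank component.
        U, sig, Vt = np.linalg.svd(D - S + Y / mu, full_matrices=False)
        L = (U * np.maximum(sig - 1.0 / mu, 0.0)) @ Vt
        # Soft thresholding updates the sparse component.
        R = D - L + Y / mu
        S = np.sign(R) * np.maximum(np.abs(R) - lam / mu, 0.0)
        # Dual update and convergence check on the residual.
        Z = D - L - S
        Y = Y + mu * Z
        if np.linalg.norm(Z, 'fro') <= tol * np.linalg.norm(D, 'fro'):
            break
    return L, S

# Illustrative use on a synthetic matrix: a rank-3 "background" plus a few
# large sparse entries standing in for salient outliers.
rng = np.random.default_rng(0)
D = rng.standard_normal((60, 3)) @ rng.standard_normal((3, 60))
D += (rng.random((60, 60)) < 0.05) * 8.0
L, S = robust_pca(D)
```

In the method described above, the recovered sparse component S would correspond to salient structure, while the low-rank component L captures the redundant background; the subsequent multiscale fusion step is not shown here.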

