ISPRS Journal of Photogrammetry and Remote Sensing

Robust deep alignment network with remote sensing knowledge graph for zero-shot and generalized zero-shot remote sensing image scene classification

Abstract

Although deep learning has revolutionized remote sensing (RS) image scene classification, current deep learning-based approaches depend heavily on massive supervision over predetermined scene categories and perform disappointingly poorly on new categories beyond those predetermined ones. In reality, the classification task often has to be extended as new applications emerge that inevitably involve new categories of RS image scenes, so it becomes extremely important to endow the deep learning model with the inference ability to recognize RS image scenes from unseen categories, i.e., categories that do not overlap with the predetermined scene categories used in the training stage. By fully exploiting the characteristics of the RS domain, this paper constructs a new remote sensing knowledge graph (RSKG) from scratch to support the inference-based recognition of unseen RS image scenes. To improve the semantic representation ability of RS-oriented scene categories, this paper proposes to generate a Semantic Representation of scene categories by representation learning of the RSKG (SR-RSKG). To pursue robust cross-modal matching between visual features and semantic representations, this paper proposes a novel deep alignment network (DAN) with a series of well-designed optimization constraints, which can simultaneously address zero-shot and generalized zero-shot RS image scene classification. Extensive experiments on a merged RS image scene dataset, which integrates multiple publicly available datasets, show that the proposed SR-RSKG clearly outperforms traditional knowledge types (e.g., natural language processing models and manually annotated attribute vectors), and that the proposed DAN outperforms state-of-the-art methods under both the zero-shot and generalized zero-shot RS image scene classification settings. The constructed RSKG will be made publicly available along with this paper (https://github.com/kdy2021/SR-RSKG).
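
The abstract does not include code; as a rough illustration of the cross-modal alignment idea it describes (and not the authors' DAN or its optimization constraints), the PyTorch sketch below projects visual scene features into a class-semantic space and labels an image from an unseen category by its nearest class embedding. The dimensions, the single linear projection, and all names (VisualSemanticAligner, zero_shot_classify) are illustrative assumptions standing in for the paper's components, with random vectors standing in for SR-RSKG category embeddings.

    # Minimal sketch of zero-shot scene classification via visual-semantic
    # alignment; hypothetical stand-in for the approach, not the authors' code.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class VisualSemanticAligner(nn.Module):
        """Projects CNN scene features into the class-semantic space."""
        def __init__(self, visual_dim: int = 2048, semantic_dim: int = 300):
            super().__init__()
            self.project = nn.Linear(visual_dim, semantic_dim)

        def forward(self, visual_feats: torch.Tensor) -> torch.Tensor:
            # L2-normalize so cosine similarity reduces to a dot product.
            return F.normalize(self.project(visual_feats), dim=-1)

    def zero_shot_classify(aligner, visual_feats, class_embeddings):
        """Assign each image to the class with the closest semantic embedding."""
        projected = aligner(visual_feats)                    # (B, semantic_dim)
        class_emb = F.normalize(class_embeddings, dim=-1)    # (C, semantic_dim)
        scores = projected @ class_emb.t()                   # cosine similarities
        return scores.argmax(dim=1)

    if __name__ == "__main__":
        aligner = VisualSemanticAligner()
        feats = torch.randn(4, 2048)             # e.g. CNN features of 4 RS scenes
        unseen_class_emb = torch.randn(5, 300)   # stand-in for 5 unseen-category vectors
        print(zero_shot_classify(aligner, feats, unseen_class_emb))

In the generalized zero-shot setting, the candidate class embeddings would cover both seen and unseen categories rather than unseen categories alone.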