Venue: International Conference on Computer Vision

Weakly Aligned Cross-Modal Learning for Multispectral Pedestrian Detection

Abstract

Multispectral pedestrian detection has shown great advantages under poor illumination conditions, since the thermal modality provides information complementary to the color image. However, real multispectral data suffer from the position shift problem: the color-thermal image pairs are not strictly aligned, so the same object appears at different positions in the two modalities. In deep learning based methods, this problem makes it difficult to fuse the feature maps from the two modalities and hampers CNN training. In this paper, we propose a novel Aligned Region CNN (AR-CNN) to handle weakly aligned multispectral data in an end-to-end way. Firstly, we design a Region Feature Alignment (RFA) module to capture the position shift and adaptively align the region features of the two modalities. Secondly, we present a new multimodal fusion method, which performs feature re-weighting to select the more reliable features and suppress useless ones. Besides, we propose a novel RoI jitter strategy to improve robustness to the unexpected shift patterns of different devices and system settings. Finally, since our method depends on a new kind of labelling, bounding boxes that match each modality, we manually relabel the KAIST dataset by locating bounding boxes in both modalities and building their relationships, providing a new KAIST-Paired Annotation. Extensive experimental validations on existing datasets demonstrate the effectiveness and robustness of the proposed method. Code and data are available at https://github.com/luzhang16/AR-CNN.
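The abstract's feature re-weighting idea (scaling each modality's features by a learned or derived confidence before fusing) can be illustrated with a minimal sketch. This is an assumption-laden toy, not the paper's AR-CNN formulation: the function name `reweight_fuse` and the use of global mean activation as a confidence score are hypothetical simplifications introduced here for illustration.

```python
import numpy as np

def reweight_fuse(color_feat, thermal_feat):
    """Toy sketch of confidence-based feature re-weighting fusion.

    Each modality's feature map is assigned a scalar gate derived from
    its global activation strength; a softmax over the two gates yields
    fusion weights that sum to 1.  (Illustrative only -- AR-CNN's actual
    re-weighting is learned inside the network.)
    """
    # global average activation acts as a crude per-modality confidence
    c_score = color_feat.mean()
    t_score = thermal_feat.mean()
    # softmax over the two scores -> normalized fusion weights
    exps = np.exp(np.array([c_score, t_score]))
    w_color, w_thermal = exps / exps.sum()
    # weighted sum: the stronger modality dominates the fused features
    return w_color * color_feat + w_thermal * thermal_feat
```

Under this sketch, a modality with weak activations (e.g. a color image at night) is automatically down-weighted relative to the thermal features, which is the qualitative behavior the abstract describes.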
