
Robust Multispectral Pedestrian Detection via Uncertainty-Aware Cross-Modal Learning


Abstract

With the development of deep neural networks, multispectral pedestrian detection has received great attention for its ability to exploit the complementary properties of multiple modalities (e.g., color-visible and thermal). Previous works usually rely on network prediction scores to combine complementary modal information. However, deep neural networks are widely known to be overconfident, which limits performance. In this paper, we propose a novel uncertainty-aware cross-modal learning approach to alleviate this problem in multispectral pedestrian detection. First, we extract object region uncertainty, which represents the reliability of the object region features in each modality, and combine the modal object region features according to this uncertainty. Second, we guide the classifier of the detection framework with soft target labels so that it is aware of the level of object region uncertainty across modalities. To verify the effectiveness of the proposed methods, we conduct extensive experiments with various detection frameworks on two public datasets (i.e., the KAIST Multispectral Pedestrian Dataset and CVC-14).
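The abstract only outlines the two ideas (uncertainty-weighted fusion of modal region features, and uncertainty-aware soft target labels). Below is a minimal, hypothetical PyTorch sketch of how such a scheme could look; the module structure, the log-variance heads, and the soft-label formula are assumptions for illustration, not the paper's actual implementation.

```python
# Hypothetical sketch: uncertainty-weighted cross-modal RoI-feature fusion
# plus uncertainty-aware soft labels. Names (rgb_feat, thermal_feat, etc.)
# are illustrative and do not come from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class UncertaintyWeightedFusion(nn.Module):
    def __init__(self, feat_dim: int):
        super().__init__()
        # One small head per modality predicting a scalar log-variance
        # (region uncertainty) for each object region feature.
        self.rgb_uncertainty = nn.Linear(feat_dim, 1)
        self.thermal_uncertainty = nn.Linear(feat_dim, 1)

    def forward(self, rgb_feat: torch.Tensor, thermal_feat: torch.Tensor):
        # rgb_feat, thermal_feat: (num_regions, feat_dim) RoI features.
        log_var_rgb = self.rgb_uncertainty(rgb_feat)          # (N, 1)
        log_var_thm = self.thermal_uncertainty(thermal_feat)  # (N, 1)

        # Lower uncertainty -> larger fusion weight (softmax over modalities).
        weights = F.softmax(torch.cat([-log_var_rgb, -log_var_thm], dim=1), dim=1)
        fused = weights[:, :1] * rgb_feat + weights[:, 1:] * thermal_feat

        # Soft target labels: shrink the positive label toward 0.5 when the
        # combined region uncertainty is high (assumed formulation).
        uncertainty = torch.sigmoid(0.5 * (log_var_rgb + log_var_thm)).squeeze(1)
        hard_labels = torch.ones(rgb_feat.size(0))  # e.g. all pedestrian regions
        soft_labels = hard_labels * (1.0 - 0.5 * uncertainty)
        return fused, soft_labels

if __name__ == "__main__":
    fusion = UncertaintyWeightedFusion(feat_dim=256)
    rgb = torch.randn(8, 256)
    thermal = torch.randn(8, 256)
    fused, soft_labels = fusion(rgb, thermal)
    print(fused.shape, soft_labels.shape)  # torch.Size([8, 256]) torch.Size([8])
```

In this reading, the fused feature would feed the detection head, and the soft labels would replace the hard classification targets so that unreliable regions contribute weaker supervision.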
