IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems

REIN the RobuTS: Robust DNN-Based Image Recognition in Autonomous Driving Systems

Abstract

In recent years, neural networks (NNs) have shown great potential in image recognition tasks for autonomous driving systems, such as traffic sign recognition and pedestrian detection. However, NNs that are well trained in theory often see their performance degrade when facing real-world scenarios. For example, adverse real-world conditions, e.g., bad weather and lighting, introduce physical variations that cause considerable accuracy degradation. To date, the generalization capability of NNs remains one of the most critical challenges for autonomous driving systems. To facilitate robust image recognition, in this work we build the RobuTS dataset: a comprehensive Robust Traffic Sign recognition dataset containing images with different environmental variations, e.g., rain, fog, darkening, and blurring. To enhance the NN's generalization capability, we then propose two generalization-enhanced training schemes: 1) REIN, for robust training without any data from adverse scenarios, and 2) Self-Teaching (ST), for robust training with unlabeled adverse data. The key advantage of these two training schemes is that they are data-free (REIN) and label-free (ST), respectively, which greatly reduces the human effort and cost of on-road driving data collection as well as expensive manual data annotation. We conduct extensive experiments to validate our methods on both classification and detection tasks. For classification, the proposed training algorithms consistently improve model performance by +15%-25% (REIN) and +16%-30% (ST) across all adverse scenarios of our RobuTS dataset. For detection, ST improves the detector's performance by +10.1 mean average precision (mAP) on Foggy-Cityscapes, outperforming previous state-of-the-art works by +2.2 mAP.
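The abstract only names the environmental variations covered by RobuTS (rain, fog, darkening, blurring) and does not describe how they are produced. As a rough, hypothetical illustration of how such variations can be approximated on clean images for evaluation, the following PyTorch/torchvision sketch synthesizes darkened, blurred, and hazy variants of a float image tensor in [0, 1]; the functions darken, blur, and fog and their parameter values are illustrative assumptions, not the paper's dataset-generation pipeline.

import torch
import torchvision.transforms.functional as TF

def darken(img: torch.Tensor, factor: float = 0.4) -> torch.Tensor:
    # Reduce brightness to mimic low-light / night conditions.
    return TF.adjust_brightness(img, factor)

def blur(img: torch.Tensor, kernel_size: int = 9, sigma: float = 3.0) -> torch.Tensor:
    # Gaussian blur to mimic defocus or motion smear.
    return TF.gaussian_blur(img, kernel_size=kernel_size, sigma=sigma)

def fog(img: torch.Tensor, density: float = 0.5) -> torch.Tensor:
    # Simple uniform haze: blend the image toward white.
    # Assumes a float tensor with values in [0, 1].
    return (1 - density) * img + density * torch.ones_like(img)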
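Similarly, the abstract describes Self-Teaching (ST) only as robust training with unlabeled adverse data; the exact algorithm is not given here. A common way to exploit unlabeled data is confidence-thresholded pseudo-labeling, and the minimal sketch below shows one such training step purely as an assumed illustration, not the paper's actual ST method. The names self_teach_step, conf_thresh, and alpha are hypothetical.

import torch
import torch.nn.functional as F

def self_teach_step(student, teacher, clean_batch, clean_labels,
                    adverse_batch, optimizer, conf_thresh=0.9, alpha=1.0):
    """One training step: supervised loss on labeled clean-weather images plus a
    pseudo-label loss on unlabeled adverse images (illustrative only)."""
    student.train()
    teacher.eval()

    # Supervised loss on the labeled clean-weather batch.
    sup_loss = F.cross_entropy(student(clean_batch), clean_labels)

    # Teacher produces pseudo-labels for the unlabeled adverse batch.
    with torch.no_grad():
        probs = F.softmax(teacher(adverse_batch), dim=1)
        conf, pseudo = probs.max(dim=1)
        mask = conf > conf_thresh          # keep only confident predictions

    unsup_loss = torch.tensor(0.0, device=clean_batch.device)
    if mask.any():
        unsup_loss = F.cross_entropy(student(adverse_batch[mask]), pseudo[mask])

    loss = sup_loss + alpha * unsup_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()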
