SAE Intelligent and Connected Vehicles Symposium

Edge Enhanced Traffic Scene Segmentation Algorithm with Deep Neural Network


Abstract

Image segmentation is critical in the autonomous driving field. It can reveal essential clues such as objects' shape or boundary information, and this information can in turn be used as input to other tasks, such as vehicle detection or vehicle trajectory prediction. SegNet, a deep learning based segmentation model proposed by Cambridge, has become a public baseline for scene perception tasks. However, it suffers from an accuracy deficiency in objects' marginal areas, and segmenting these areas remains very challenging for current models. To alleviate this problem, in this paper we propose an edge enhanced deep learning based model. Specifically, we first introduce a simple yet effective Artificial Interfering Mechanism (AIM), which feeds manually extracted key features into the segmentation model. We argue that this mechanism enhances the extraction of essential features and hence improves model performance. Further modifications of the model structure were also designed to improve its feature extraction ability. In addition, a Pixel Alignment Unit (PAU) is presented for pixel-level alignment. The unit is built on a Bidirectional Long Short-Term Memory (Bi-LSTM) unit and, by design, is able to reconstruct and extract pixel spatial features, which are a key clue for segmentation. Combining these methods, an integrated model is finally proposed. To evaluate the model, the CamVid dataset was adopted in the experiments. The experimental results show that our model is able to refine segmentation results in objects' marginal areas. Our contribution lies in boosting model performance through artificially interfered feature extraction phases and in adopting the Bi-LSTM structure to reconstruct and extract pixels' spatial features.
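The abstract gives no implementation details, so the PyTorch sketch below is only an illustration of the two named ideas under stated assumptions: a fixed Sobel edge map stands in for the AIM's "manually extracted key features", and a row-wise Bi-LSTM stands in for the PAU. Module names, shapes, and the residual refinement are hypothetical choices for the example, not the authors' actual design.

```python
# Minimal sketch of the abstract's two mechanisms (assumptions noted in comments).
import torch
import torch.nn as nn
import torch.nn.functional as F


class ArtificialInterferingMechanism(nn.Module):
    """AIM sketch: inject a manually extracted edge map as an extra input channel.
    Assumption: Sobel edges approximate the 'manually extracted key features'."""

    def __init__(self):
        super().__init__()
        # Fixed Sobel kernels; no learnable parameters.
        gx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])
        self.register_buffer("kx", gx.view(1, 1, 3, 3))
        self.register_buffer("ky", gx.t().contiguous().view(1, 1, 3, 3))

    def forward(self, x):  # x: (B, 3, H, W) RGB image
        gray = x.mean(dim=1, keepdim=True)                  # crude grayscale
        ex = F.conv2d(gray, self.kx, padding=1)
        ey = F.conv2d(gray, self.ky, padding=1)
        edges = torch.sqrt(ex ** 2 + ey ** 2 + 1e-6)        # edge magnitude
        return torch.cat([x, edges], dim=1)                 # (B, 4, H, W)


class PixelAlignmentUnit(nn.Module):
    """PAU sketch: a Bi-LSTM scans each row of the feature map to model
    pixel spatial context, then refines the features residually (assumption)."""

    def __init__(self, channels, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(channels, hidden, batch_first=True, bidirectional=True)
        self.proj = nn.Conv2d(2 * hidden, channels, kernel_size=1)

    def forward(self, feat):  # feat: (B, C, H, W)
        b, c, h, w = feat.shape
        rows = feat.permute(0, 2, 3, 1).reshape(b * h, w, c)   # one sequence per row
        out, _ = self.lstm(rows)                               # (B*H, W, 2*hidden)
        out = out.reshape(b, h, w, -1).permute(0, 3, 1, 2)     # (B, 2*hidden, H, W)
        return feat + self.proj(out)                           # residual refinement


if __name__ == "__main__":
    img = torch.randn(2, 3, 64, 64)
    aim = ArtificialInterferingMechanism()
    pau = PixelAlignmentUnit(channels=4)
    print(pau(aim(img)).shape)  # torch.Size([2, 4, 64, 64])
```

In this reading, the AIM output would feed the segmentation backbone (e.g., a SegNet-style encoder-decoder) and the PAU would refine its feature maps; how the paper actually wires these components together is not specified in the abstract.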