Journal of Optical Technology

Real-time DeepLabv3+ for pedestrian segmentation



Abstract

In this paper, we propose a real-time pedestrian segmentation method built on the structure of the semantic segmentation method DeepLabv3+. We design a shallow network as the backbone of DeepLabv3+, and we propose a new convolution block to fuse multilevel and multitype features. We first train our DeepLabv3+ on the Cityscapes dataset to segment objects into 19 classes, and then fine-tune it with the person and rider classes of Cityscapes and COCO as the foreground and all other classes as the background to obtain our pedestrian segmentation model. The experimental results show that our DeepLabv3+ achieves 89.0% mean intersection-over-union pedestrian segmentation accuracy on the Cityscapes validation set. Our method also reaches a speed of 33 frames per second on images with a resolution of 720 x 1280 using a GTX 1080Ti graphics processing unit. The experimental results demonstrate that our method can be applied to various scenes at high speed. (c) 2019 Optical Society of America
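The abstract reports accuracy as mean intersection-over-union (mIoU). As a minimal sketch of how this metric is computed for the binary pedestrian-vs.-background setting described above (the array values and 4x4 masks below are illustrative, not from the paper):

```python
import numpy as np

def mean_iou(pred, target, num_classes=2):
    """Mean intersection-over-union across classes.

    pred, target: integer arrays of per-pixel class indices
    (here 0 = background, 1 = pedestrian).
    """
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        if union > 0:  # skip classes absent from both masks
            ious.append(inter / union)
    return float(np.mean(ious))

# Toy 4x4 example: a predicted pedestrian mask vs. ground truth.
pred = np.array([[0, 0, 1, 1],
                 [0, 1, 1, 1],
                 [0, 0, 1, 0],
                 [0, 0, 0, 0]])
gt   = np.array([[0, 0, 1, 1],
                 [0, 0, 1, 1],
                 [0, 0, 1, 1],
                 [0, 0, 0, 0]])
print(round(mean_iou(pred, gt), 3))  # averages background and pedestrian IoU
```

Averaging the per-class IoU over both classes (rather than reporting only the pedestrian class) is what makes the figure a *mean* IoU.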
