International Conference on Systems and Informatics

An Improved YOLOV3 for Pedestrian Clothing Detection

Abstract

Pedestrian clothing detection, which aims to detect the clothing worn by pedestrians, is of great significance for pose estimation, pedestrian classification, security, and other applications. We propose an algorithm named YOLOV3-PCD (An Improved YOLOV3 for Pedestrian Clothing Detection) that identifies the categories and locations of clothes on pedestrians. Since no dataset is available for the pedestrian clothing detection task, we build our own dataset, in which most objects are large, and re-cluster the anchor boxes accordingly. In the potential application fields of pedestrian clothing detection, such as pose estimation and security scenarios, the targets to be detected are usually large, so we remove the scale used to detect small objects in the original YOLOV3. In addition, we jointly handle detection at the remaining medium and large scales by introducing a down-sampling path parallel to the original YOLOV3's up-sampling path. These two improvements increase the propagation and reuse of features and improve network performance on large objects. Finally, to support deployment on embedded devices in different scenarios, we prune the network to make it fast and small. Experiments show that the mAP of the proposed model reaches 91.99%, which is 2% higher than the original YOLOV3 model, and the number of parameters after pruning is reduced to 28.74% of the original YOLOV3 model.
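The abstract states that the anchor boxes are re-clustered for the new, mostly large-object dataset but gives no procedural details. YOLOv3 conventionally derives its anchors by k-means clustering of ground-truth box dimensions with a 1 - IoU distance, and the sketch below illustrates that standard procedure as an assumption of what the authors may have done; the function names, the use of the cluster mean, and the choice of six anchors (two detection scales remain once the small-object scale is removed) are illustrative only.

import numpy as np

def iou_wh(boxes, anchors):
    # IoU between boxes and anchors, both treated as centered at the origin.
    # boxes: (N, 2) ground-truth widths/heights; anchors: (K, 2); returns (N, K).
    inter = (np.minimum(boxes[:, None, 0], anchors[None, :, 0])
             * np.minimum(boxes[:, None, 1], anchors[None, :, 1]))
    union = ((boxes[:, 0] * boxes[:, 1])[:, None]
             + (anchors[:, 0] * anchors[:, 1])[None, :] - inter)
    return inter / union

def kmeans_anchors(boxes, k=6, iters=100, seed=0):
    # Standard YOLO-style anchor clustering with distance d = 1 - IoU.
    rng = np.random.default_rng(seed)
    anchors = boxes[rng.choice(len(boxes), k, replace=False)]
    for _ in range(iters):
        assign = np.argmax(iou_wh(boxes, anchors), axis=1)  # nearest anchor per box
        new_anchors = np.array([boxes[assign == j].mean(axis=0) if np.any(assign == j)
                                else anchors[j] for j in range(k)])
        if np.allclose(new_anchors, anchors):
            break
        anchors = new_anchors
    return anchors[np.argsort(anchors.prod(axis=1))]  # sort by area, small to large

# Hypothetical usage: boxes holds the widths/heights (in pixels) of the
# ground-truth clothing boxes from the authors' dataset.
# anchors = kmeans_anchors(boxes, k=6)

With only the medium and large detection scales retained, six anchors (three per scale) would be the natural counterpart of the nine used by the original YOLOv3; this count is an inference from the architecture description, not a figure reported in the abstract.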
