Journal: IEEE Transactions on Circuits and Systems for Video Technology

Deeply Associative Two-Stage Representations Learning Based on Labels Interval Extension Loss and Group Loss for Person Re-Identification

Abstract

Person re-identification (ReID) aims to match people across non-overlapping camera views in a public space. It is usually cast as an image retrieval problem in which query images are matched against pedestrian images in a gallery, and it is challenging because of difficulties such as pose misalignment, occlusion, and similar appearance among detected people. Existing research on ReID mainly focuses on two major problems: representation learning and metric learning. In this paper, we target learning discriminative representations and make two contributions. (i) We propose a novel architecture named Deeply Associative Two-stage Representations Learning (DATRL). It consists of a global re-initialization stage and a fully-perceptual classification stage, employing two identical CNNs associatively at the same time. In the global stage, we take the backbone of one deep CNN, e.g., the front layers of ResNet-50, as a normal re-initialization subnetwork; meanwhile, we apply our proposed 3D-transpose technique to the backbone of the other CNN to form the 3D-transpose re-initialization subnetwork. The fully-perceptual stage consists of the remaining layers of the original CNNs. In this stage, we take into account both the global representations learned at multiple hierarchies and the local representations uniformly partitioned on the highest conv-layer, and optimize them separately for classification. (ii) We introduce a new joint loss function in which our proposed Labels Interval Extension loss (LIEL) and Group loss (GL) are combined to improve gradient descent and to increase the distances between image features of different identities. Applying DATRL, LIEL, and GL to ReID yields DATRL-ReID. Experimental results on four datasets, CUHK03, Market-1501, DukeMTMC-reID, and MSMT17-V2, demonstrate that DATRL-ReID achieves excellent recognition accuracy and is superior to state-of-the-art methods.
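The fully-perceptual stage described above pools global representations and uniformly partitions the highest conv-layer feature map into local parts that are classified separately. The abstract gives no implementation details, so the PyTorch sketch below only illustrates that partitioning idea; the ResNet-50 backbone split, the six horizontal stripes, the feature dimension, and the per-part classifier heads are all assumptions, not the paper's exact configuration:

```python
import torch
import torch.nn as nn
from torchvision.models import resnet50

class FullyPerceptualHead(nn.Module):
    """Illustrative sketch only: global pooling plus a uniform horizontal
    partition of the highest conv feature map, with each part classified
    by its own head (stripe count and dimensions are assumptions)."""
    def __init__(self, num_classes, num_parts=6, feat_dim=2048):
        super().__init__()
        self.global_pool = nn.AdaptiveAvgPool2d(1)            # (B, C, 1, 1)
        self.part_pool = nn.AdaptiveAvgPool2d((num_parts, 1)) # (B, C, P, 1)
        self.global_fc = nn.Linear(feat_dim, num_classes)
        self.part_fcs = nn.ModuleList(
            [nn.Linear(feat_dim, num_classes) for _ in range(num_parts)])

    def forward(self, fmap):                      # fmap: (B, 2048, H, W)
        g = self.global_pool(fmap).flatten(1)     # global feature (B, 2048)
        parts = self.part_pool(fmap).squeeze(-1)  # local stripes (B, 2048, P)
        global_logits = self.global_fc(g)
        part_logits = [fc(parts[:, :, i])
                       for i, fc in enumerate(self.part_fcs)]
        return global_logits, part_logits

# Conv backbone: ResNet-50 with its avgpool and fc layers removed.
backbone = nn.Sequential(*list(resnet50().children())[:-2])
head = FullyPerceptualHead(num_classes=751)  # e.g. Market-1501 identities
x = torch.randn(4, 3, 256, 128)              # a typical ReID input size
g_logits, p_logits = head(backbone(x))
```

Pooling each stripe independently before classification lets the local branches specialize on different body regions, which is the usual motivation for uniform partitioning in ReID and matches the abstract's description of optimizing global and local representations separately.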
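The abstract names the Labels Interval Extension loss (LIEL) and Group loss (GL) but does not define them. Purely as a hedged illustration of such a joint objective, the sketch below combines a label-smoothing cross-entropy (a stand-in for a term that widens label intervals) with a batch-level term that pushes apart per-identity feature centroids (a stand-in for a term that increases distances between features of different identities). The stand-in formulas, the margin, and the weight w_group are all assumptions:

```python
import torch
import torch.nn.functional as F

def joint_loss(logits, feats, labels, eps=0.1, margin=1.0, w_group=0.5):
    """Hedged stand-in for a LIEL + GL style joint objective.
    All terms and weights here are illustrative assumptions."""
    # Classification term with softened targets (label smoothing as a
    # stand-in for a label-interval-extension term).
    n_cls = logits.size(1)
    log_p = F.log_softmax(logits, dim=1)
    smooth = torch.full_like(log_p, eps / (n_cls - 1))
    smooth.scatter_(1, labels.unsqueeze(1), 1.0 - eps)
    cls_loss = -(smooth * log_p).sum(dim=1).mean()

    # Group term: push apart per-identity mean features in the batch
    # (stand-in for a group loss; assumes >= 2 identities per batch).
    ids = labels.unique()
    centroids = torch.stack([feats[labels == i].mean(dim=0) for i in ids])
    dists = torch.cdist(centroids, centroids)  # pairwise centroid distances
    mask = ~torch.eye(len(ids), dtype=torch.bool, device=dists.device)
    group_loss = F.relu(margin - dists[mask]).mean()

    return cls_loss + w_group * group_loss

# Usage on a toy batch of 8 images from 4 identities:
logits = torch.randn(8, 751)
feats = torch.randn(8, 2048)
labels = torch.tensor([0, 0, 1, 1, 2, 2, 3, 3])
loss = joint_loss(logits, feats, labels)
```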
