Chinese Automation Congress

Cross-Modality Multi-Task Deep Metric Learning for Sketch Face Recognition



Abstract

Sketch face recognition matches face sketch images to photo images; its main challenge lies in cross-modality differences. To address this challenge, a variety of methods have been proposed to bridge the gap between modalities. In particular, common-subspace-based methods have achieved strong performance on this task: they make data from different modalities comparable by mapping them into a new, shared subspace. However, these methods do not adequately handle the non-linear distribution of samples across modalities. In this paper, we propose a cross-modality multi-task deep metric learning (CMTDML) approach to address this problem. First, we design a two-channel neural network to extract non-linear features from the photo and sketch modalities; its parameter-sharing design reduces the feature differences between modalities. Second, we develop a loss function that constrains the features in the common space, promoting intra-class compactness and inter-class separability. In extensive experiments and comparisons with state-of-the-art methods, the CMTDML approach achieves marked improvements in most cases.
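To make the described approach more concrete, the following is a minimal PyTorch sketch of the two ideas in the abstract: a two-channel encoder whose photo and sketch branches share parameters, and a metric-style loss that pulls same-identity photo/sketch features together while pushing different identities apart. The architecture, the `SharedEncoder` and `cross_modal_metric_loss` names, the embedding size, and the margin-based loss form are all illustrative assumptions, not the paper's exact CMTDML formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SharedEncoder(nn.Module):
    """Hypothetical two-channel encoder: the photo and sketch branches share
    all weights, which is one way to reduce cross-modality feature differences."""

    def __init__(self, embed_dim=128):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, embed_dim),
        )

    def forward(self, photo, sketch):
        # Both modalities pass through the same (shared-parameter) backbone.
        return self.backbone(photo), self.backbone(sketch)


def cross_modal_metric_loss(f_photo, f_sketch, labels, margin=1.0):
    """Illustrative loss: pull same-identity photo/sketch features together
    (intra-class compactness) and push different identities apart by a margin
    (inter-class separability). Not the paper's exact CMTDML objective."""
    f_photo = F.normalize(f_photo, dim=1)
    f_sketch = F.normalize(f_sketch, dim=1)
    dist = torch.cdist(f_photo, f_sketch)  # cross-modality pairwise distances
    same = labels.unsqueeze(1).eq(labels.unsqueeze(0)).float()
    pos = (dist * same).sum() / same.sum().clamp(min=1)
    neg = (F.relu(margin - dist) * (1 - same)).sum() / (1 - same).sum().clamp(min=1)
    return pos + neg


# Usage sketch with random tensors standing in for photo/sketch batches.
model = SharedEncoder()
photos = torch.randn(8, 1, 64, 64)
sketches = torch.randn(8, 1, 64, 64)
labels = torch.randint(0, 4, (8,))
loss = cross_modal_metric_loss(*model(photos, sketches), labels)
loss.backward()
```

Sharing the backbone weights across both inputs is one simple way to realize the abstract's claim that parameter sharing reduces feature differences between modalities; the paper's actual backbone and loss may differ.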
