International Conference of the Biometrics Special Interest Group

Compact Models for Periocular Verification Through Knowledge Distillation

Abstract

Despite the wide use of deep neural networks for periocular verification, achieving small deep learning models with high performance that can be deployed on devices with low computational power remains a challenge. To reduce computation cost, we present in this paper a lightweight deep learning model based on the DenseNet architecture, DenseNet-20, with only 1.1 million trainable parameters. Further, we present an approach to enhance the verification performance of DenseNet-20 via knowledge distillation. In experiments on the VISPI dataset, captured with two different smartphones (iPhone and Nokia), we show that introducing knowledge distillation into the DenseNet-20 training phase outperforms training the same model without knowledge distillation: the Equal Error Rate (EER) is reduced from 8.36% to 4.56% on iPhone data, from 5.33% to 4.64% on Nokia data, and from 20.98% to 15.54% on cross-smartphone data.
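
The training recipe described in the abstract follows the general knowledge-distillation pattern: a larger teacher network guides the compact DenseNet-20 student through its softened output distribution. The sketch below is a minimal PyTorch illustration of the classic soft-target distillation loss of Hinton et al.; the framework choice, the temperature (4.0), and the weighting (alpha=0.5) are illustrative assumptions, not values reported by the paper.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=4.0, alpha=0.5):
    # Hard-label term: ordinary cross-entropy against ground truth.
    hard = F.cross_entropy(student_logits, labels)
    # Soft-label term: KL divergence between the temperature-softened
    # student and teacher distributions; the T**2 factor keeps the
    # gradient scale comparable across temperatures.
    soft = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=1),
        F.softmax(teacher_logits / temperature, dim=1),
        reduction="batchmean",
    ) * temperature ** 2
    return alpha * hard + (1.0 - alpha) * soft

# Toy check with random logits: a batch of 8 samples, 10 classes.
student_logits = torch.randn(8, 10, requires_grad=True)
teacher_logits = torch.randn(8, 10)   # teacher outputs carry no gradient
labels = torch.randint(0, 10, (8,))
loss = distillation_loss(student_logits, teacher_logits, labels)
loss.backward()                       # gradients flow to the student only
```

In a real setup the teacher logits would come from a frozen, pretrained verification network rather than random tensors, and only the student's parameters would be updated by the optimizer.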