
Lip Reading-Based User Authentication Through Acoustic Sensing on Smartphones


Abstract

To prevent leakage of users' private information, more and more mobile devices employ biometric authentication approaches, such as fingerprint, face, and voiceprint authentication, to strengthen privacy protection. However, these approaches are vulnerable to replay attacks. Although state-of-the-art solutions utilize liveness verification to combat such attacks, existing approaches are sensitive to ambient conditions, such as lighting and surrounding audible noise. Toward this end, we explore liveness verification for user authentication that leverages users' mouth movements, which are robust to noisy environments. In this paper, we propose a lip reading-based user authentication system, LipPass, which extracts unique behavioral characteristics of users' speaking mouths through acoustic sensing on smartphones for user authentication. We first investigate Doppler profiles of acoustic signals caused by users' speaking mouths and find that different individuals exhibit unique mouth movement patterns. To characterize the mouth movements, we propose a deep learning-based method to extract effective features from Doppler profiles and employ the softmax function, support vector machines, and support vector domain description to construct a multi-class identifier, binary classifiers, and spoofer detectors for mouth-state identification, user identification, and spoofer detection, respectively. Afterward, we develop a balanced binary tree-based authentication approach that accurately identifies each registered user by leveraging these binary classifiers and spoofer detectors. Through extensive experiments involving 48 volunteers in four real environments, LipPass achieves 90.2% accuracy in user identification and 93.1% accuracy in spoofer detection.
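The abstract does not give implementation details, so the sketch below only illustrates the general idea it describes: estimating a Doppler profile of reflected acoustic energy around an inaudible pilot tone emitted by the phone's speaker, then training a per-user binary classifier (an SVM, as in the paper's user-identification stage) on those profiles. The sampling rate, pilot-tone frequency, band width, window sizes, and the doppler_profile helper are illustrative assumptions, not the authors' design.

```python
import numpy as np
from scipy.signal import stft
from sklearn.svm import SVC

FS = 48_000          # assumed microphone sampling rate (Hz)
PILOT_HZ = 20_000    # assumed near-ultrasonic pilot tone played by the speaker (Hz)
BAND_HZ = 500        # width of the Doppler band examined around the pilot tone (Hz)

def doppler_profile(audio: np.ndarray) -> np.ndarray:
    """Time-averaged magnitude spectrum in a narrow band around the pilot
    tone; mouth movements shift reflected energy within this band."""
    f, _, Z = stft(audio, fs=FS, nperseg=2048, noverlap=1536)
    band = (f >= PILOT_HZ - BAND_HZ) & (f <= PILOT_HZ + BAND_HZ)
    mag = np.abs(Z[band, :])                  # |STFT| restricted to the Doppler band
    profile = mag.mean(axis=1)                # average over time frames
    return profile / (np.linalg.norm(profile) + 1e-12)

# Toy usage with placeholder data: one-vs-rest classification of a single
# registered user (label 1) against other speakers (label 0).
recordings = [np.random.randn(FS) for _ in range(20)]   # placeholder 1-second clips
labels = np.array([1] * 10 + [0] * 10)
X = np.stack([doppler_profile(clip) for clip in recordings])
clf = SVC(kernel="rbf").fit(X, labels)
print(clf.predict(X[:2]))
```

In a real deployment the recordings would come from enrollment sessions, and one such classifier per registered user would be arranged in the balanced binary tree the abstract mentions; those details are outside the scope of this sketch.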

Bibliographic Details

  • Source
    IEEE/ACM Transactions on Networking | 2019, Issue 1 | pp. 447-460 | 14 pages
  • Author Affiliations

    Shanghai Jiao Tong Univ, Dept Comp Sci & Engn, Shanghai 200240, Peoples R China;

    Shanghai Jiao Tong Univ, Dept Comp Sci & Engn, Shanghai 200240, Peoples R China;

    Rutgers State Univ, Dept Elect & Comp Engn, New Brunswick, NJ 08854 USA;

    Indiana Univ Purdue Univ, Dept Comp Informat & Technol, Indianapolis, IN 46202 USA;

    Shanghai Jiao Tong Univ, Dept Comp Sci & Engn, Shanghai 200240, Peoples R China;

    Shanghai Jiao Tong Univ, Dept Comp Sci & Engn, Shanghai 200240, Peoples R China;

    Shanghai Jiao Tong Univ, Dept Comp Sci & Engn, Shanghai 200240, Peoples R China;

  • Indexing Information
  • Original Format: PDF
  • Language: eng
  • CLC Classification
  • Keywords

    Lip reading; user authentication; acoustic signals;


