Design and evaluation of the Human-Biometric Sensor Interaction Method.



Abstract

This research investigates the development and testing of the Human-Biometric Sensor Interaction (HBSI) Evaluation Method, which uses ergonomics, usability, and image quality criteria as explanatory variables of overall biometric system performance to evaluate swipe-based fingerprint recognition devices. The HBSI method was proposed in response to questions about the thoroughness of the traditional testing and performance metrics used in standardized evaluations, such as FTA, FTE, FAR, and FRR: whether those metrics are sufficient to fully test and understand biometric systems, and whether important data were going uncollected.

The Design and Evaluation of the Human-Biometric Sensor Interaction Method had four objectives: (a) analyze the literature to determine what influences the interaction of humans and biometric devices, (b) develop a conceptual model based on previous research, (c) design two alternative swipe fingerprint sensors, and (d) compare how people interact with the commercial and the designed swipe fingerprint sensors to examine whether changing the form factor improves the usability of the device in terms of the proposed HBSI evaluation method.

Data were collected from 85 individuals over 3 visits, accounting for 33,394 interactions with the 4 sensors used. The HBSI Evaluation Method provided additional detail about how users interact with the devices, collecting data on image quality, number of detected minutiae, fingerprint image size, fingerprint image contrast, user satisfaction, task time, task completeness, user effort, and number of assists, in addition to the traditional biometric testing and reporting metrics of acquisition failures (FTA), enrollment failures (FTE), and matching performance (FAR and FRR).

Results from the HBSI Evaluation Method revealed that traditional biometric evaluations, which focus on system-reported metrics, do not provide sufficient reporting detail. For example, matching performance for the right and left index fingers showed an FRR under 1% for all sensors at the operating point of 0.1% FAR: UPEK (0.24%), PUSH (0.98%), PULL (0.36%), and large area (0.34%). However, the FTA rate was 11.28%, accounting for 3,768 presentations. From this research, two metrics previously unaccounted for and subsumed in the traditional FTA rate, Failure to Present (FTP) and False Failure to Present (FFTP), were created to better understand human interaction with biometric sensors and to attribute errors accordingly. The FTP rate accounted for 1,187 of the 3,768 (31.5%) interactions traditionally labeled as FTAs. The FFTP rate was much smaller, at 0.35%, but can give researchers further insight to help explain abnormal behavior in matching rates and in ROC and DET curves. In addition, the traditional metrics of image quality and number of detected minutiae did not reveal a statistical difference across the sensors, whereas the HBSI metrics of fingerprint image size and contrast did, indicating that the design of the PUSH sensor produced images with less gray-level variation, while the PULL sensor produced images with greater pixel consistency during some of the data collection visits. The level of learning or habituation was also documented in this research through three metrics: task completion, Maximum User Effort (MUE), and the number of assists provided. All three metrics showed the PUSH sensor with the lowest rates, but it improved the most over the visits, reflecting participants learning how to use a "push"-based swipe sensor as opposed to the "pull" swipe type.

Overall, the HBSI Evaluation Method provides a foundation for future biometric evaluations because it links system feedback from erroneous interactions to the human-sensor interaction that caused the failure. This linkage will enable system developers and researchers to re-examine the data and determine whether errors result from the algorithm or from human interaction, and whether they can be resolved with revised training techniques, design modifications, or other adjustments in the future.
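As a rough cross-check of how the quoted rates relate to one another, the minimal Python sketch below recomputes the FTA and FTP figures from the counts reported in the abstract. It is not part of the dissertation: the constant names are illustrative, and the assumption that the 0.35% FFTP rate is taken over all interactions (used to back-calculate an implied FFTP count) is an assumption, not a stated result.

    # Minimal sketch (not from the dissertation) recomputing the abstract's rates.
    # Assumes the FTP share is taken over the FTA presentations and the FFTP rate
    # over all interactions; the FFTP count is back-calculated and approximate.

    TOTAL_INTERACTIONS = 33_394  # 85 participants, 3 visits, 4 sensors
    FTA_PRESENTATIONS = 3_768    # presentations traditionally labeled Failure to Acquire
    FTP_PRESENTATIONS = 1_187    # Failure to Present errors found within the FTA pool

    def rate(count: int, denominator: int) -> float:
        """Return a percentage rounded to two decimal places."""
        return round(100.0 * count / denominator, 2)

    print("FTA rate:", rate(FTA_PRESENTATIONS, TOTAL_INTERACTIONS), "%")         # ~11.28
    print("FTP share of FTA:", rate(FTP_PRESENTATIONS, FTA_PRESENTATIONS), "%")   # ~31.5
    implied_fftp = round(0.0035 * TOTAL_INTERACTIONS)  # if FFTP is a rate over all interactions
    print("Implied FFTP presentations:", implied_fftp)                            # ~117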
