IFIP TC 13 International Conference on Human-Computer Interaction

Blind FLM: An Enhanced Keystroke-Level Model for Visually Impaired Smartphone Interaction



Abstract

The Keystroke-Level Model (KLM) is a predictive model used to numerically estimate how long an expert user takes to accomplish a task. KLM has been successfully used to model conventional interactions; however, it does not adequately capture smartphone touch interactions or accessible interfaces (e.g., screen readers). The Fingerstroke-Level Model (FLM), on the other hand, extends KLM to describe and assess mobile game applications, which makes it a candidate model for predicting smartphone touch interactions. This paper aims to further extend FLM for visually impaired smartphone users. An initial user study identified basic elements of blind users' interactions, which were used to extend FLM; the new model is called "Blind FLM". An additional user study was then conducted to determine the applicability of the new model for describing blind users' touch interactions with a smartphone, and to compute its accuracy. Evaluation showed that Blind FLM can predict blind users' performance with an average error of 2.36%.
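As background, KLM-family models predict expert task time by summing a fixed unit time for each primitive operator in the action sequence. The sketch below illustrates that mechanism only; the operator names and unit times are hypothetical placeholders, not the calibrated Blind FLM values from the paper.

```python
# KLM-style prediction sketch: total task time is the sum of unit times
# for each primitive operator in the action sequence.
# Operator names and times are illustrative assumptions, NOT the
# calibrated values reported in the Blind FLM study.

OPERATOR_TIMES = {
    "tap": 0.2,     # single-finger tap (hypothetical unit time, seconds)
    "swipe": 0.3,   # directional flick to move the screen-reader focus
    "listen": 1.5,  # wait for screen-reader audio feedback
    "mental": 1.35, # mental preparation (the classic KLM "M" operator)
}

def predict_task_time(sequence):
    """Return the predicted expert completion time for a list of operators."""
    return sum(OPERATOR_TIMES[op] for op in sequence)

# Example: mentally prepare, swipe to an item, listen, then activate it.
task = ["mental", "swipe", "listen", "tap"]
print(round(predict_task_time(task), 2))  # 3.35
```

A model such as Blind FLM would replace these placeholder operators and times with ones observed in studies of blind users' screen-reader interactions.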
