International Journal of Human-Computer Interaction

Beyond Features for Recognition: Human-Readable Measures to Understand Users' Whole-Body Gesture Performance

Abstract

Understanding users' whole-body gesture performance quantitatively requires numerical gesture descriptors or features. However, the vast majority of gesture features proposed in the literature were specifically designed for machines to recognize gestures accurately, which makes those features exclusively machine-readable. The complexity of such features makes it difficult for user interface designers, who are not experts in machine learning, to understand and use them effectively (see, for instance, the Hu moment statistics or the Histogram of Gradients features), which considerably reduces designers' options for describing users' whole-body gesture performance with legible and easily interpretable numerical measures. To address this problem, we introduce in this work a set of 17 measures that user interface practitioners can readily employ to characterize users' whole-body gesture performance with human-readable concepts, such as area, volume, or quantity. Our measures describe (1) spatial characteristics of body movement, (2) kinematic performance, and (3) body posture appearance for whole-body gestures. We evaluate our measures on a public dataset composed of 5,654 gestures collected from 30 participants, for which we report several gesture findings, e.g., participants performed body gestures within an average volume of space of 1.0 m³, with an average amount of hand movement of 14.6 m and a maximum body posture diffusion of 5.8 m. We show the relationship between our gesture measures and the recognition rates delivered by a template-based Nearest-Neighbor whole-body gesture classifier implementing the Dynamic Time Warping dissimilarity function. We also release BOGArT, the Body Gesture Analysis Toolkit, which automatically computes our measures. This work will empower researchers and practitioners with new numerical tools to reach a better understanding of how users perform whole-body gestures and, thus, to use this knowledge to inform improved designs of whole-body gesture user interfaces.
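
To make these ideas concrete, the Python sketch below illustrates how a few human-readable measures of this kind could be computed from recorded joint trajectories, together with a minimal template-based Nearest-Neighbor classifier using the Dynamic Time Warping dissimilarity function mentioned above. The gesture representation (a NumPy array of shape (T, J, 3) holding T frames of J joint positions in meters), the joint indexing, and the specific measure definitions (hand path length, bounding-box volume, posture diffusion) are assumptions made for illustration only; they are not the paper's or BOGArT's actual definitions.

# Illustrative sketch (not the paper's exact definitions): a few human-readable
# whole-body gesture measures and a DTW-based nearest-neighbor classifier.
# A gesture is assumed to be a NumPy array of shape (T, J, 3): T frames,
# J tracked joints, 3-D positions in meters.
import numpy as np

def hand_path_length(gesture, hand_joint_indices):
    """Total distance (m) traveled by the hand joints, summed over frames."""
    hands = gesture[:, hand_joint_indices, :]                 # (T, H, 3)
    steps = np.linalg.norm(np.diff(hands, axis=0), axis=-1)   # (T-1, H)
    return float(steps.sum())

def movement_volume(gesture):
    """Volume (m^3) of the axis-aligned bounding box swept by all joints."""
    points = gesture.reshape(-1, 3)
    extents = points.max(axis=0) - points.min(axis=0)
    return float(np.prod(extents))

def posture_diffusion(gesture):
    """Spread of body postures: largest distance of any posture from the mean
    posture (one plausible reading of 'diffusion', assumed for illustration)."""
    flat = gesture.reshape(gesture.shape[0], -1)              # (T, J*3)
    mean_posture = flat.mean(axis=0)
    return float(np.linalg.norm(flat - mean_posture, axis=1).max())

def dtw_distance(a, b):
    """Dynamic Time Warping dissimilarity between two gestures of shape (T, J, 3)."""
    a = a.reshape(a.shape[0], -1)
    b = b.reshape(b.shape[0], -1)
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return float(cost[n, m])

def classify_nn(gesture, templates):
    """Template-based 1-NN classification: templates is a list of (label, gesture)
    pairs; return the label of the template with the smallest DTW distance."""
    return min(templates, key=lambda item: dtw_distance(gesture, item[1]))[0]

Given a list of labeled template gestures, classify_nn(gesture, [(label, template), ...]) returns the label of the closest template under DTW, which is one simple way to relate such measures to recognition rates in the spirit of the analysis described above.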