ACM Transactions on Computer-Human Interaction

Improv: An Input Framework for Improvising Cross-Device Interaction by Demonstration


Abstract

As computing devices become increasingly ubiquitous, it is now possible to combine the unique capabilities of different devices or Internet of Things devices to accomplish a task. However, there is currently a high technical barrier to creating cross-device interaction. This is especially challenging for end users who have limited technical expertise; such users would greatly benefit from custom cross-device interaction that best suits their needs. In this article, we present Improv, a cross-device input framework that allows a user to easily leverage the capabilities of additional devices to create new input methods for an existing, unmodified application, e.g., creating custom gestures on a smartphone to control a desktop presentation application. Instead of requiring developers to anticipate and program these cross-device behaviors in advance, Improv enables end users to improvise them on the fly by simple demonstration, for their particular needs and the devices at hand. We showcase a range of scenarios in which Improv is used to create a diverse set of useful cross-device input techniques. Our study with 14 participants indicated that, on average, it took a participant 10 seconds to create a cross-device input technique. In addition, Improv achieved 93.7% accuracy in interpreting a user's demonstration of a target UI behavior by examining the raw input events from a single example.
