Shared Input Multimodal Mobile Interfaces: Interaction Modality Effects on Menu Selection in Single-Task and Dual-Task Environments

Abstract

Audio and visual modalities are two common output channels in the user interfaces of today's mobile devices. However, these interfaces are typically centered on the visual modality as the primary output channel, with audio serving a secondary role. This paper argues for an increased need for shared-input multimodal user interfaces on mobile devices. A shared-input multimodal interface can be operated independently through a specific output modality, allowing users to choose their preferred method of interaction in different scenarios. We evaluate the value of a shared-input multimodal menu system both in a single-task desktop setting and in a dynamic dual-task setting, in which the user was required to interact with the menu system while driving a simulated vehicle. Results indicate that users located a target menu item faster when visual feedback was provided in the single-task desktop setting, but in the dual-task driving setting, visual output was a significant source of visual distraction that interfered with driving performance. In contrast, auditory output mitigated some of the risk associated with menu selection while driving. A shared-input multimodal interface allows users to take appropriate advantage of multiple feedback modalities, providing a better overall experience.
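To make the shared-input idea concrete, the following is a minimal, hypothetical sketch in Kotlin (not the authors' earPod implementation): a single menu controller interprets the same scroll/select input regardless of which output modality, visual or auditory, currently renders feedback, so the same gestures work on-screen at a desk and eyes-free while driving.

```kotlin
// Hypothetical sketch of a shared-input menu: one input handler drives
// interchangeable output modalities.

interface OutputModality {
    fun presentItem(label: String)          // announce or highlight the focused item
    fun confirmSelection(label: String)     // feedback once an item is chosen
}

class VisualMenu : OutputModality {
    override fun presentItem(label: String) = println("[screen] highlight: $label")
    override fun confirmSelection(label: String) = println("[screen] selected: $label")
}

class AuditoryMenu : OutputModality {
    override fun presentItem(label: String) = println("[speech] \"$label\"")
    override fun confirmSelection(label: String) = println("[speech] \"$label selected\"")
}

// Shared input layer: scroll/select events are interpreted identically;
// only the feedback channel changes.
class SharedInputMenu(private val items: List<String>, var output: OutputModality) {
    private var index = 0

    fun scroll(delta: Int) {
        index = ((index + delta) % items.size + items.size) % items.size
        output.presentItem(items[index])
    }

    fun select(): String {
        output.confirmSelection(items[index])
        return items[index]
    }
}

fun main() {
    val menu = SharedInputMenu(listOf("Artists", "Albums", "Songs"), VisualMenu())
    menu.scroll(+1)                 // single-task desktop: visual feedback
    menu.output = AuditoryMenu()    // dual-task driving: switch to audio, same gestures
    menu.scroll(+1)
    menu.select()
}
```

Decoupling input interpretation from output rendering is what lets the same selection gestures carry over between the desktop and driving settings described in the abstract.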

Bibliographic Information

  • Source
    Interacting with Computers, 2013, Issue 5, pp. 386-403 (18 pages)
  • Author Affiliations

    Department of Computer Science, National University of Singapore, 13 Computing Drive, Computing 2, #01-04, Singapore 117417;

    UCL Interaction Centre, University College London, Gower Street, London WC1E 6BT, UK;

    Knowledge Media Design Institute (KMDI), University of Toronto, 27 King's College Circle, Toronto, Ont., Canada M5S 1A1;

    Drexel University, 3141 Chestnut Street, Philadelphia, PA 19104, USA;

    National University of Singapore, 13 Computing Drive, Computing 2, #01-04, Singapore 117417;

  • Indexing Information
  • Format: PDF
  • Language: English
  • CLC Classification
  • Keywords

    earPod; eyes-free; shared-input multimodal interfaces;

  • Date Added: 2022-08-18 02:47:53
