Ground/Air Multisensor Interoperability, Integration, and Networking for Persistent ISR VIII

The image interpretation workstation of the future: lessons learned



Abstract

In recent years, professionally used workstations have become increasingly complex, and multi-monitor systems are more and more common. Novel interaction techniques such as gesture recognition have been developed but are used mostly for entertainment and gaming purposes. These human-computer interfaces are not yet widely used in professional environments, where they could greatly improve the user experience. To approach this problem, we combined existing tools in our image-interpretation workstation of the future, a multi-monitor workplace comprising four screens. Each screen is dedicated to a specific task in the image interpretation process: a geo-information system to geo-reference the images and provide a spatial reference for the user, an interactive recognition support tool, an annotation tool, and a reporting tool. To further support the complex task of image interpretation, self-developed interaction systems for head-pose estimation and hand tracking were used in addition to more common technologies such as touchscreens, face identification, and speech recognition. A set of experiments was conducted to evaluate the usability of the different interaction systems. Two typical, extensive image interpretation tasks were devised and approved by military personnel. These tasks were then performed both on a current image interpretation workstation setup using only keyboard and mouse and on our image-interpretation workstation of the future. To take a more detailed look at the usefulness of the interaction techniques in a multi-monitor setup, the hand tracking, head-pose estimation, and face recognition were further evaluated using tests inspired by everyday tasks. The results of the evaluation and the discussion are presented in this paper.
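The abstract describes routing the interpreter's attention across four dedicated task screens using head-pose estimation. As a minimal sketch of that idea, the following function maps an estimated head yaw angle to one of the four screens; the screen layout, angle span, and all names are illustrative assumptions, not the authors' actual system.

```python
# Hypothetical sketch: pick the task screen a user is facing from an
# estimated head yaw angle. Screen order, field-of-view span, and names
# are assumptions for illustration only.

SCREENS = ["geo-information", "recognition-support", "annotation", "reporting"]

def screen_from_yaw(yaw_deg: float, fov_deg: float = 120.0) -> str:
    """Map head yaw (degrees, 0 = straight ahead, negative = left)
    to the screen most likely being looked at.

    Assumes the four screens sit side by side and together span
    `fov_deg` degrees of the user's horizontal field of view.
    """
    half = fov_deg / 2.0
    # Clamp the angle to the span covered by the monitors.
    yaw = max(-half, min(half, yaw_deg))
    # Normalize to [0, 1] across the span, then pick one of four bins.
    frac = (yaw + half) / fov_deg
    index = min(int(frac * len(SCREENS)), len(SCREENS) - 1)
    return SCREENS[index]
```

In a real system the yaw would come from a head-pose estimator and would be smoothed over time before switching focus, so that brief glances do not move the active screen.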

