ICCV 2005 Workshop on Computer Vision in Human-Computer Interaction (HCI); 21 October 2005; Beijing, China

Tracking Body Parts of Multiple People for Multi-person Multimodal Interface

Abstract

Although large displays could allow several users to work together and to move freely in a room, their associated interfaces are limited to contact devices that must generally be shared. This paper describes a novel interface called SHIVA (Several-Humans Interface with Vision and Audio) that allows several users to interact remotely with a very large display using both speech and gesture. The head and both hands of two users are tracked in real time by a stereo-vision-based system. From the body-part positions, the direction pointed by each user is computed, and selection gestures made with the second hand are recognized. The pointing gesture is fused with the n-best results from speech recognition, taking the application context into account. The system is tested on a chess game in which two users play on a very large display.
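The abstract gives no implementation details; the following is a minimal sketch, in Python with NumPy, of the two steps it names: intersecting a head-to-hand pointing ray with the display plane, and rescoring the speech recognizer's n-best hypotheses by how far their referenced target lies from the pointed location. All function names, the Gaussian distance weighting, and the chess-square coordinates are illustrative assumptions, not the paper's actual method.

```python
import numpy as np

def pointing_target(head, hand, display_z=0.0):
    """Intersect the head-to-hand ray with the display plane z = display_z.

    head, hand: 3-D positions (metres) from a stereo tracker (illustrative).
    Returns the (x, y) hit point on the display, or None if pointing away.
    """
    head, hand = np.asarray(head, float), np.asarray(hand, float)
    direction = hand - head
    if abs(direction[2]) < 1e-6:           # ray parallel to the display plane
        return None
    t = (display_z - head[2]) / direction[2]
    if t <= 0:                              # pointing away from the display
        return None
    hit = head + t * direction
    return hit[:2]

def fuse(hit, nbest, targets, sigma=0.3):
    """Combine speech n-best scores with distance from the pointed location.

    nbest:   list of (command, target_name, speech_score) hypotheses.
    targets: dict mapping target_name -> (x, y) position on the display.
    Returns the best-scoring (command, target_name) pair.
    """
    best, best_score = None, -np.inf
    for command, name, speech_score in nbest:
        d = np.linalg.norm(np.asarray(targets[name]) - hit)
        score = speech_score * np.exp(-0.5 * (d / sigma) ** 2)
        if score > best_score:
            best, best_score = (command, name), score
    return best

if __name__ == "__main__":
    # One user's head and pointing-hand positions in metres (made-up values).
    hit = pointing_target(head=(0.0, 1.7, 3.0), hand=(0.2, 1.4, 2.5))
    nbest = [("move", "e4", 0.6), ("move", "e5", 0.3)]   # from the recognizer
    targets = {"e4": (-0.2, 0.8), "e5": (-0.2, 1.0)}     # board squares on screen
    print(fuse(hit, nbest, targets))
```

In this sketch the application context enters only through the target dictionary (legal chess squares); a fuller system would also restrict the n-best list to commands that are valid for the current game state.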
