International Conference on Affective Computing and Intelligent Interaction (ACII 2007), 12-14 September 2007, Lisbon, Portugal

User-Centered Control of Audio and Visual Expressive Feedback by Full-Body Movements



Abstract

In this paper we describe a system that allows users to express themselves through full-body movement and gesture and to control the generation of audio-visual feedback in real time. The system analyses the user's full-body movement and gesture in real time, extracts expressive motion features, and maps their values onto real-time control of acoustic parameters for rendering a music performance. At the same time, visual feedback generated in real time is projected on a screen in front of the users as a coloured silhouette, with the colour depending on the emotion their movement communicates. Human movement analysis and visual feedback generation were done with the EyesWeb software platform, and the music performance was rendered with pDM. Evaluation tests were conducted with human participants to assess the usability of the interface and the effectiveness of the design.
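The pipeline the abstract describes (extract expressive motion features, then map their values onto acoustic performance parameters) can be sketched as below. This is a minimal illustration only: the feature names (quantity of motion, contraction index) follow common EyesWeb terminology, but the specific linear mapping, parameter ranges, and function names are hypothetical assumptions, not the mapping used in the paper.

```python
def clamp(x: float, lo: float, hi: float) -> float:
    """Restrict x to the closed interval [lo, hi]."""
    return max(lo, min(hi, x))

def map_features_to_performance(quantity_of_motion: float,
                                contraction_index: float) -> dict:
    """Map normalised motion features (0..1) to music-performance controls.

    Hypothetical mapping for illustration: more overall motion drives a
    faster tempo and a higher sound level; a more contracted posture
    lowers the sound level.
    """
    qom = clamp(quantity_of_motion, 0.0, 1.0)
    ci = clamp(contraction_index, 0.0, 1.0)
    tempo_bpm = 60.0 + 80.0 * qom               # 60 (still) .. 140 (very active)
    sound_level_db = -20.0 + 15.0 * qom - 5.0 * ci  # relative level in dB
    return {"tempo_bpm": tempo_bpm, "sound_level_db": sound_level_db}
```

In the actual system this mapping would run once per analysis frame, feeding the resulting parameter values to the pDM performance renderer, while the same features drive the colour of the projected silhouette.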
