
Presenting in Virtual Worlds: Towards an Architecture for a 3D Presenter explaining 2D-Presented Information



Abstract

Meeting and lecture room technology is a burgeoning field. Such technology can provide real-time support for physically present participants, for online remote participation, or for offline access to meetings or lectures. Capturing relevant information from meetings or lectures is necessary to provide this kind of support. Multimedia presentation of this captured information requires a lot of attention. Our previous research has looked at including in these multimedia presentations a regeneration of meeting events and interactions in virtual reality. We developed technology that translates captured meeting activities into a virtual-reality version that lets us add and manipulate information.1 In that research, our starting point was the human presenter or meeting participant. Here, it's a semiautonomous virtual presenter that performs in a virtual-reality environment (see figure 1). The presenter's audience might consist of humans, humans represented by embodied virtual agents, and autonomous agents that are visiting the virtual lecture room or have roles in it. In this article, we focus on models and associated algorithms that steer the virtual presenter's presentation animations. In our approach, we generate the presentations from a script describing the synchronization of speech, gestures, and movements. The script also has a channel devoted to presentation sheets (slides) and sheet changes, which we assume are an essential part of the presentation. This channel can also present material other than sheets, such as annotated paintings or movies.
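The abstract describes a script with synchronized channels for speech, gestures, and sheet changes but does not specify its concrete format. As an illustrative sketch only, the following Python fragment models such a multi-channel script as per-channel lists of timed events merged into one playback timeline; the names `Event` and `merge_timeline` are hypothetical, not taken from the paper.

```python
from dataclasses import dataclass, field
import heapq

@dataclass(order=True)
class Event:
    start: float                         # seconds from presentation start
    channel: str = field(compare=False)  # "speech", "gesture", or "sheet"
    content: str = field(compare=False)  # what happens on that channel

def merge_timeline(channels):
    """Merge per-channel event lists into one time-ordered stream."""
    return list(heapq.merge(*(sorted(evts) for evts in channels.values())))

# A toy script: one channel each for speech, gestures, and sheet changes.
script = {
    "speech":  [Event(0.0, "speech", "Welcome."),
                Event(4.0, "speech", "As this sheet shows ...")],
    "gesture": [Event(3.5, "gesture", "point-at-sheet")],
    "sheet":   [Event(0.0, "sheet", "show sheet 1"),
                Event(3.0, "sheet", "show sheet 2")],
}

timeline = merge_timeline(script)
```

An animation engine could then walk `timeline` in order, dispatching each event to the speech synthesizer, gesture animator, or sheet display at its start time; synchronization constraints (e.g., a pointing gesture coinciding with a sheet change) would be expressed by giving events matching start times.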
