Meeting and lecture room technology is a burgeoning field. Such technology can provide real-time support for physically present participants, for online remote participation, or for offline access to meetings or lectures. Capturing relevant information from meetings or lectures is necessary to provide this kind of support.

Presenting this captured information in multimedia form requires careful attention. Our previous research has looked at including in these multimedia presentations a regeneration of meeting events and interactions in virtual reality. We developed technology that translates captured meeting activities into a virtual-reality version that lets us add and manipulate information.1 In that research, our starting point was the human presenter or meeting participant. Here, it's a semiautonomous virtual presenter that performs in a virtual-reality environment (see figure 1). The presenter's audience might consist of humans, humans represented by embodied virtual agents, and autonomous agents that are visiting the virtual lecture room or have roles in it.

In this article, we focus on models and associated algorithms that steer the virtual presenter's presentation animations. In our approach, we generate the presentations from a script describing the synchronization of speech, gestures, and movements. The script also has a channel devoted to presentation sheets (slides) and sheet changes, which we assume are an essential part of the presentation. This channel can also present material other than sheets, such as annotated paintings or movies.
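To make the idea of a multichannel script concrete, the following Python sketch shows one possible way to represent parallel, time-aligned channels for speech, gestures, movements, and sheets. It is an illustrative assumption, not the script format we actually use; all class names, fields, and timings are hypothetical.

```python
# Illustrative sketch of a multichannel presentation script (not the actual
# script format used in our system). Channel names, fields, and timings are
# assumptions chosen only to show how the channels can be synchronized.

from dataclasses import dataclass, field

@dataclass
class Event:
    start: float   # seconds from the start of the presentation
    end: float
    content: str   # e.g., text to speak, a gesture id, or a sheet file name

@dataclass
class PresentationScript:
    speech: list[Event] = field(default_factory=list)
    gestures: list[Event] = field(default_factory=list)
    movements: list[Event] = field(default_factory=list)
    sheets: list[Event] = field(default_factory=list)  # slides or other material

    def events_at(self, t: float) -> dict[str, list[Event]]:
        """Return the events active on each channel at time t, so an
        animation engine can keep the channels synchronized."""
        channels = (("speech", self.speech), ("gestures", self.gestures),
                    ("movements", self.movements), ("sheets", self.sheets))
        return {name: [e for e in ch if e.start <= t < e.end]
                for name, ch in channels}

# Example: the presenter walks to the screen and points at a sheet while speaking.
script = PresentationScript(
    speech=[Event(0.0, 4.0, "As this sheet shows, attendance doubled.")],
    gestures=[Event(1.0, 3.0, "point_at_sheet")],
    movements=[Event(0.0, 1.0, "walk_to_screen")],
    sheets=[Event(0.0, 30.0, "attendance_overview.pdf")],
)
print(script.events_at(1.5))
```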