Parasocial Consensus Sampling: Modeling Human Nonverbal Behaviors from Multiple Perspectives.

Abstract

Virtual humans are embodied software agents designed to simulate the appearance and social behaviors of humans, typically with the goal of facilitating natural interaction between humans and computers. They play an important role in the advancement of today's immersive virtual worlds, in domains such as virtual training (Swartout et al., 2006), education (Rowe et al., 2010), and health care (Bickmore et al., 2010).

One of the key challenges in creating virtual humans is giving them human-like nonverbal behaviors. There has been extensive research on analyzing and modeling human nonverbal behaviors: some of it relies on observing and manually analyzing human behavior, while the rest applies advanced machine learning techniques to large amounts of annotated human behavior data. However, little attention has been paid to the "data" these systems learn from.

In this thesis, we propose a new methodology, Parasocial Consensus Sampling (PCS), that approaches the problem of modeling human nonverbal behaviors from the "data" perspective. It builds on research on Parasocial Interaction theory (Horton & Wohl, 1956). The basic idea of Parasocial Consensus Sampling is to have multiple independent participants experience the same social situation parasocially (i.e., act "as if" they were in a real dyadic interaction) in order to gain insight into how individuals typically behave in face-to-face interactions.

First, we validate this framework by applying it to model listener backchannel feedback and turn-taking behavior. The results demonstrate that (1) people can provide valid behavioral data in parasocial interaction, (2) PCS data generates better virtual human behaviors, and (3) PCS data can be used to learn better prediction models for virtual humans. Second, we show that the PCS framework can help tease apart the causes of the variability of human behavior in face-to-face interactions, a question that would be difficult to study with traditional approaches; moreover, PCS enables much larger-scale and more efficient data collection than traditional face-to-face interaction. Finally, we integrate the PCS-data-driven models into a virtual human system and compare it in real interactions with a state-of-the-art virtual human application, the Rapport Agent (Gratch et al., 2007). Human subjects evaluate each agent on the correctness of its behaviors, the rapport they feel during the interaction, and its overall naturalness. The results suggest that the new agent predicts the timing of backchannel feedback and end-of-turn more precisely, performs more natural behaviors, and thereby creates a much stronger feeling of rapport between users and agents.
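To make the consensus idea concrete, here is a minimal Python sketch of how parasocial responses might be aggregated; the data layout, function names, time binning, and 50% threshold are illustrative assumptions, not the implementation used in the thesis. Each participant watches the same speaker video parasocially and marks the moments where they would give backchannel feedback; binning those marks into time frames and averaging across participants yields a consensus curve whose peaks indicate prototypical feedback opportunities.

```python
import numpy as np

def consensus_curve(responses, n_frames, window=3):
    """responses: list of per-participant frame indices where feedback occurred.
    Returns, per frame, the fraction of participants whose feedback falls
    within +/- `window` frames (a tolerance for reaction-time jitter)."""
    curve = np.zeros(n_frames)
    for frames in responses:
        hit = np.zeros(n_frames, dtype=bool)
        for f in frames:
            lo, hi = max(0, f - window), min(n_frames, f + window + 1)
            hit[lo:hi] = True          # participant "covers" a small neighborhood
        curve += hit                   # count participants agreeing at each frame
    return curve / len(responses)

def predict_backchannels(curve, threshold=0.5):
    """Frames where consensus exceeds the threshold are treated as
    prototypical backchannel opportunities for a virtual listener."""
    return np.flatnonzero(curve >= threshold)

# Example: three participants, 20 frames of the same speaker video.
responses = [[4, 12], [5, 13], [5, 18]]
curve = consensus_curve(responses, n_frames=20)
print(predict_backchannels(curve))  # frames where >=50% of listeners agree
```

Thresholding the curve follows the intuition in the abstract that agreement across independent participants separates typical responses from idiosyncratic ones; the exact cutoff is a free parameter.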

Record

  • Author

    Huang, Lixing

  • Affiliation

    University of Southern California

  • Degree grantor: University of Southern California
  • Subject: Computer Science; Psychology, Social
  • Degree: Ph.D.
  • Year: 2013
  • Pagination: 125 p.
  • Total pages: 125
  • Format: PDF
  • Language: eng
  • CLC classification:
  • Keywords:
