
Animating autonomous pedestrians.


Abstract

State-of-the-art computer graphics modeling and rendering techniques can be used to create photorealistic imagery of static objects, but they do not yet enable the automated animation of human beings with anywhere near as much fidelity. This thesis addresses the challenge. Our focus is the emulation of real pedestrians in urban environments. To this end, we develop an entirely autonomous pedestrian model that requires no centralized, global control whatsoever and is capable of performing a variety of activities in synthetic urban spaces, such as a virtual train station. The comprehensive artificial life modeling approach we adopt integrates motor, perceptual, behavioral, and cognitive components, making each of our virtual pedestrians a highly capable individual.

To support a variety of natural interactions between pedestrians and their environment, we represent the latter using hierarchical data structures that efficiently execute the perceptual queries of pedestrians to sustain their behavioral responses and enable them to plan their actions on global and local scales.

The animation system implemented using the above models enables us to run long-term simulations of pedestrians in fairly large urban environments without manual intervention. Real-time simulation can be achieved for over a thousand autonomous pedestrians. With each pedestrian under his/her own autonomous control, the characters imbue the virtual world with liveliness, social (dis)order, and a realistically complex dynamic.

In addition to the automated animation of human activity in a virtual train station, we demonstrate our autonomous pedestrian simulator in the context of virtual archaeology for visualizing urban social life in reconstructed archaeological sites. Our pedestrian simulator has also served as the basis of a testbed for designing and experimenting with visual sensor networks in the field of computer vision.
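The abstract sketches an architecture rather than code: each pedestrian couples motor, perceptual, behavioral, and cognitive layers, while a shared environment model answers perceptual queries and supports planning at global and local scales. The Python sketch below is only an illustrative reading of that division of labor under simplifying assumptions: a flat uniform grid stands in for the thesis's hierarchical environment structures, and a breadth-first search over grid cells stands in for its global planner. Every class, method, and parameter here (GridEnvironment, Pedestrian, query_nearby, plan_path, the 4 m perception radius, and so on) is a hypothetical name chosen for this example, not an identifier from the dissertation.

```python
# Minimal, self-contained sketch, NOT the thesis implementation: a flat uniform
# grid and straight-line steering stand in for the hierarchical environment
# structures and richer behaviors the abstract describes, purely to illustrate
# the perceive / plan / behave / act division of labor. All names hypothetical.
import math
from collections import deque


class GridEnvironment:
    """World map binned into square cells for fast perceptual queries."""

    def __init__(self, width, height, cell_size=2.0):
        self.cell_size = cell_size
        self.cols, self.rows = int(width // cell_size) + 1, int(height // cell_size) + 1
        self.blocked = set()   # impassable cells (static obstacles)
        self.occupants = {}    # cell -> pedestrians currently inside it

    def cell_of(self, pos):
        return (int(pos[0] // self.cell_size), int(pos[1] // self.cell_size))

    def rebuild_occupancy(self, pedestrians):
        """Re-bin every pedestrian into its cell once per simulation tick."""
        self.occupants = {}
        for p in pedestrians:
            self.occupants.setdefault(self.cell_of(p.position), []).append(p)

    def query_nearby(self, pos, radius):
        """Perceptual query: (distance, pedestrian) pairs within `radius`."""
        cx, cy = self.cell_of(pos)
        span, found = int(radius // self.cell_size) + 1, []
        for dx in range(-span, span + 1):
            for dy in range(-span, span + 1):
                for p in self.occupants.get((cx + dx, cy + dy), []):
                    d = math.dist(pos, p.position)
                    if 0.0 < d <= radius:
                        found.append((d, p))
        return found

    def plan_path(self, start, goal):
        """Global planning: breadth-first search over unblocked cells."""
        s, g = self.cell_of(start), self.cell_of(goal)
        frontier, came_from = deque([s]), {s: None}
        while frontier:
            cur = frontier.popleft()
            if cur == g:
                break
            for nxt in ((cur[0] + 1, cur[1]), (cur[0] - 1, cur[1]),
                        (cur[0], cur[1] + 1), (cur[0], cur[1] - 1)):
                if (0 <= nxt[0] < self.cols and 0 <= nxt[1] < self.rows
                        and nxt not in self.blocked and nxt not in came_from):
                    came_from[nxt] = cur
                    frontier.append(nxt)
        if g not in came_from:
            return []                      # goal unreachable
        path, cur = [], g
        while cur is not None:             # walk back from goal to start
            path.append(((cur[0] + 0.5) * self.cell_size, (cur[1] + 0.5) * self.cell_size))
            cur = came_from[cur]
        return path[::-1]


class Pedestrian:
    """One autonomous agent; no centralized controller steers it."""

    def __init__(self, position, goal, speed=1.4):
        self.position, self.goal, self.speed = position, goal, speed
        self.path, self.percepts, self.velocity = [], [], (0.0, 0.0)

    def update(self, env, dt):
        self.percepts = env.query_nearby(self.position, radius=4.0)      # perceive
        if not self.path and math.dist(self.position, self.goal) > 1.0:  # plan (global)
            self.path = env.plan_path(self.position, self.goal)
        self.velocity = self._steer()                                    # behave (local)
        self.position = (self.position[0] + self.velocity[0] * dt,       # act (motor)
                         self.position[1] + self.velocity[1] * dt)

    def _steer(self):
        if not self.path:
            return (0.0, 0.0)
        tx, ty = self.path[0]
        dx, dy = tx - self.position[0], ty - self.position[1]
        if math.hypot(dx, dy) < 0.5:       # waypoint reached, advance along the route
            self.path.pop(0)
        for d, other in self.percepts:     # nudge away from nearby pedestrians
            dx += (self.position[0] - other.position[0]) / (d * d)
            dy += (self.position[1] - other.position[1]) / (d * d)
        norm = math.hypot(dx, dy) or 1.0
        return (self.speed * dx / norm, self.speed * dy / norm)


if __name__ == "__main__":
    env = GridEnvironment(40.0, 40.0)
    crowd = [Pedestrian((1.0 + i, 1.0), goal=(38.0, 38.0)) for i in range(5)]
    for _ in range(600):                   # 60 simulated seconds at 10 Hz
        env.rebuild_occupancy(crowd)       # refresh the shared spatial index
        for p in crowd:
            p.update(env, dt=0.1)
    print([tuple(round(c, 1) for c in p.position) for p in crowd])
```

Running the module walks a handful of agents toward a common goal; each agent refreshes its percepts, keeps or recomputes a coarse route, steers locally around neighbours, and integrates its own motion, with no centralized controller involved. Replacing the flat grid with maps at several resolutions (for example, separate structures for perception and path planning) would be one way to move the sketch closer to the hierarchical representation the abstract mentions, though the dissertation itself should be consulted for the actual design.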

Bibliographic record

  • Author: Shao, Wei
  • Affiliation: New York University
  • Degree-granting institution: New York University
  • Subjects: Computer Science; Artificial Intelligence
  • Degree: Ph.D.
  • Year: 2006
  • Pages: 184 p.
  • Total pages: 184
  • Original format: PDF
  • Language: English
  • Chinese Library Classification: Automation and computer technology; Artificial intelligence theory
  • Keywords: (none listed)
