European Conference on Computer Vision (ECCV)

Learning Temporal Transformations from Time-Lapse Videos



Abstract

Based on life-long observations of physical, chemical, and biological phenomena in the natural world, humans can often easily picture in their minds what an object will look like in the future. But what about computers? In this paper, we learn computational models of object transformations from time-lapse videos. In particular, we explore the use of generative models to create depictions of objects at future times. These models address several different prediction tasks: generating a future state given a single depiction of an object, generating a future state given two depictions of an object at different times, and generating future states recursively in a recurrent framework. We provide both qualitative and quantitative evaluations of the generated results, and also conduct a human evaluation to compare variants of our models.
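The recursive setting mentioned in the abstract, where each predicted future state is fed back in as input for the next prediction step, can be sketched in miniature as follows. This is an illustrative toy, not the authors' model: `rollout_future_states` and the linear-plus-tanh `transition` are hypothetical stand-ins for a learned generative network operating on encoded depictions.

```python
import numpy as np

def rollout_future_states(x0, transition, n_steps):
    """Recursively predict future states: x_{t+1} = transition(x_t).

    A toy stand-in for recurrent generation, where each generated
    state is fed back as input for the next step.
    """
    states = [x0]
    for _ in range(n_steps):
        states.append(transition(states[-1]))
    return np.stack(states)

# Hypothetical "learned" transition: a fixed linear map plus a nonlinearity.
rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(8, 8))
transition = lambda x: np.tanh(W @ x)

x0 = rng.normal(size=8)  # stand-in for an encoded object depiction
future = rollout_future_states(x0, transition, n_steps=4)
print(future.shape)      # (5, 8): the initial state plus 4 predicted steps
```

The single-depiction and two-depiction tasks from the abstract would differ only in the conditioning input; the recurrent task is the one that reuses its own outputs, as the loop above shows.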

