Asian Conference on Computer Vision

Better Guider Predicts Future Better: Difference Guided Generative Adversarial Networks



Abstract

Predicting the future is a fanciful yet practical task. It is a key component of intelligent agents such as self-driving vehicles, medical monitoring devices, and robots. In this work, we consider generating unseen future frames from previous observations, which is notoriously hard due to the uncertainty in frame dynamics. While recent works based on generative adversarial networks (GANs) have made remarkable progress, there remain obstacles to making accurate and realistic predictions. In this paper, we propose a novel GAN based on inter-frame difference to circumvent these difficulties. More specifically, our model is a multi-stage generative network named the Difference Guided Generative Adversarial Network (DGGAN). DGGAN learns to explicitly enforce future-frame predictions guided by a synthetic inter-frame difference. Given a sequence of frames, DGGAN first uses dual paths to generate meta information. One path, the Coarse Frame Generator, predicts coarse details of future frames; the other, the Difference Guide Generator, produces a difference image that encodes complementary fine details. The coarse details are then refined under the guidance of the difference image with the support of GANs. With this model and novel architecture, we achieve state-of-the-art performance for future video prediction on UCF-101 and KITTI.
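The dual-path design described in the abstract can be sketched as a simple data flow. This is a minimal illustration with placeholder functions, not the authors' implementation: in DGGAN each stage is a learned convolutional generator trained adversarially, whereas here the coarse generator is naive linear extrapolation and the difference guide is the last observed frame difference.

```python
import numpy as np

def coarse_frame_generator(frames):
    # Placeholder for the Coarse Frame Generator: predicts a rough
    # next frame by linearly extrapolating the last two frames.
    return frames[-1] + (frames[-1] - frames[-2])

def difference_guide_generator(frames):
    # Placeholder for the Difference Guide Generator: predicts the
    # inter-frame difference image carrying fine motion details.
    return frames[-1] - frames[-2]

def refine(coarse, diff):
    # Placeholder for the refinement stage: blends the coarse
    # prediction with a difference-guided correction. In DGGAN this
    # is a generator trained with an adversarial loss.
    return 0.5 * coarse + 0.5 * (coarse + diff)

# Toy "video": four 2x2 grayscale frames with constant motion
# (pixel intensities 0, 1, 2, 3 over time).
frames = [np.full((2, 2), float(t)) for t in range(4)]
coarse = coarse_frame_generator(frames)   # all pixels 4.0
diff = difference_guide_generator(frames) # all pixels 1.0
pred = refine(coarse, diff)               # all pixels 4.5
```

The point of the sketch is only the wiring: two independent paths produce meta information (a coarse frame and a difference image), and a final stage fuses them into the refined prediction.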

