European Conference on Computer Vision

DYAN: A Dynamical Atoms-Based Network for Video Prediction



Abstract

The ability to anticipate the future is essential when making real-time critical decisions, provides valuable information for understanding dynamic natural scenes, and can help unsupervised video representation learning. State-of-the-art video prediction is based on complex architectures that need to learn large numbers of parameters, are potentially hard to train, slow to run, and may produce blurry predictions. In this paper, we introduce DYAN, a novel network with very few parameters that is easy to train and produces accurate, high-quality frame predictions faster than previous approaches. DYAN owes its good qualities to its encoder and decoder, which are designed following concepts from systems identification theory and exploit the dynamics-based invariants of the data. Extensive experiments on several standard video datasets show that DYAN is superior at generating frames and that it generalizes well across domains.
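The core idea the abstract alludes to can be illustrated with a toy sketch: model each pixel's temporal trajectory as a sparse combination of "dynamical atoms," the impulse responses ρ^t of simple LTI systems with poles ρ, then extend each atom by one time step to extrapolate the next frame. This is a minimal illustration, not the paper's implementation: the pole grid, the λ value, and the FISTA iteration budget below are assumptions, and the paper's learned pole dictionary and network structure are omitted.

```python
import numpy as np

T = 10                                      # observed frames
poles = np.linspace(0.5, 1.1, 40)           # illustrative pole grid (assumption)
D = np.stack([poles ** t for t in range(T)])        # (T, N) atom dictionary
D_ext = np.stack([poles ** t for t in range(T + 1)])  # one extra row: next step

def predict_next(y, lam=0.01, iters=500):
    """Sparse-code the length-T trajectory y over D with FISTA (l1-regularized
    least squares), then decode with the extended dictionary: the last entry of
    D_ext @ c is the extrapolated sample at time T."""
    L = np.linalg.norm(D, 2) ** 2           # Lipschitz constant of the gradient
    c = np.zeros(D.shape[1])
    z, t = c.copy(), 1.0
    for _ in range(iters):
        g = z - D.T @ (D @ z - y) / L       # gradient step on 0.5*||Dz - y||^2
        c_new = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft-threshold
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        z = c_new + (t - 1.0) / t_new * (c_new - c)  # momentum extrapolation
        c, t = c_new, t_new
    return (D_ext @ c)[-1]

# Usage: a purely geometric trajectory y_t = 0.9^t matches one atom,
# so its next sample is recovered almost exactly.
y = 0.9 ** np.arange(T)
print(predict_next(y))
```

Because the dictionary is tiny and fixed here, the sketch only conveys why a dynamics-based code needs few parameters: one pole per mode, rather than a deep stack of learned filters.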

