Applied Sciences

Improving Generative and Discriminative Modelling Performance by Implementing Learning Constraints in Encapsulated Variational Autoencoders



Abstract

Learning latent representations of observed data that favour both discriminative and generative tasks remains a challenging problem in artificial-intelligence (AI) research. Previous attempts, ranging from the convex binding of discriminative and generative models to the semisupervised learning paradigm, have rarely yielded optimal performance on both generative and discriminative tasks. To this end, in this research we harness two neuroscience-inspired learning constraints, namely dependence-minimisation and regularisation constraints, to improve the generative and discriminative modelling performance of a deep generative model. To demonstrate these learning constraints, we introduce a novel deep generative model, the encapsulated variational autoencoder (EVAE), which stacks two different variational autoencoders together with their learning algorithms. Using the MNIST digits dataset as a demonstration, the generative modelling performance of EVAEs improved under the imposed dependence-minimisation constraint, encouraging the derived deep generative model to produce varied patterns of MNIST-like digits. Using CIFAR-10(4K) as an example, a semisupervised EVAE with an imposed regularisation learning constraint achieved competitive discriminative performance on the classification benchmark, even against state-of-the-art semisupervised learning approaches.
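The abstract describes EVAEs as two variational autoencoders stacked together and trained under a dependence-minimisation constraint. As a rough illustration only, the following is a minimal PyTorch sketch of one way such a stacked pair could be wired up: an outer VAE models the observation, an inner VAE re-encodes the outer latent code, and the dependence-minimisation constraint is approximated by a squared cross-covariance penalty between the two latent codes. The module names, the choice of penalty, and the loss weight lambda_dep are assumptions made for illustration, not details taken from the paper.

```python
# Minimal sketch of a stacked ("encapsulated") VAE pair with an illustrative
# dependence-minimisation penalty. This is NOT the authors' implementation:
# the stacking scheme, the covariance-based penalty, and the loss weight
# are assumptions for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F


class VAE(nn.Module):
    """A plain Gaussian VAE used as one capsule of the stack."""

    def __init__(self, in_dim, hidden_dim, latent_dim):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(in_dim, hidden_dim), nn.ReLU())
        self.mu = nn.Linear(hidden_dim, latent_dim)
        self.logvar = nn.Linear(hidden_dim, latent_dim)
        self.dec = nn.Sequential(
            nn.Linear(latent_dim, hidden_dim), nn.ReLU(), nn.Linear(hidden_dim, in_dim)
        )

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterisation
        return self.dec(z), mu, logvar, z


def kl_to_standard_normal(mu, logvar):
    # KL divergence between the approximate posterior and a standard normal prior.
    return -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=1).mean()


def cross_covariance_penalty(z_a, z_b):
    """Illustrative dependence-minimisation term: penalise the cross-covariance
    between the two latent codes so the capsules learn complementary factors."""
    z_a = z_a - z_a.mean(dim=0)
    z_b = z_b - z_b.mean(dim=0)
    cov = z_a.t() @ z_b / z_a.size(0)
    return cov.pow(2).sum()


def evae_loss(x, outer, inner, lambda_dep=1.0):
    # Outer capsule models the observation directly.
    x_rec, mu_o, logvar_o, z_o = outer(x)
    # Inner capsule re-encodes the outer latent code (one possible stacking).
    z_rec, mu_i, logvar_i, z_i = inner(z_o)
    return (
        F.mse_loss(x_rec, x)
        + F.mse_loss(z_rec, z_o)
        + kl_to_standard_normal(mu_o, logvar_o)
        + kl_to_standard_normal(mu_i, logvar_i)
        + lambda_dep * cross_covariance_penalty(z_o, z_i)
    )


if __name__ == "__main__":
    outer = VAE(in_dim=784, hidden_dim=256, latent_dim=32)  # e.g. flattened MNIST
    inner = VAE(in_dim=32, hidden_dim=64, latent_dim=8)
    x = torch.rand(16, 784)
    print(evae_loss(x, outer, inner).item())
```

Under these assumptions, lambda_dep trades off how strongly the two latent codes are decorrelated against reconstruction quality; the paper's actual constraint and training algorithm may differ.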
