Sensors (Basel, Switzerland)

Maximum Relevance Minimum Redundancy Dropout with Informative Kernel Determinantal Point Process



Abstract

In recent years, deep neural networks have shown significant progress in computer vision due to their large generalization capacity; however, the overfitting problem ubiquitously threatens the learning process of these highly nonlinear architectures. Dropout is a recent solution to mitigate overfitting that has witnessed significant success in various classification applications. Recently, many efforts have been made to improve standard dropout using an unsupervised, merit-based semantic selection of neurons in the latent space. However, these studies do not consider the quality and quantity of task-relevant information or the diversity of the latent kernels. To address the challenge of dropping less informative neurons in deep learning, we propose an efficient end-to-end dropout algorithm that selects the most informative neurons, those with the highest correlation with the target output, while accounting for sparsity in its selection procedure. First, to promote activation diversity, we devise an approach that selects the most diverse set of neurons by making use of determinantal point process (DPP) sampling. Furthermore, to incorporate task specificity into deep latent features, a mutual information (MI)-based merit function is developed. Leveraging the proposed MI with DPP sampling, we introduce the novel DPPMI dropout, which adaptively adjusts the retention rate of neurons based on their contribution to the neural network task. Empirical studies on real-world classification benchmarks, including MNIST, SVHN, CIFAR10, and CIFAR100, demonstrate the superiority of our proposed method over recent state-of-the-art dropout algorithms in the literature.
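The abstract describes keeping a diverse, task-relevant subset of latent units by combining DPP sampling with an MI-based merit function. The following is a minimal NumPy sketch of that general idea, not the paper's implementation: it uses an absolute-correlation proxy in place of the MI merit and a greedy MAP approximation in place of exact DPP sampling, and all names (relevance_scores, greedy_dpp_keep_mask, keep_k) are illustrative.

```python
import numpy as np

def relevance_scores(acts, targets):
    """Task-relevance proxy: |Pearson correlation| of each unit's activation
    with the target (standing in for the MI-based merit in the abstract)."""
    a = acts - acts.mean(axis=0)
    t = targets - targets.mean()
    num = a.T @ t
    den = np.linalg.norm(a, axis=0) * np.linalg.norm(t) + 1e-12
    return np.abs(num / den)                      # shape (n_units,)

def greedy_dpp_keep_mask(acts, targets, keep_k):
    """Greedy MAP approximation of a k-DPP over a quality-weighted similarity
    kernel. Returns a boolean mask: True = keep the unit, False = drop it."""
    n_units = acts.shape[1]
    q = relevance_scores(acts, targets)           # per-unit "quality"
    unit = acts / (np.linalg.norm(acts, axis=0, keepdims=True) + 1e-12)
    S = unit.T @ unit                             # cosine similarity = redundancy
    L = np.outer(q, q) * S                        # quality x diversity DPP kernel
    selected, mask = [], np.zeros(n_units, dtype=bool)
    for _ in range(keep_k):
        best, best_logdet = None, -np.inf
        for j in range(n_units):
            if mask[j]:
                continue
            idx = selected + [j]
            sub = L[np.ix_(idx, idx)] + 1e-9 * np.eye(len(idx))
            sign, logdet = np.linalg.slogdet(sub)
            if sign > 0 and logdet > best_logdet:
                best, best_logdet = j, logdet
        selected.append(best)
        mask[best] = True
    return mask

# Toy usage: 128 samples, 32 latent units; keep the 16 most relevant yet diverse units.
rng = np.random.default_rng(0)
acts = rng.normal(size=(128, 32))
targets = rng.normal(size=128)
keep = greedy_dpp_keep_mask(acts, targets, keep_k=16)
print("kept units:", np.flatnonzero(keep))
```

In the end-to-end setting described in the abstract, the kernel and relevance terms would be computed from mini-batch activations during training and the retention rate adapted per layer; this sketch only illustrates the selection step on fixed activations.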
