Knowledge-Based Systems

Unsupervised feature selection via transformed auto-encoder


Abstract

As one of the fundamental research issues, feature selection plays a critical role in machine learning. By removing irrelevant features, it attempts to reduce the computational complexity of upstream tasks, usually accelerating computation and improving performance. This paper proposes an auto-encoder based scheme for unsupervised feature selection. Owing to an inherent consistency, this framework can approximately solve traditional constrained feature selection problems. Specifically, the proposed model takes non-negativity, orthogonality, and sparsity into account, and their internal characteristics are fully exploited. It can also employ alternative loss functions and flexible activation functions: the former fit a wide range of learning tasks, and the latter can act as regularization terms, imposing regularization constraints on the model. Thereafter, the proposed model is validated on multiple benchmark datasets, where various activation and loss functions are analyzed to find better feature selectors. Finally, extensive experiments demonstrate the superiority of the proposed method over the compared state-of-the-art approaches. (C) 2021 Elsevier B.V. All rights reserved.
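To make the general idea concrete, the following is a minimal sketch (not the authors' exact model) of auto-encoder based unsupervised feature selection: an auto-encoder is trained to reconstruct the data while a row/column-sparsity (L2,1) penalty on the first-layer weights drives most input features toward zero weight, and features are then ranked by the norm of their associated encoder weights. The layer sizes, penalty weight, and training schedule below are illustrative assumptions, not values from the paper.

    # Sketch: auto-encoder feature selector with an L2,1 sparsity penalty.
    # All hyperparameters here are placeholders chosen for illustration.
    import torch
    import torch.nn as nn

    class SelectorAE(nn.Module):
        def __init__(self, n_features, n_hidden=64):
            super().__init__()
            self.encoder = nn.Linear(n_features, n_hidden)  # column j holds the weights of input feature j
            self.decoder = nn.Linear(n_hidden, n_features)

        def forward(self, x):
            return self.decoder(torch.relu(self.encoder(x)))

    def l21_penalty(weight):
        # Sum of L2 norms taken per input feature (per column of the encoder weight matrix).
        return weight.norm(dim=0).sum()

    X = torch.randn(256, 100)                  # toy data: 256 samples, 100 features
    model = SelectorAE(n_features=100)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(200):
        opt.zero_grad()
        recon = model(X)
        loss = nn.functional.mse_loss(recon, X) + 1e-3 * l21_penalty(model.encoder.weight)
        loss.backward()
        opt.step()

    # Score each feature by the norm of its encoder weights; keep the top-k features.
    scores = model.encoder.weight.norm(dim=0)
    selected = torch.topk(scores, k=20).indices

The choice of reconstruction loss (here mean squared error) and activation (here ReLU) corresponds to the flexibility discussed in the abstract: swapping in other losses or activations changes the kind of regularity imposed on the learned selector.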


