IEEE Journal of Selected Topics in Signal Processing

Introduction to the Issue on Robust Subspace Learning and Tracking: Theory, Algorithms, and Applications



Abstract

The papers in this special section focus on robust subspace learning and tracking. Subspace learning theory for dimensionality reduction began with the Principal Component Analysis (PCA) formulation proposed by Pearson in 1901. PCA was first widely used for data analysis in psychometrics and chemometrics, but today it is often the first step in many kinds of exploratory data analysis, predictive modeling, classification, and clustering problems. It finds modern applications in signal processing, biomedical imaging, computer vision, process fault detection, recommendation system design, and many other domains. Over the past century, numerous other subspace learning models, both reconstructive and discriminative, have been developed in the literature to address dimensionality reduction while preserving the relevant information in ways that differ from PCA. PCA can also be viewed as a soft clustering method that seeks clusters lying in different subspaces within a dataset, and numerous clustering methods are based on dimensionality reduction. Such methods, called subspace clustering methods, extend traditional PCA-based clustering and assign data points drawn from a union of subspaces (UoS) to their respective subspaces. In many modern applications, the main limitation of subspace learning and clustering models is their sensitivity to outliers. Further developments therefore concern robust subspace learning, that is, the problem of learning a subspace in the presence of outliers. In fact, even the classical subspace learning problem under speed or memory constraints is not yet a solved problem.
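As a minimal illustration of the classical (non-robust) PCA formulation the abstract refers to, the principal subspace can be computed from the SVD of the centered data matrix. This sketch is not drawn from any paper in the issue; the function name and example data are hypothetical:

```python
import numpy as np

def pca_subspace(X, k):
    """Return an orthonormal basis (d x k) for the top-k principal
    subspace of the rows of X, computed via SVD of the centered data."""
    Xc = X - X.mean(axis=0)  # center each coordinate
    # Right singular vectors of the centered data span the principal subspace
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Vt[:k].T

# Example: noisy points near a 1-D subspace (a line) in R^3
rng = np.random.default_rng(0)
t = rng.normal(size=(200, 1))
direction = np.array([[1.0, 2.0, 2.0]]) / 3.0  # unit row vector
X = t @ direction + 0.01 * rng.normal(size=(200, 3))
U = pca_subspace(X, 1)  # recovered basis, aligned with `direction` up to sign
```

Because least-squares PCA of this kind minimizes squared residuals, a single large outlier can tilt the recovered subspace arbitrarily, which is exactly the sensitivity that robust subspace learning addresses.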

