A new technique for projecting high-dimensional data into low-dimensional spaces, called locally linear embedding (LLE), has recently been introduced. LLE offers many benefits over traditional alternatives such as principal component analysis (PCA) and multidimensional scaling (MDS). In this paper, we generalize LLE to use Mercer kernels, resulting in a method we call KLLE. Mercer kernels have become very popular, due in large part to recent successes in applying kernel methods such as support vector machines (SVMs) and kernel PCA to real-world problems. KLLE provides a powerful new tool for visualizing how Mercer kernels implicitly project data from input space to kernel feature space, which is an open and critical issue for better understanding how kernel methods work and how best to apply them.
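To ground the discussion, the following is a minimal sketch of standard LLE (the baseline the paper generalizes, not KLLE itself), using scikit-learn's `LocallyLinearEmbedding` on an illustrative swiss-roll dataset; the dataset, neighbor count, and target dimension are assumptions for illustration only.

```python
import numpy as np
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import LocallyLinearEmbedding

# Sample points from a 3-D "swiss roll" manifold (illustrative dataset,
# not taken from the paper).
X, _ = make_swiss_roll(n_samples=500, random_state=0)

# Standard LLE: reconstruct each point as a weighted combination of its
# nearest neighbors, then find low-dimensional coordinates that preserve
# those local reconstruction weights.
lle = LocallyLinearEmbedding(n_neighbors=12, n_components=2, random_state=0)
Y = lle.fit_transform(X)

print(Y.shape)  # (500, 2)
```

The kernel generalization described in the paper would, roughly speaking, replace the Euclidean neighborhood computations above with inner products given by a Mercer kernel.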