
Locally Linear Embedding versus Isotop



Abstract

Recently, a new method intended to realize conformal mappings has been published. Called Locally Linear Embedding (LLE), this method can map high-dimensional data lying on a manifold to a lower-dimensional representation that preserves angles. Although LLE is claimed to solve problems that are usually handled by neural networks such as Kohonen's Self-Organizing Maps (SOMs), the method reduces to an elegant eigenproblem with desirable properties (no parameter tuning, no local minima, etc.). The purpose of this paper is to compare the capabilities of LLE with those of Isotop, a newly developed neural method based on neighborhood preservation, the idea that has been key to the SOMs' success. To illustrate the differences between the algebraic and the neural approach, LLE and Isotop are first briefly described and then compared on well-known dimensionality reduction problems.
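As a rough illustration of the kind of mapping the abstract describes, the minimal sketch below uses scikit-learn's LocallyLinearEmbedding on the Swiss-roll dataset, a standard benchmark for dimensionality reduction; neither the library nor this particular dataset is mentioned in the abstract, so this is an assumed setup rather than the authors' experiment, and Isotop has no comparable off-the-shelf implementation here.

```python
# Minimal sketch (not the authors' code): LLE unfolding the Swiss-roll manifold.
import matplotlib.pyplot as plt
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import LocallyLinearEmbedding

# 3-D points lying on a 2-D manifold; `t` parameterizes position along the roll.
X, t = make_swiss_roll(n_samples=1500, noise=0.05, random_state=0)

# LLE reconstructs each point from its k nearest neighbours, then solves an
# eigenproblem for low-dimensional coordinates preserving those local weights.
lle = LocallyLinearEmbedding(n_neighbors=12, n_components=2, random_state=0)
Y = lle.fit_transform(X)

# Colouring by `t` shows whether the roll has been unfolded coherently.
plt.scatter(Y[:, 0], Y[:, 1], c=t, s=5)
plt.title("Swiss roll unfolded by LLE")
plt.show()
```

Apart from the choice of `n_neighbors`, the embedding is obtained in a single eigendecomposition, which matches the abstract's point that LLE avoids iterative training and local minima.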
