Philosophical Transactions. Series A: Mathematical, Physical and Engineering Sciences

Blessing of dimensionality: mathematical foundations of the statistical physics of data



Abstract

The concentration of measure phenomena were discovered as the mathematical background of statistical mechanics at the end of the nineteenth and beginning of the twentieth century, and have been explored in mathematics ever since. At the beginning of the twenty-first century, it became clear that proper utilization of these phenomena in machine learning could transform the curse of dimensionality into a blessing of dimensionality. This paper summarizes recently discovered phenomena of measure concentration that drastically simplify some machine learning problems in high dimension and allow us to correct legacy artificial intelligence systems. The classical concentration of measure theorems state that i.i.d. random points concentrate in a thin layer near a surface (a sphere or the equator of a sphere, an average- or median-level set of energy or of another Lipschitz function, etc.). The new stochastic separation theorems describe the fine structure of these thin layers: the random points are not only concentrated in a thin layer but are each linearly separable from the rest of the set, even for exponentially large random sets. The linear functionals for separating points can be chosen in the form of Fisher's linear discriminant. All artificial intelligence systems make errors. Non-destructive correction requires separating the situations (samples) on which errors occur from the samples corresponding to correct behaviour by a simple and robust classifier. The stochastic separation theorems provide such classifiers and determine a non-iterative (one-shot) procedure for their construction. This article is part of the theme issue 'Hilbert's sixth problem'.
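The separability claim in the abstract can be checked numerically. The sketch below, which is illustrative and not taken from the paper (the dimension, sample size, distribution, and threshold `alpha` are all assumptions chosen for demonstration), draws i.i.d. points uniformly from a high-dimensional unit ball and tests, for each point x, the simple linear functional l(y) = ⟨x, y⟩ against the threshold alpha·⟨x, x⟩. In high dimension, nearly every point turns out to be linearly separable from all the others by this one-shot rule, even though the sample is large:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 100, 10_000      # dimension and number of random points (illustrative)
alpha = 0.8             # separation threshold (illustrative)

# i.i.d. points uniformly distributed in the d-dimensional unit ball:
# a random direction scaled by a radius with density proportional to r^(d-1).
g = rng.standard_normal((n, d))
directions = g / np.linalg.norm(g, axis=1, keepdims=True)
radii = rng.random(n) ** (1.0 / d)
points = directions * radii[:, None]

# Point x is separated from the rest of the set by the linear functional
# l(y) = <x, y> if <x, y> < alpha * <x, x> for every other point y.
gram = points @ points.T
sq_norms = np.diag(gram).copy()
np.fill_diagonal(gram, -np.inf)               # exclude self-comparison
separable = gram.max(axis=1) < alpha * sq_norms

print(f"fraction of points linearly separable from the rest: "
      f"{separable.mean():.3f}")
```

With these settings the reported fraction is close to 1, illustrating the "thin structure" described above: inner products between independent high-dimensional directions concentrate near zero, so the threshold test succeeds for almost every point. In low dimension (try `d = 3`) the fraction drops sharply.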
