Kybernetes: The International Journal of Systems & Cybernetics

Some theoretical results of learning theory based on random sets in set-valued probability space



Abstract

Purpose - The purpose of this paper is to introduce, for the first time, some basic elements of statistical learning theory (SLT) based on random set samples in a set-valued probability space, and to generalize Vapnik's key theorem and bounds on the rate of uniform convergence to random sets in set-valued probability space. SLT based on random samples drawn in a probability space is currently regarded as one of the fundamental theories of statistical learning from small samples, and it has become a novel and important field of machine learning alongside other concepts and architectures such as neural networks. However, the classical theory cannot handle statistical learning problems in which the samples are random sets. Design/methodology/approach - Motivated by several applications, this paper develops an SLT based on random set samples. First, a law of large numbers for random sets is proved. Second, the distribution function and the expectation of random sets are defined, and the expected risk functional and the empirical risk functional are discussed. A notion of strict consistency of the principle of empirical risk minimization is then presented. Findings - The paper formulates and proves the key theorem and presents the bounds on the rate of uniform convergence of learning theory based on random sets in set-valued probability space, which become cornerstones of the theoretical foundations of SLT for random set samples. Originality/value - The paper provides a detailed analysis of some theoretical results of learning theory.
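For context, the risk functionals and the consistency notion the abstract refers to are, in Vapnik's classical point-valued setting, roughly as follows; this is a standard sketch of the framework the paper generalizes, not the paper's own set-valued definitions:

```latex
% Classical (point-valued) risk functionals from Vapnik's SLT;
% the paper replaces the sample points z_i by random sets in a
% set-valued probability space.
R(\alpha) = \int L(z, \alpha)\, dF(z)
  \qquad \text{(expected risk)}
\qquad
R_{\mathrm{emp}}(\alpha) = \frac{1}{\ell} \sum_{i=1}^{\ell} L(z_i, \alpha)
  \qquad \text{(empirical risk)}

% ERM is strictly consistent when, for every nonempty subset
% \Lambda(c) = \{\alpha : R(\alpha) \ge c\} of the parameter set,
\inf_{\alpha \in \Lambda(c)} R_{\mathrm{emp}}(\alpha)
  \;\xrightarrow{\;P\;}\;
\inf_{\alpha \in \Lambda(c)} R(\alpha),
  \qquad \ell \to \infty .
```

The key theorem of learning theory then states that strict consistency of ERM is equivalent to one-sided uniform convergence of the empirical risks to the expected risks over the hypothesis class; the paper's contribution is establishing the analogous equivalence and convergence-rate bounds when the samples are random sets.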
