Neural Networks: The Official Journal of the International Neural Network Society

Autonomous learning derived from experimental modeling of physical laws

Abstract

This article deals with the experimental description of physical laws by the probability density function of measured data. A Gaussian mixture model specified by representative data and related probabilities is utilized for this purpose. The information cost function of the model is described, in terms of information entropy, as the sum of the estimation error and the redundancy. A new method is proposed for finding the minimum of this cost function. The number of resulting prototype data depends on the accuracy of measurement. Their adaptation resembles a self-organized, highly non-linear cooperation between neurons in an artificial neural network. A prototype datum corresponds to the memorized content, while the related probability corresponds to the excitability of the neuron. The method includes no free parameters except the objectively determined accuracy of the measurement system and is therefore convenient for autonomous execution. Since the representative data are generally less numerous than the measured data, the method is applicable to a rather general and objective compression of the overwhelming experimental data produced by automatic data-acquisition systems. Such compression is demonstrated on analytically determined random noise and on measured traffic flow data. The flow over a day is described by a vector of 24 components. The set of 365 vectors measured over one year is compressed by autonomous learning to just 4 representative vectors and their related probabilities. These vectors represent the flow on normal working days and on weekends or holidays, while the related probabilities correspond to the relative frequencies of these days. This example reveals that autonomous learning yields a new basis for the interpretation of representative data and of the optimal model structure.
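The compression described at the end of the abstract can be illustrated with a short sketch. The snippet below is not the authors' method: it fits a standard EM-based Gaussian mixture (scikit-learn's GaussianMixture) to synthetic daily traffic profiles instead of minimizing the entropy-based cost function proposed in the paper, and the 4 components are fixed by hand rather than derived from the measurement accuracy. The data shapes and names are assumptions taken only from the abstract.

```python
# Minimal sketch: compress a year of daily traffic profiles into a few
# representative vectors and probabilities, using scikit-learn's GMM as a
# stand-in for the paper's entropy-based cost-function minimization.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Synthetic stand-in for one year of hourly traffic counts:
# 365 daily profiles, each a 24-component vector (one value per hour).
hours = np.arange(24)
workday = (400 * np.exp(-0.5 * ((hours - 8) / 2) ** 2)
           + 450 * np.exp(-0.5 * ((hours - 17) / 2) ** 2) + 50)
weekend = 250 * np.exp(-0.5 * ((hours - 14) / 4) ** 2) + 40

profiles = []
for day in range(365):
    base = weekend if day % 7 in (5, 6) else workday
    profiles.append(base + rng.normal(scale=20.0, size=24))
X = np.array(profiles)                      # shape (365, 24)

# Compress the year into a handful of representative daily profiles.
# The paper derives the number of prototypes from the measurement accuracy;
# here the 4 components are simply fixed, matching the abstract's example.
gmm = GaussianMixture(n_components=4, covariance_type="diag",
                      random_state=0).fit(X)

# gmm.means_   -> 4 representative 24-hour profiles (prototype data)
# gmm.weights_ -> their probabilities (relative frequencies of day types)
for mean, weight in zip(gmm.means_, gmm.weights_):
    print(f"weight {weight:.2f}  peak hour {int(mean.argmax()):2d}  "
          f"peak flow {mean.max():.0f}")
```

With input that separates this cleanly, the component weights summed over workday-like and weekend-like means should come out near 5/7 and 2/7, mirroring the relative day frequencies the abstract reports for the measured data; covariance_type="diag" keeps each representative profile and its per-hour spread directly interpretable.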
