IEEE Transactions on Neural Networks and Learning Systems

Generalization Bounds of ERM-Based Learning Processes for Continuous-Time Markov Chains

Abstract

Many existing results in statistical learning theory are based on the assumption that samples are independently and identically distributed (i.i.d.). However, the i.i.d. assumption is not suitable for practical problems in which samples are time dependent. In this paper, we are mainly concerned with the empirical risk minimization (ERM) based learning process for time-dependent samples drawn from a continuous-time Markov chain. This learning process covers many practical applications, e.g., time-series prediction and the estimation of channel state information. It is therefore important to study its theoretical properties, including the generalization bound, asymptotic convergence, and rate of convergence. Notably, since the samples in this learning process are time dependent, the concerns of this paper cannot (at least not straightforwardly) be addressed by existing methods developed under the i.i.d. sample assumption. We first develop a deviation inequality for a sequence of time-dependent samples drawn from a continuous-time Markov chain and present a symmetrization inequality for such a sequence. Using the resulting deviation and symmetrization inequalities, we then obtain generalization bounds for the ERM-based learning process on time-dependent samples drawn from a continuous-time Markov chain. Finally, based on these generalization bounds, we analyze the asymptotic convergence and the rate of convergence of the learning process.
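
As a concrete illustration of the setting described above, the minimal sketch below draws time-dependent samples from a simple two-state continuous-time Markov chain and performs ERM, i.e., it selects the hypothesis minimizing the empirical risk (1/n) Σᵢ ℓ(f, zᵢ) over the observed samples. The generator matrix, the observation epochs, the squared loss, and the constant-predictor hypothesis class are illustrative assumptions, not constructions from the paper.

```python
# Minimal sketch (not the paper's construction): time-dependent samples from a
# two-state continuous-time Markov chain, followed by ERM over a toy
# hypothesis class of constant predictors under squared loss.
import numpy as np

rng = np.random.default_rng(0)

# Generator (rate) matrix Q of a two-state continuous-time Markov chain
# (illustrative choice; stationary distribution is (1/3, 2/3)).
Q = np.array([[-1.0, 1.0],
              [0.5, -0.5]])

def simulate_ctmc(Q, t_max, x0=0):
    """Simulate a CTMC path (jump times and visited states) up to time t_max."""
    times, states = [0.0], [x0]
    t, x = 0.0, x0
    while True:
        rate = -Q[x, x]
        t += rng.exponential(1.0 / rate)       # exponential holding time
        if t > t_max:
            break
        probs = Q[x].copy()                    # embedded-chain jump probabilities
        probs[x] = 0.0
        probs /= probs.sum()
        x = rng.choice(len(probs), p=probs)
        times.append(t)
        states.append(x)
    return np.array(times), np.array(states)

def state_at(times, states, t):
    """State of the (right-continuous) chain path at time t."""
    idx = np.searchsorted(times, t, side="right") - 1
    return states[idx]

# Time-dependent samples: observe the chain at equally spaced epochs.
t_max, n = 200.0, 400
obs_times = np.linspace(0.0, t_max, n)
times, states = simulate_ctmc(Q, t_max)
z = np.array([state_at(times, states, t) for t in obs_times], dtype=float)

# ERM over constant predictors f_c(z) = c with squared loss, minimized on a grid.
grid = np.linspace(0.0, 1.0, 101)
emp_risk = [(np.mean((z - c) ** 2), c) for c in grid]
best_risk, c_hat = min(emp_risk)
print(f"ERM solution c_hat = {c_hat:.2f}, empirical risk = {best_risk:.4f}")
```

Because the chain mixes, the empirical risk minimizer concentrates around the stationary-mean predictor c = 2/3 as the observation horizon grows; quantifying such behaviour for dependent samples is the kind of question the deviation and symmetrization inequalities in the paper address.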
