Journal of Xi'an Jiaotong University (《西安交通大学学报》)

A Greedy Layer-Wise Dictionary Training Algorithm Using a Genetic Algorithm

Abstract

To address the problem of excessively large residuals in sparse representation, a greedy layer-wise dictionary training algorithm based on a genetic algorithm is proposed. The algorithm first reshapes the data samples into one-dimensional vectors, then divides the dictionary training problem into several sub-problems and trains the dictionary layer by layer in a greedy fashion. To find the optimum of each layer with high probability, a genetic algorithm is used to train each dictionary layer, and the trained layers are then cascaded to form the final dictionary. When training a single layer, number matrices are first used to represent the classification of the samples; the average residual energy of the low-rank approximation then serves as the fitness measure, winning individuals are chosen by tournament selection, and new individuals are generated by single-point crossover and mutation. Experiments on sparse-representation signal reconstruction of binary sequences show that, when the number of training samples is small, the dictionary trained by the proposed algorithm improves the reconstruction SNR by a factor of 10 or more over the traditional kernel singular value decomposition (KSVD) algorithm under the same sparsity constraint.
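The per-layer genetic search described in the abstract can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: individuals are encoded as real-valued dictionary atoms, the fitness is simplified to the negative mean residual energy of a rank-1 approximation (the paper measures the average residual energy of a low-rank approximation over sample classes), and the population size, generation count, tournament size, and mutation rate are all assumed values.

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(atom, samples):
    # Negative mean residual energy after projecting each sample onto the
    # normalized atom: smaller residual -> higher fitness.
    a = atom / (np.linalg.norm(atom) + 1e-12)
    coeffs = samples @ a                      # best rank-1 coefficient per sample
    residual = samples - np.outer(coeffs, a)
    return -np.mean(np.sum(residual ** 2, axis=1))

def tournament_select(pop, fits, k=3):
    # Tournament selection: pick k random individuals, keep the fittest.
    idx = rng.choice(len(pop), size=k, replace=False)
    return pop[idx[np.argmax(fits[idx])]]

def crossover(p1, p2):
    # Single-point crossover of two parent atoms.
    point = rng.integers(1, len(p1))
    return np.concatenate([p1[:point], p2[point:]])

def mutate(atom, rate=0.05):
    # Replace each coordinate with fresh Gaussian noise with probability `rate`.
    mask = rng.random(len(atom)) < rate
    child = atom.copy()
    child[mask] = rng.standard_normal(mask.sum())
    return child

def train_layer(samples, pop_size=40, gens=100):
    # Evolve a population of candidate atoms for one dictionary layer and
    # return the fittest individual of the final generation.
    dim = samples.shape[1]
    pop = rng.standard_normal((pop_size, dim))
    for _ in range(gens):
        fits = np.array([fitness(a, samples) for a in pop])
        pop = np.array([
            mutate(crossover(tournament_select(pop, fits),
                             tournament_select(pop, fits)))
            for _ in range(pop_size)
        ])
    fits = np.array([fitness(a, samples) for a in pop])
    return pop[np.argmax(fits)]
```

In a layer-wise scheme, the residual left by one trained layer would become the training data for the next, and the per-layer atoms would be cascaded into the final dictionary.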
