
l(1)-norm sparse Bayesian learning: Theory and applications.


Abstract

The elements in the real world are often sparsely connected. For example, in a social network, each individual is directly connected to only a small portion of the people in the network; for certain diseases (such as breast cancer), even though humans have tens of thousands of genes, only a small number of them are linked to the disease; and in a filter modeling an acoustic room impulse response, only a small portion of the filter coefficients are nonzero. Discovering sparse representations of the real world is important because they provide not only the cleanest insight for understanding the world but also the most efficient means of changing it. Finding sparse representations has therefore attracted a great deal of research effort over the past decade, and it has been a driving force behind many exciting new fields, such as sparse coding and compressive sampling.

The research effort on finding sparse representations has covered both theory and applications. On the theoretical side, researchers have developed many approaches for encouraging sparse solutions (such as nonnegativity constraints, l1-norm sparsity regularization, and sparse Bayesian learning with independent Gaussian priors) and have established conditions under which the true, sparse solutions can be recovered by those approaches. Meanwhile, finding sparse representations has found applications across a wide spectrum of fields, including acoustic/image signal processing, computer vision, natural language processing, bioinformatics, and financial modeling.

However, despite the intense study of sparse solutions over the last decade, one fundamental issue has remained almost untouched: how sparse is optimally sparse for representing given data?

This thesis aims to answer this fundamental question by establishing a theory of l1-norm sparse Bayesian learning.
In particular, using l1-norm regularized least squares as an example, we show how l1-norm sparse Bayesian learning extends conventional uniform l1-norm sparsity regularization, in which all variables desired to be sparse share a single scalar regularization parameter, to independent l1-norm sparsity regularization, in which each variable is associated with its own regularization parameter. Under independent l1-norm sparsity regularization, the optimal sparseness of a solution is fully defined in a Bayesian sense via the optimal l1-norm regularization parameters, which are inferred by learning directly from the data. This is why we call our Bayesian approach sparse learning; it differs fundamentally from conventional methods, in which there is only a single l1-norm regularization parameter and it is determined in an ad hoc manner (e.g., by cross-validation).

The proposed l1-norm sparse Bayesian learning shows superior performance in both simulations and real examples. Our simulation results demonstrate that it accurately resolves the true sparseness of solutions even from very noisy data, and that it outperforms both conventional uniform l1-norm regularization and l2-norm sparse Bayesian learning (also known as the relevance vector machine). In real examples, we show that l1-norm sparse Bayesian learning is effective for speech dereverberation and for acoustic time difference of arrival (TDOA) estimation in reverberant environments, both of which are hard problems that have remained open after a long history of research.
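The contrast between uniform and independent l1-norm regularization described above can be sketched numerically. The snippet below is an illustrative assumption, not the thesis's own inference algorithm: it solves the weighted l1-regularized least-squares objective min_x 0.5||Ax - y||^2 + sum_i lam_i |x_i| by plain iterative soft-thresholding (ISTA), where `lam` may be a single scalar (the conventional uniform case) or a vector with one parameter per coefficient (the independent case, here supplied by hand rather than learned in a Bayesian manner).

```python
import numpy as np

def soft_threshold(z, thresh):
    """Elementwise soft-thresholding: the proximal operator of the l1 norm."""
    return np.sign(z) * np.maximum(np.abs(z) - thresh, 0.0)

def weighted_l1_least_squares(A, y, lam, n_iter=500):
    """Minimize 0.5*||A x - y||^2 + sum_i lam_i*|x_i| by ISTA.

    `lam` may be a scalar (uniform l1 regularization) or a length-p
    vector (independent per-coefficient regularization).
    """
    lam = np.broadcast_to(np.asarray(lam, dtype=float), (A.shape[1],))
    step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1/L, L = spectral norm squared
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)            # gradient of the quadratic term
        x = soft_threshold(x - step * grad, step * lam)
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((50, 10))
    x_true = np.zeros(10)
    x_true[0], x_true[3] = 2.0, -1.5        # a 2-sparse ground truth
    y = A @ x_true + 0.01 * rng.standard_normal(50)

    x_uniform = weighted_l1_least_squares(A, y, 0.1)          # one shared parameter
    lam_vec = np.full(10, 10.0)                               # heavy penalty everywhere...
    lam_vec[[0, 3]] = 0.1                                     # ...except the true support
    x_indep = weighted_l1_least_squares(A, y, lam_vec)        # per-coefficient parameters
```

With per-coefficient parameters tuned to the true support, the spurious coefficients are driven exactly to zero; the point of the thesis is that such parameters need not be hand-picked but can be inferred from the data.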

Bibliographic Details

  • Author

    Lin, Yuanqing.

  • Affiliation

    University of Pennsylvania.

  • Awarding institution: University of Pennsylvania.
  • Subject: Electronics and Electrical Engineering; Artificial Intelligence.
  • Degree: Ph.D.
  • Year: 2008
  • Pages: 111 p.
  • Total pages: 111
  • Format: PDF
  • Language: eng
