
Structure Learning in Locally Constant Gaussian Graphical Models.


Abstract

Zero entries in the inverse covariance matrix of a multivariate Gaussian random vector correspond one-to-one with conditional independence between the corresponding pairs of components. A challenging aspect of sparse structure learning is the well-known "small n, large p" scenario. Several algorithms have been proposed to solve this problem; among the most popular are neighborhood selection with the lasso (Meinshausen and Bühlmann), a block-coordinate descent algorithm for estimating the covariance matrix (Banerjee et al.), and the graphical lasso (Tibshirani et al.).

In the first part of this thesis, an alternative methodology is proposed for Gaussian graphical models on manifolds, in which spatial information is judiciously incorporated into the estimation procedure. This line of work was initiated by Honorio et al. (2009), who proposed an extension of the coordinate descent approach, called the "coordinate direction descent" approach, that incorporates the local constancy property of spatial neighbors. However, Honorio et al. provide only an intuitive formalization and no theoretical investigation. Here I propose an algorithm that handles local geometry in Gaussian graphical models. It extends Meinshausen and Bühlmann's idea of nodewise (successive) regression by using a different penalty: neighborhood information enters the penalty term, and the resulting procedure is called the neighborhood-fused lasso algorithm. I show by simulation, and prove theoretically, the asymptotic model selection consistency of the proposed method, and I establish convergence to the ground truth at rates faster than the standard ones when the assumption of local constancy holds. This modification has numerous practical applications, for example in the analysis of MRI data and other two-dimensional spatial manifold data, in order to study spatial aspects of the human brain or of moving objects.

In the second part of the thesis, I discuss smoothing techniques on Riemannian manifolds using local information. Estimation of smoothed diffusion tensors from diffusion-weighted magnetic resonance images (DW-MRI or DWI) of the human brain is usually a two-step procedure: the first step is a regression (linear or non-linear) and the second step is a smoothing (isotropic or anisotropic). I extend smoothing ideas from Euclidean space to non-Euclidean space by running a conjugate gradient algorithm on the manifold of positive definite matrices. Empirically, this method performs better than the two-step smoothing procedure. This is collaborative work with Debashis Paul, Jie Peng and Owen Carmichael.
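The abstract does not give the exact form of the neighborhood-fused penalty; a minimal sketch, assuming the standard Meinshausen-Bühlmann nodewise lasso regression and a hypothetical spatial adjacency set \mathcal{N} of coefficient pairs, might read

\hat{\beta}^{(a)} = \arg\min_{\beta \in \mathbb{R}^{p-1}} \; \frac{1}{n}\,\lVert X_a - X_{\setminus a}\beta \rVert_2^2 \;+\; \lambda \lVert \beta \rVert_1 \;+\; \mu \sum_{(j,k) \in \mathcal{N}} \lvert \beta_j - \beta_k \rvert ,

where X_a is the n-vector of observations on variable a, X_{\setminus a} collects the remaining variables, \lambda is the usual lasso tuning parameter, and \mu is a hypothetical second tuning parameter that fuses coefficients of spatially neighboring variables. An edge (a, b) is added to the estimated graph when \hat{\beta}^{(a)}_b \neq 0, combined across nodes by an AND or OR rule as in Meinshausen and Bühlmann; when the true precision matrix is locally constant, the fusion term encourages neighboring coefficients to take equal values, which is consistent with the local constancy assumption under which the faster rates are claimed.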
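For the second part, the regression step of the standard two-step procedure typically fits the log-linearized diffusion tensor model \log(S_i / S_0) = -b\, g_i^{\top} D\, g_i voxel by voxel, where S_i is the signal along gradient direction g_i, S_0 is the baseline signal, and b is the acquisition b-value. The manifold smoothing objective itself is not spelled out in the abstract; one illustrative formulation, assuming the affine-invariant metric on symmetric positive definite matrices and hypothetical spatial weights w_{uv}, smooths the raw tensors \tilde{D}_u at each voxel v by solving

\hat{D}_v = \arg\min_{D \succ 0} \; \sum_{u \in N(v)} w_{uv}\, d^2\bigl(D, \tilde{D}_u\bigr), \qquad d(A, B) = \bigl\lVert \log\bigl(A^{-1/2} B A^{-1/2}\bigr) \bigr\rVert_F ,

with the minimization carried out by a conjugate gradient iteration whose search directions live in the tangent space at the current iterate and are mapped back to the manifold via the matrix exponential. This is a sketch under the stated assumptions, not necessarily the specific objective used in the thesis.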

Bibliographic record

  • Author

    Ganguly, Apratim.

  • Author affiliation

    University of California, Davis.

  • Degree-granting institution: University of California, Davis.
  • Subjects: Statistics; Mathematics
  • Degree: Ph.D.
  • Year: 2014
  • Pages: 133 p.
  • Total pages: 133
  • Format: PDF
  • Language: English
