
Quadratic Discriminant Analysis Revisited.



Abstract

In this thesis, we revisit quadratic discriminant analysis (QDA), a standard classification method. Specifically, we investigate the parameter estimation and dimension reduction problems for QDA.

Traditionally, the parameters of QDA are estimated generatively; that is, the parameters are estimated by maximizing the joint likelihood of observations and their labels. In practice, classical QDA, though computationally efficient, often underperforms discriminative classifiers such as SVM, boosting methods, and logistic regression. Motivated by recent research on hybrid generative/discriminative learning, we propose to estimate the parameters of QDA by minimizing a convex combination of the negative joint log-likelihood and the negative conditional log-likelihood of observations and their labels. For this purpose, we propose an iterative majorize-minimize (MM) algorithm for classifiers whose conditional distributions belong to the exponential family; in each iteration of the MM algorithm, a convex optimization problem needs to be solved. To solve the convex problem derived specifically for QDA, we propose a block-coordinate descent algorithm that sequentially updates the parameters of QDA; for each update, we present a trust-region method whose subproblems admit closed-form solutions in each iteration. Numerical experiments show: 1) the hybrid approach to QDA is competitive with, and in some cases significantly better than, other approaches to QDA and SVM with a polynomial kernel (…).

Dimension reduction methods are commonly used to extract more compact features in the hope of building more efficient and possibly more robust classifiers. It is well known that Fisher's discriminant analysis generates optimal lower-dimensional features for linear discriminant analysis. However, for QDA, "so far there has been no universally accepted dimension-reduction technique in the literature," though considerable efforts have been made. To construct a dimension reduction method for QDA, we generalize the Fukunaga-Koontz transformation and propose novel affine feature extraction (AFE) methods for binary QDA. The proposed AFE methods have closed-form solutions and thus can be computed efficiently. We show that 1) the AFE methods have desirable geometric, statistical, and information-theoretic properties; and 2) the AFE methods generalize dimension reduction methods for LDA and for QDA with equal means. Numerical experiments show that the newly proposed AFE methods are competitive with, and in some cases significantly better than, some commonly used linear dimension reduction techniques for QDA in the literature.
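For concreteness, the hybrid estimation criterion described in the abstract can be sketched as follows; the notation here (the mixing weight \alpha, the parameter symbol \theta, and the sample index i) is ours and is not taken from the thesis:

    \min_{\theta} \; \alpha \sum_{i=1}^{n} \bigl[ -\log p_{\theta}(x_i, y_i) \bigr]
        \; + \; (1-\alpha) \sum_{i=1}^{n} \bigl[ -\log p_{\theta}(y_i \mid x_i) \bigr],
    \qquad \alpha \in [0, 1].

Setting \alpha = 1 recovers the classical generative (maximum joint likelihood) estimate of QDA, while \alpha = 0 gives a purely discriminative (conditional likelihood) fit; intermediate values give the hybrid estimator studied in the thesis.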
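The AFE methods generalize the classical Fukunaga-Koontz transformation (FKT). As background only, below is a minimal NumPy sketch of the classical two-class FKT; the function name, the eigenvalue-ranking rule, and the small regularization floor are illustrative assumptions and do not reproduce the thesis's AFE formulation:

    import numpy as np

    def fukunaga_koontz_directions(S1, S2, k):
        """Classical Fukunaga-Koontz transform (FKT) for two classes.

        S1, S2 : per-class covariance (or scatter) matrices, shape (d, d).
        k      : number of directions to keep.

        Returns a (d, k) matrix whose columns are the k most discriminative
        directions in the FKT sense.
        """
        # Symmetric whitening of the summed covariance: P (S1 + S2) P = I.
        evals, evecs = np.linalg.eigh(S1 + S2)
        P = evecs @ np.diag(1.0 / np.sqrt(np.maximum(evals, 1e-12))) @ evecs.T

        # Eigen-decompose the whitened S1; the whitened S2 shares these
        # eigenvectors and has eigenvalues 1 - lam.
        lam, V = np.linalg.eigh(P @ S1 @ P)

        # Rank directions by how far their eigenvalues are from 1/2:
        # eigenvalues near 1 or 0 indicate that one class dominates that axis.
        order = np.argsort(-np.abs(lam - 0.5))[:k]

        # Map the selected whitened-space eigenvectors back to the input space.
        return (P @ V)[:, order]

The key property used here is that, after whitening S1 + S2, the two whitened class matrices share eigenvectors and their eigenvalues sum to one, so directions with eigenvalues far from 1/2 are the most informative for separating the two classes under the FKT criterion; the AFE methods described in the abstract build on and generalize this construction.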

Bibliographic record

  • Author

    Cao, Wenbo.

  • Author affiliation

    City University of New York.

  • Degree grantor: City University of New York.
  • Subject: Computer Science.
  • Degree: Ph.D.
  • Year: 2015
  • Pages: 210 p.
  • Total pages: 210
  • Format: PDF
  • Language: eng
  • CLC classification
  • Keywords
