IEEE Transactions on Neural Networks and Learning Systems

Multiplicative Update Rules for Concurrent Nonnegative Matrix Factorization and Maximum Margin Classification



Abstract

State-of-the-art classification methods based on nonnegative matrix factorization (NMF) involve two consecutive, independent steps. The first performs data transformation (dimensionality reduction), and the second classifies the transformed data using methods such as nearest neighbor/centroid or support vector machines (SVMs). In the following, we focus on NMF factorization followed by SVM classification. Typically, the parameters of these two steps, e.g., the NMF bases/coefficients and the support vectors, are optimized independently, leading to suboptimal classification performance. In this paper, we merge these two steps into one by incorporating maximum margin classification constraints into the standard NMF optimization. The notion behind the proposed framework is to perform NMF while ensuring that the margin between the projected data of the two classes is maximal. The concurrent NMF factorization and support vector optimization are performed through a set of multiplicative update rules. In the same context, the maximum margin classification constraints are imposed on the NMF problem with additional discriminant constraints, and the respective multiplicative update rules are derived. The impact of the maximum margin classification constraints on the NMF factorization problem is addressed in Section VI. Experimental results on several databases indicate that incorporating the maximum margin classification constraints into the NMF and discriminant NMF objective functions improves classification accuracy.
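To make the conventional two-step baseline that the abstract contrasts against concrete, the sketch below performs standard NMF with the classic Lee-Seung multiplicative updates and then trains a linear SVM on the resulting coefficients. The toy data, the rank k, the iteration count, and the use of scikit-learn's SVC are illustrative assumptions; the paper's concurrent update rules, which couple the factorization with the maximum margin constraints, are not reproduced here.

```python
# Minimal sketch of the two-step baseline: (1) NMF dimensionality reduction via
# the standard Lee-Seung multiplicative updates, (2) a linear (maximum margin)
# SVM trained on the reduced coefficients. Illustrative only; the paper's joint
# optimization with margin constraints is not implemented here.
import numpy as np
from sklearn.svm import SVC

def nmf_multiplicative(V, k, n_iter=200, eps=1e-9, seed=0):
    """Factorize a nonnegative matrix V (d x n) as W @ H, with W: d x k and
    H: k x n, using multiplicative updates for the Frobenius objective."""
    rng = np.random.default_rng(seed)
    d, n = V.shape
    W = rng.random((d, k)) + eps
    H = rng.random((k, n)) + eps
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update coefficients
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # update bases
    return W, H

# Hypothetical toy data: 100 nonnegative samples of dimension 50, two classes.
rng = np.random.default_rng(1)
X = rng.random((100, 50))
y = rng.integers(0, 2, size=100)

# Step 1: reduce dimensionality with NMF (samples are the columns of V).
W, H = nmf_multiplicative(X.T, k=10)

# Step 2: classify the NMF coefficients with a linear SVM.
clf = SVC(kernel="linear").fit(H.T, y)
print("training accuracy of the two-step baseline:", clf.score(H.T, y))
```

In this baseline the factorization is computed without any knowledge of the labels, which is precisely the decoupling the paper addresses by optimizing the bases, coefficients, and support vectors concurrently.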
