
Kernel multi-task latent analysis.



Abstract

In this thesis, we develop a Multi-task Latent Analysis (MLA) approach and its nonlinear version, Kernel Multi-task Latent Analysis (KMLA), for modelling many-to-many relationships between inputs and responses. Like (Kernel) Partial Least Squares (KPLS), KMLA performs dimensionality reduction targeted towards a multi-task loss function. KMLA is more general: it can employ widely used convex loss functions for inference tasks while retaining many of the desirable properties of KPLS. KMLA achieves inductive transfer between tasks by forcing all tasks to share the same latent variables (LVs).

KMLA is an efficient multi-task method. For a given dataset and loss function, KMLA produces linear or nonlinear features for all tasks that are suitable for data visualization, dimensionality reduction, and generalization. KMLA converges in finitely many steps to the optimal solution of the original problem, and stopping after a small number of LVs yields models that generalize well. The KMLA framework can thus achieve inductive transfer between tasks, allowing more accurate models to be built from less data.

We apply KMLA to chromatography problems and run experiments on two datasets: one from protein cation-exchange chromatography and the other from displacer anion-exchange chromatography. In linear MLA, we model all tasks simultaneously to improve overall generalization, and we compare predicting tasks one-by-one against predicting all tasks jointly under different regression losses. When the responses are highly correlated, predicting all tasks jointly is better than predicting them one-by-one. We also extend nonlinear KMLA to semi-supervised learning: the target is one response, the other responses are related tasks, the input data are the same for every response, and responses may be partially unlabelled. The goal is to use the related tasks and the known part of the target to predict the unlabelled part of the target. Modelling multiple responses simultaneously with KMLA, including the unlabelled information on the target, can outperform modelling each response individually. A paired ranking loss is also introduced. Results demonstrate that multi-task modelling is better than single-task modelling for both regression and ranking.
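To make the shared-latent-variable idea concrete, below is a minimal, illustrative sketch in Python/NumPy. It extracts KPLS-style score vectors that all response columns (tasks) must share, then regresses every task on those shared LVs. The squared-error loss, the RBF kernel, and the helper names `rbf_kernel` and `shared_latent_scores` are assumptions for illustration only; this is not the thesis's actual algorithm, which generalizes the loss function beyond squared error.

```python
import numpy as np

def rbf_kernel(X, Z, gamma=0.5):
    """Gram matrix of an RBF kernel (the kernel choice is an assumption;
    the abstract does not fix a particular kernel)."""
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

def shared_latent_scores(K, Y, n_components, n_iter=200, tol=1e-10):
    """KPLS-style extraction of score vectors (latent variables) shared by
    every column of Y, i.e. by every task. A sketch of the shared-LV idea
    under a squared-error loss; KMLA generalizes the loss, this toy does not."""
    n = K.shape[0]
    Kd, Yd = np.array(K, dtype=float), np.array(Y, dtype=float)
    T = np.zeros((n, n_components))
    for a in range(n_components):
        t = Kd @ Yd[:, 0]
        t /= np.linalg.norm(t)
        for _ in range(n_iter):
            # power iteration on Kd @ Yd @ Yd.T: dominant score direction
            t_new = Kd @ (Yd @ (Yd.T @ t))
            t_new /= np.linalg.norm(t_new)
            if np.linalg.norm(t_new - t) < tol:
                t = t_new
                break
            t = t_new
        T[:, a] = t
        # deflate so the next latent variable is orthogonal to this one
        P = np.eye(n) - np.outer(t, t)
        Kd = P @ Kd @ P
        Yd = P @ Yd
    return T

# Toy usage: three correlated regression tasks share the same latent space.
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 5))
signal = X[:, 0] - X[:, 1]
Y = np.column_stack([signal + 0.1 * rng.normal(size=60) for _ in range(3)])
T = shared_latent_scores(rbf_kernel(X, X), Y, n_components=2)
B, *_ = np.linalg.lstsq(T, Y, rcond=None)  # regress all tasks on shared LVs
print("training MSE per task:", ((T @ B - Y) ** 2).mean(axis=0))
```

The deflation step keeps successive LVs orthogonal, so truncating after a few components plays the same regularizing role the abstract attributes to stopping with a small number of LVs.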

Bibliographic details

  • Author: Xiang, Zhi
  • Affiliation: Rensselaer Polytechnic Institute
  • Degree grantor: Rensselaer Polytechnic Institute
  • Subject: Mathematics
  • Degree: Ph.D.
  • Year: 2005
  • Pages: 95 p.
  • Format: PDF
  • Language: English (eng)
