Conference on Uncertainty in Artificial Intelligence

High-dimensional Joint Sparsity Random Effects Model for Multi-task Learning

Abstract

Joint sparsity regularization in multi-task learning has attracted much attention in recent years. The traditional convex formulation employs the group Lasso relaxation to achieve joint sparsity across tasks. Although this approach leads to a simple convex formulation, it suffers from several issues due to the looseness of the relaxation. To remedy this problem, we view jointly sparse multi-task learning as a specialized random effects model, and derive a convex relaxation approach that involves two steps. The first step learns the covariance matrix of the coefficients using a convex formulation which we refer to as sparse covariance coding; the second step solves a ridge regression problem with a sparse quadratic regularizer based on the covariance matrix obtained in the first step. It is shown that this approach produces an asymptotically optimal quadratic regularizer in the multi-task learning setting when the number of tasks approaches infinity. Experimental results demonstrate that the convex formulation obtained via the proposed model significantly outperforms group Lasso (and related multi-stage formulations).
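The two-step procedure described above lends itself to a short sketch. The following is a minimal illustrative implementation in NumPy, assuming a diagonal coefficient covariance; a simple moment-based variance estimate with soft-thresholding stands in for the paper's convex sparse covariance coding step, and the second step is per-task ridge regression with the quadratic regularizer w'C⁻¹w. The function name and the parameters lam_cov, lam_ridge, and eps are hypothetical, not from the paper.

```python
import numpy as np

def joint_sparse_random_effects(Xs, ys, lam_cov=1e-2, lam_ridge=1.0, eps=1e-8):
    """Two-step estimator sketch (illustrative; not the paper's exact convex program).

    Step 1 -- stand-in for "sparse covariance coding": fit each task by plain
    ridge regression and form a shared diagonal covariance of the coefficients,
    soft-thresholding small per-feature variances to zero (joint sparsity).

    Step 2 -- per-task ridge regression with the sparse quadratic regularizer
    w' C^{-1} w, where C = diag(var) is the covariance from step 1.
    """
    d = Xs[0].shape[1]

    # Step 1: crude per-task coefficient estimates, then a shared diagonal covariance.
    W0 = np.stack([
        np.linalg.solve(X.T @ X + np.eye(d), X.T @ y)   # initial ridge fit per task
        for X, y in zip(Xs, ys)
    ])                                                   # shape: (num_tasks, d)
    var = (W0 ** 2).mean(axis=0)                         # per-feature second moment
    var = np.maximum(var - lam_cov, 0.0)                 # soft-threshold -> joint sparsity

    # Step 2: ridge regression with the sparse quadratic regularizer;
    # features whose variance was thresholded to zero get an (effectively) infinite penalty.
    penalty = lam_ridge / (var + eps)
    W = np.stack([
        np.linalg.solve(X.T @ X + np.diag(penalty), X.T @ y)
        for X, y in zip(Xs, ys)
    ])
    return W, var
```

In this sketch, joint sparsity emerges because features whose shared variance is thresholded to zero are penalized so heavily in step 2 that their coefficients are shrunk to (near) zero in every task simultaneously.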
