IEEE Transactions on Neural Networks

$\ell_{p}-\ell_{q}$ Penalty for Sparse Linear and Sparse Multiple Kernel Multitask Learning

Abstract

Recently, there has been much interest in the multitask learning (MTL) problem under the constraint that tasks should share a common sparsity profile. Such a problem can be addressed through a regularization framework where the regularizer induces a joint-sparsity pattern between task decision functions. We follow this principled framework and focus on $\ell_{p}-\ell_{q}$ (with $0 \leq p \leq 1$ and $1 \leq q \leq 2$) mixed norms as sparsity-inducing penalties. Our motivation for addressing such a larger class of penalties is to adapt the penalty to the problem at hand, thus leading to better performance and a better sparsity pattern. For solving the problem in the general multiple kernel case, we first derive a variational formulation of the $\ell_{1}-\ell_{q}$ penalty, which helps us propose an alternating optimization algorithm. Although very simple, this algorithm provably converges to the global minimum of the $\ell_{1}-\ell_{q}$ penalized problem. For the linear case, we extend existing work on accelerated proximal gradient methods to this penalty. Our contribution in this context is to provide an efficient scheme for computing the $\ell_{1}-\ell_{q}$ proximal operator. Then, for the more general case, when $0 < p < 1$, we solve the resulting nonconvex problem through a majorization-minimization approach. The resulting algorithm is an iterative scheme which, at each iteration, solves a weighted $\ell_{1}-\ell_{q}$ sparse MTL problem. Empirical evidence from a toy dataset and from real-world datasets dealing with brain–computer interface single-trial electroencephalogram classification and protein subcellular localization shows the benefit of the proposed approaches and algorithms.
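As a concrete illustration of two of the ingredients named in the abstract, here is a minimal NumPy sketch (our own illustration, not the authors' code). It shows the closed form that the $\ell_{1}-\ell_{q}$ proximal operator takes in the special case $q = 2$ (block soft-thresholding), and the per-group reweighting step that a majorization-minimization scheme for $p < 1$ would use; the function names `prox_l1_l2` and `mm_group_weights` and the smoothing constant `eps` are our choices.

```python
import numpy as np

def prox_l1_l2(W, lam, eps=1e-12):
    """Proximal operator of lam * sum_g ||W[g, :]||_2, i.e., the
    l1-lq mixed norm in the special case q = 2 (block soft-thresholding).

    W holds one feature (group) per row and one task per column, so
    zeroing an entire row discards that feature jointly across all
    tasks -- the joint-sparsity pattern described in the abstract.
    """
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    # Shrink each row's norm by lam; rows with norm <= lam become zero.
    return np.maximum(0.0, 1.0 - lam / np.maximum(norms, eps)) * W

def mm_group_weights(W, p, q=2.0, eps=1e-8):
    """One reweighting step of a majorization-minimization scheme for
    the nonconvex lp-lq penalty with 0 < p < 1.

    Since t -> t**p is concave for t >= 0, linearizing it around the
    current group norms majorizes the penalty, so each MM iteration
    reduces to a weighted l1-lq problem with per-group weights
    p * (||W[g, :]||_q + eps)**(p - 1); eps guards against division
    by zero for groups that are already exactly zero.
    """
    norms = np.linalg.norm(W, ord=q, axis=1)
    return p * (norms + eps) ** (p - 1.0)
```

Smaller group norms receive larger weights, so features that look inactive are penalized more heavily at the next iteration, which is what drives the extra sparsity of the $p < 1$ penalties.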