JMLR: Workshop and Conference Proceedings

Generalize Across Tasks: Efficient Algorithms for Linear Representation Learning

Abstract

We present provable algorithms for learning linear representations which are trained in a supervised fashion across a number of tasks. Furthermore, whereas previous methods in the context of multitask learning only allow for generalization within tasks that have already been observed, our representations are both efficiently learnable and accompanied by generalization guarantees to unseen tasks. Our method relies on a certain convex relaxation of a non-convex problem, making it amenable to online learning procedures. We further ensure that a low-rank representation is maintained, and we allow for various trade-offs between sample complexity and per-iteration cost, depending on the choice of algorithm.
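
The abstract outlines the method only at a high level: relax the non-convex low-rank constraint on the matrix of task predictors into a convex one (typically a nuclear-norm ball), then run an online first-order method whose projection step keeps the iterate low-rank. As a rough illustration of that pattern only, here is a minimal Python sketch; the squared loss, the step-size schedule, the radius tau, and the helper names (project_nuclear_ball, online_multitask) are assumptions made for this example, not details taken from the paper.

```python
import numpy as np

def project_nuclear_ball(W, tau):
    """Project W onto {M : ||M||_* <= tau}: take an SVD, then project the
    singular values onto the tau-scaled simplex (this thresholds small
    singular values to zero, keeping the iterate low-rank)."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)  # s is sorted descending
    if s.sum() <= tau:
        return W
    css = np.cumsum(s)
    # Largest index rho with s[rho] > (css[rho] - tau) / (rho + 1).
    rho = np.nonzero(s * np.arange(1, len(s) + 1) > css - tau)[0][-1]
    theta = (css[rho] - tau) / (rho + 1.0)
    return (U * np.maximum(s - theta, 0.0)) @ Vt

def online_multitask(stream, d, num_tasks, tau=5.0, eta=0.1):
    """Online projected gradient descent on W (d x num_tasks), whose columns
    are the per-task linear predictors; the shared nuclear-norm ball serves
    as the convex surrogate for a low-rank shared representation.
    `stream` yields (task_id, x, y) triples, scored with squared loss."""
    W = np.zeros((d, num_tasks))
    for t, (task, x, y) in enumerate(stream, start=1):
        grad = np.zeros_like(W)
        grad[:, task] = (W[:, task] @ x - y) * x  # grad of 0.5 * (w.x - y)^2
        W = project_nuclear_ball(W - eta / np.sqrt(t) * grad, tau)
    return W
```

The per-iteration cost of this sketch is dominated by the full SVD in the projection. Swapping the projection for a Frank-Wolfe step, which needs only the top singular vector pair and adds a rank-one update, is one standard way to trade convergence speed for cheaper iterations, in the spirit of the sample-complexity versus per-iteration-cost trade-offs the abstract mentions.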