JMLR: Workshop and Conference Proceedings

Learning Invariant Representations with Kernel Warping

Abstract

Invariance is an effective prior that has been extensively used to bias supervised learning with a given representation of data. In order to learn invariant representations, wavelet and scattering based methods "hard-code" invariance over the entire sample space, and are hence restricted to a limited range of transformations. Kernels based on Haar integration also work only on a group of transformations. In this work, we break this limitation by designing a new representation learning algorithm that incorporates invariances beyond transformations. Our approach, which is based on warping the kernel in a data-dependent fashion, is computationally efficient using random features, and leads to a deep kernel through multiple layers. We apply it to convolutional kernel networks and demonstrate its stability.
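
Note: the abstract's claim of computational efficiency rests on random features. As a point of reference, below is a minimal, hypothetical Python sketch of the standard random Fourier feature construction (Rahimi and Recht, 2007) for approximating an RBF kernel. It does not implement the paper's data-dependent kernel warping; the function name, parameters, and the error check are illustrative assumptions only.

    import numpy as np

    def random_fourier_features(X, n_features=512, sigma=1.0, seed=None):
        """Map X (n_samples, d) to features whose inner products approximate
        the RBF kernel k(x, y) = exp(-||x - y||^2 / (2 * sigma^2))."""
        rng = np.random.default_rng(seed)
        d = X.shape[1]
        # Frequencies sampled from the kernel's spectral density, N(0, sigma^-2 I).
        W = rng.normal(scale=1.0 / sigma, size=(d, n_features))
        b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
        return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

    # Usage: Z @ Z.T approximates the exact kernel matrix on a toy sample.
    X = np.random.default_rng(0).normal(size=(100, 5))
    Z = random_fourier_features(X, n_features=2048, seed=0)
    K_exact = np.exp(-((X[:, None] - X[None]) ** 2).sum(-1) / 2.0)  # sigma = 1
    print(np.abs(Z @ Z.T - K_exact).max())  # small approximation error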
