International Conference on Machine Learning

Discriminatively Activated Sparselets

Abstract

Shared representations are highly appealing due to their potential for gains in computational and statistical efficiency. Compressing a shared representation leads to greater computational savings, but can also severely decrease performance on a target task. Recently, sparselets (Song et al., 2012) were introduced as a new shared intermediate representation for multiclass object detection with deformable part models (Felzenszwalb et al., 2010a), showing significant speedup factors, but with a large decrease in task performance. In this paper we describe a new training framework that learns which sparselets to activate in order to optimize a discriminative objective, leading to larger speedup factors with no decrease in task performance. We first reformulate sparselets in a general structured output prediction framework, then analyze when sparselets lead to computational efficiency gains, and lastly show experimental results on object detection and image classification tasks. Our experimental results demonstrate that discriminative activation substantially outperforms the previous reconstructive approach; together with our structured output prediction formulation, this makes sparselets broadly applicable and significantly more effective.
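
The speedup mechanism behind sparselets can be illustrated with a minimal sketch: each part filter is approximated as a sparse linear combination of a small shared dictionary of sparselets, so the expensive convolutions are computed once for the dictionary and every filter's response map is then recovered as a cheap combination of the precomputed maps. The sketch below is not the authors' implementation; the array names and sizes are hypothetical, a single feature channel stands in for HOG features, and a dense matrix product stands in for the sparse one used in practice.

```python
import numpy as np
from scipy.signal import correlate2d

def sparselet_responses(feature_map, dictionary, activations):
    """Approximate per-filter response maps via a shared sparselet dictionary.

    feature_map : (H, W) feature channel (stand-in for a multi-channel HOG map).
    dictionary  : (K, h, w) shared sparselet filters, convolved only once.
    activations : (num_filters, K) sparse coefficients; row i approximates
                  filter w_i as sum_k activations[i, k] * dictionary[k].
    """
    # Expensive step, shared across all classes and filters:
    # one correlation per sparselet instead of one per filter.
    sparselet_maps = np.stack(
        [correlate2d(feature_map, d, mode="valid") for d in dictionary]
    )  # shape (K, H', W')

    # Cheap step: each filter's response map is a linear combination of the
    # precomputed sparselet maps; with sparse activations this costs only a
    # few multiply-adds per filter per location.
    K = dictionary.shape[0]
    flat = sparselet_maps.reshape(K, -1)
    responses = activations @ flat  # (num_filters, H'*W')
    return responses.reshape(activations.shape[0], *sparselet_maps.shape[1:])

# Toy usage: 500 filters share a dictionary of 128 sparselets,
# with roughly 10 nonzero activations per filter.
rng = np.random.default_rng(0)
feat = rng.standard_normal((64, 64))
D = rng.standard_normal((128, 6, 6))
A = np.zeros((500, 128))
for i in range(500):
    idx = rng.choice(128, size=10, replace=False)
    A[i, idx] = rng.standard_normal(10)

resp = sparselet_responses(feat, D, A)
print(resp.shape)  # (500, 59, 59)
```

In the reconstructive formulation of Song et al. (2012), the activation matrix is fit by sparse coding of the already-trained filters; the discriminative activation proposed in this paper instead learns which sparselets each filter activates so as to optimize the task objective directly, which is what allows larger speedup factors without a drop in task performance.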
