電子情報通信学会技術研究報告 (IEICE Technical Report)

Non-sparse Feature Mixing in Object Classification



Abstract

Recent research has shown that combining various image features significantly improves object classification performance. Multiple kernel learning (MKL) approaches, where the mixing weights at the kernel level are optimized simultaneously with the classifier parameters, provide a well-founded framework for controlling the importance of each feature. As an alternative, boosting approaches can be used, where single-kernel classifier outputs are combined with optimal mixing weights. Most of these approaches employ an ℓ1-regularization on the mixing weights, which promotes sparse solutions. Although sparsity offers several advantages, e.g., interpretability and less computation time in the test phase, the accuracy of sparse methods is often even worse than that of the simplest uniform-weight combination. In this paper, we compare the accuracy of our recently developed non-sparse methods with the standard sparse counterparts on the PASCAL VOC 2008 data set.
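The kernel-level mixing the abstract refers to can be illustrated with a minimal numpy sketch (not the authors' implementation): several base kernels K_m are combined as K = Σ_m β_m K_m, here with the flat (uniform, maximally non-sparse) weights β_m = 1/M used as the baseline in the abstract, and the combined kernel is fed to a simple kernel ridge classifier on toy data. The data, bandwidths, and ridge parameter are all illustrative assumptions.

```python
import numpy as np

def rbf_kernel(X, Y, gamma):
    # Gaussian (RBF) kernel from squared Euclidean distances.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def combine_kernels(kernels, weights):
    # Kernel-level mixing: K = sum_m beta_m * K_m.
    return sum(w * K for w, K in zip(weights, kernels))

# Toy two-class data: two Gaussian blobs (stand-in for image features).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 0.5, (20, 2)), rng.normal(1, 0.5, (20, 2))])
y = np.array([-1] * 20 + [1] * 20)

# Two base kernels, e.g. two RBF bandwidths playing the role of two features.
kernels = [rbf_kernel(X, X, g) for g in (0.5, 2.0)]

# Flat (uniform) mixing weights: beta_m = 1/M, the non-sparse baseline.
beta = [0.5, 0.5]
K = combine_kernels(kernels, beta)

# Kernel ridge regression on +/-1 labels as a simple stand-in classifier;
# MKL would instead optimize beta jointly with the classifier parameters.
alpha = np.linalg.solve(K + 1e-3 * np.eye(len(y)), y)
pred = np.sign(K @ alpha)
accuracy = (pred == y).mean()
```

An ℓ1-regularized MKL would typically drive some β_m to exactly zero (a sparse mixture), whereas non-sparse variants (e.g. ℓ2-regularized weights) keep all kernels active with graded importance.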
