IEEE International Conference on Automation, Quality, Testing and Robotics

Computational Complexity Reduction of the Support Vector Machine Classifiers for Image Analysis Tasks Through the Use of the Discrete Cosine Transform



Abstract

Support vector machines (SVMs) are powerful classifiers with very good recognition rates in image analysis tasks. However, their computation time in the object recognition phase is often large, owing to the number of classifications per scene and to the feature vector size, especially when the feature space is formed from raw image data. Several methods have been reported in the literature to speed up classification, such as selecting only the most significant support vectors or reducing the feature vector length by image transforms (wavelets, PCA) prior to SVM training and classification. The method we propose is different in principle. Instead of applying the transform prior to training, and thus changing the representation space, we perform a unitary orthogonal real transform only in the classification phase, on the resulting support vectors and on the pattern to be classified. Since the inverse matrices of such transforms are exactly the transposes of the transform matrices, we prove mathematically that the dot product of any two vectors has the same value in the original and the transformed space. Combined with the energy compaction property of a suitable transform, this leads to a faster computation of the dot products, provided the transform has a fast implementation algorithm. We use the discrete cosine transform (DCT) owing to its good energy compaction on digital images. Our first experiments on a face recognition application are promising: at the same recognition rate, our algorithm yields an average 30% reduction in the number of elementary operations per classification.
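The two properties the abstract relies on — dot-product invariance under an orthogonal transform, and energy compaction of the DCT on smooth signals — can be sketched numerically. This is an illustrative sketch only, not the authors' implementation: the signals `x` and `y` stand in for a support vector and a pattern to classify, and the truncation length `k` is an arbitrary choice for demonstration.

```python
import numpy as np
from scipy.fft import dct  # norm='ortho' gives the orthonormal DCT-II

n, k = 256, 32  # feature length and number of retained DCT coefficients

# Two smooth, image-like 1-D feature vectors (illustrative stand-ins for
# a support vector and a pattern to be classified).
t = np.linspace(0.0, 3.0, n)
x = np.exp(-t)                # "support vector"
y = (1.0 - t / 3.0) ** 2      # "pattern"

# With norm='ortho' the DCT matrix C is orthogonal (C^-1 = C^T), so the
# dot product has the same value in the original and transformed spaces.
X, Y = dct(x, norm='ortho'), dct(y, norm='ortho')
exact = x @ y
assert np.isclose(exact, X @ Y)

# Energy compaction: smooth signals concentrate energy in the first few
# DCT coefficients, so a truncated dot product (k multiplications instead
# of n) is already a close approximation of the exact one.
approx = X[:k] @ Y[:k]
rel_err = abs(approx - exact) / abs(exact)
print(f"exact={exact:.6f}  truncated={approx:.6f}  rel_err={rel_err:.2e}")
```

The first assertion is exact up to floating-point error for any orthonormal transform; the quality of the truncated approximation depends on how much signal energy the leading coefficients capture, which is what makes the DCT a natural choice for image data.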

