Journal of signal processing systems for signal, image, and video technology

Robust and Efficient Pattern Classification using Large Geometric Margin Minimum Classification Error Training



Abstract

Recently, one of the standard discriminative training methods for pattern classifier design, Minimum Classification Error (MCE) training, has been revised; its new version is called Large Geometric Margin Minimum Classification Error (LGM-MCE) training. It is formulated by replacing the conventional misclassification measure, which is equivalent to the so-called functional margin, with a geometric margin that represents the geometric distance between an estimated class boundary and its closest training pattern sample. It seeks the setting of the trainable classifier parameters that simultaneously corresponds to the minimum of the empirical average classification error count loss and the maximum of the geometric margin. Experimental evaluations showed the fundamental utility of LGM-MCE training. However, to increase its effectiveness, this new training required careful setting of hyperparameters, especially the smoothness degree of the smooth classification error count loss. Exploring the smoothness degree usually requires many trial-and-error repetitions of training and testing, and such burdensome repetition does not necessarily lead to an optimal smoothness setting. To alleviate this problem and further increase the benefit of employing the geometric margin, we apply in this paper a new idea that automatically determines the loss smoothness of LGM-MCE training. We first introduce a new formalization of LGM-MCE training using the Parzen estimation of the error count risk, and then formalize a version that incorporates a mechanism for automatic loss smoothness determination. Importantly, the geometric-margin-based misclassification measure adopted in LGM-MCE training is directly linked with the geometric margin in the pattern sample space. Based on this relation, we also prove that the loss smoothness affects the production of virtual samples along the estimated class boundaries in the pattern sample space.
Finally, through experimental evaluations and comparisons with other training methods, we elaborate on the characteristics of LGM-MCE training and its new function that automatically determines an appropriate loss smoothness degree.
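The abstract contrasts the conventional functional-margin misclassification measure with its geometric-margin counterpart and mentions the smooth classification error count loss whose smoothness degree the paper determines automatically. The following is a minimal, hedged sketch of these quantities for a hypothetical two-class linear classifier (the paper's actual classifier and formulation may differ; all function names here are illustrative):

```python
import numpy as np

def misclassification_measures(x, y, W, b):
    """For a linear discriminant g_k(x) = W[k] . x + b[k], return
    (d_func, d_geo):
    d_func - functional-margin misclassification measure: best rival
             score minus correct-class score (negative iff x is
             correctly classified);
    d_geo  - geometric-margin version: d_func scaled so that its
             magnitude is the Euclidean distance from x to the
             estimated boundary between the correct class and its
             best rival."""
    scores = W @ x + b
    rivals = [k for k in range(len(b)) if k != y]
    r = max(rivals, key=lambda k: scores[k])   # best rival class
    d_func = scores[r] - scores[y]
    d_geo = d_func / np.linalg.norm(W[r] - W[y])
    return d_func, d_geo

def smooth_error_loss(d, alpha):
    """Sigmoid-smoothed 0-1 error count loss; alpha plays the role of
    the smoothness degree discussed in the abstract (larger alpha ->
    closer to the hard 0-1 error count)."""
    return 1.0 / (1.0 + np.exp(-alpha * d))

# Example: boundary x1 = 0 separates the two classes.
W = np.array([[1.0, 0.0], [-1.0, 0.0]])
b = np.zeros(2)
d_func, d_geo = misclassification_measures(np.array([2.0, 0.0]), 0, W, b)
# d_func = -4.0; d_geo = -2.0, i.e. the sample sits distance 2 on the
# correct side of the boundary.
```

Averaging `smooth_error_loss` over training samples gives the smoothed empirical error count loss; as the abstract notes via the Parzen view, this average can be read as a kernel-smoothed estimate of the error count risk, with the smoothness degree acting like a kernel bandwidth.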
机译:最近,对用于模式分类器设计的标准判别训练方法之一,即最小分类误差(MCE)训练进行了修订,其新版本称为大几何余量最小分类误差(LGM-MCE)训练。它是通过用表示估计的类边界与其最接近的训练模式样本之间的几何距离的几何余量来代替等效于所谓的功能余量的常规误分类度量来制定的。它寻找可训练的分类器参数的状态,这些参数同时对应于经验平均分类错误计数损失的最小值和几何余量的最大值。实验评估表明LGM-MCE训练的基本用途。但是,为了提高其有效性,此新训练需要仔细设置超参数,尤其是平滑分类错误计数损失的平滑度。探索平滑度通常需要进行多次反复的训练和测试,这种繁琐的重复不一定会导致最佳的平滑度设置。为了缓解此问题并进一步提高几何余量就业的效果,我们在本文中应用了一种新的思想,该思想自动确定LGM-MCE训练的损失平滑度。我们首先使用Parzen估计错误计数风险来对其进行新的形式化,然后对LGM-MCE训练进行形式化,该训练结合了自动损失平滑度确定机制。重要的是,LGM-MCE训练中采用的基于几何边距的误分类度量与模式样本空间中的几何边距直接相关。基于此关系,我们还证明损失平滑度会影响模式样本空间中沿估计类边界的虚拟样本的生成。最后,通过实验评估并与其他训练方法进行比较,我们阐述了LGM-MCE训练的特点及其新功能,该功能可以自动确定适当的损失平滑度。

