IEEE Transactions on Fuzzy Systems

A Refined Fuzzy Min–Max Neural Network With New Learning Procedures for Pattern Classification



Abstract

The fuzzy min-max (FMM) neural network stands as a useful model for solving pattern classification problems. FMM has many important features, such as online learning and one-pass learning. It, however, has certain limitations, especially in its learning algorithm, which consists of the expansion, overlap test, and contraction procedures. This article proposes a refined fuzzy min-max (RFMM) neural network with new procedures for tackling the key limitations of FMM. RFMM has a number of contributions. First, a new expansion procedure for overcoming the problems of overlap leniency and irregularity of hyperbox expansion is introduced. It avoids the overlap cases between hyperboxes from different classes, reducing the number of overlap cases to one (containment case). Second, a new formula that simplifies the original rules in the overlap test is proposed. It has two important features: (i) identifying the overlap leniency problem during the expansion procedure; (ii) activating the contraction procedure to eliminate the containment case. Third, a new contraction procedure for overcoming the data distortion problem and providing more accurate decision boundaries for the contracted hyperboxes is proposed. Fourth, a new prediction strategy that combines both membership function and distance measure to prevent any possible random decision-making during the test stage is proposed. The performance of RFMM is evaluated with the UCI benchmark datasets. The results demonstrate the effectiveness of the proposed modifications in making RFMM a useful model for solving pattern classification problems, as compared with other existing FMM and non-FMM classifiers.
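To make the hyperbox mechanics behind these procedures concrete, the following is a minimal sketch of the classic FMM membership function and expansion test (Simpson's original formulation, not the refined procedures proposed in this article); the variable names and the theta/gamma values are illustrative assumptions.

```python
import numpy as np

def fmm_membership(x, v, w, gamma=1.0):
    """Classic FMM membership of input x (in [0, 1]^n) in the hyperbox
    with min point v and max point w; gamma controls the fuzziness slope."""
    right = np.maximum(0, 1 - np.maximum(0, gamma * np.minimum(1, x - w)))
    left = np.maximum(0, 1 - np.maximum(0, gamma * np.minimum(1, v - x)))
    return np.mean((right + left) / 2.0)

def can_expand(x, v, w, theta=0.3):
    """Classic FMM expansion test: the hyperbox may grow to absorb x only
    if the expanded edge lengths sum to at most theta times the dimension."""
    return np.sum(np.maximum(w, x) - np.minimum(v, x)) <= theta * x.size

def expand(x, v, w):
    """Grow the hyperbox min/max points just enough to contain x."""
    return np.minimum(v, x), np.maximum(w, x)

# Example: a single hyperbox absorbing a nearby pattern.
v, w = np.array([0.2, 0.2]), np.array([0.4, 0.4])
x = np.array([0.45, 0.3])
print(fmm_membership(x, v, w))   # membership degree before expansion
if can_expand(x, v, w, theta=0.3):
    v, w = expand(x, v, w)       # hyperbox now contains x
```

In the classic algorithm, an overlap test and a contraction step follow each expansion to keep hyperboxes of different classes from overlapping; the article's refinements target exactly these steps.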
