Journal: PLoS One

An approach on the implementation of full batch, online and mini-batch learning on a Mamdani based neuro-fuzzy system with center-of-sets defuzzification: Analysis and evaluation about its functionality, performance, and behavior

Abstract

Due to rapid technological evolution and broad access to communications, data generated from different sources of information grows exponentially. That is, the volume of data samples that needs to be analyzed keeps getting larger, so the methods that process it have to adapt to this condition, focusing mainly on keeping the computation efficient, especially when the analysis tools are based on computational intelligence techniques. Without good control over the volume of data being handled, techniques based on iterative learning processes can impose an excessive computational load and take a prohibitive amount of time to reach a solution that may still fall short of the desired one. The learning methods known as full batch, online, and mini-batch are a good strategy for this problem, since they organize the processing of data according to the size or volume of the available data samples that require analysis. In this first approach, synthetic datasets of small and medium volume were used, since the main objective is to define the implementation and, in the experimentation phase, to obtain information through regression analysis that allows us to assess the performance and behavior of the different learning methods under distinct conditions. To carry out this study, a Mamdani-based neuro-fuzzy system with center-of-sets defuzzification and support for multiple inputs and outputs was designed and implemented, with the flexibility to use any of the three learning methods, which were implemented within the training process. Finally, the results show that mini-batch learning performed best when compared to the full batch and online learning methods, with a mean correlation coefficient R̄ of 0.8268 and a mean coefficient of determination R̄² of 0.7444; it was also the method with the best control of the dispersion among the results obtained from the 30 experiments executed per dataset.
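The abstract contrasts full batch, online, and mini-batch learning as three ways of scheduling parameter updates during training. The paper's own code is not reproduced on this page; the following is a minimal, hedged sketch in Python/NumPy of how the three regimes differ for a generic gradient-trained model. The function names (train, grad_fn, lsq_grad) and the toy least-squares example are illustrative assumptions, not the authors' neuro-fuzzy implementation.

```python
import numpy as np

def train(params, grad_fn, X, y, lr=0.01, epochs=50, batch_size=None, seed=0):
    """Generic gradient-descent loop (illustrative sketch only).

    batch_size=None     -> full batch: one update per epoch using all samples
    batch_size=1        -> online: one update per sample
    1 < batch_size < N  -> mini-batch: one update per block of samples
    """
    rng = np.random.default_rng(seed)
    params = np.asarray(params, dtype=float).copy()
    n = len(X)
    size = n if batch_size is None else batch_size
    for _ in range(epochs):
        order = rng.permutation(n)                # reshuffle samples each epoch
        for start in range(0, n, size):
            idx = order[start:start + size]       # samples feeding this update
            params -= lr * grad_fn(params, X[idx], y[idx])
    return params

# Toy usage: least-squares gradient for a linear model y ~ X @ w.
def lsq_grad(w, Xb, yb):
    return 2.0 * Xb.T @ (Xb @ w - yb) / len(yb)

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
y = X @ np.array([1.5, -2.0, 0.5])

w_full   = train(np.zeros(3), lsq_grad, X, y)                 # full batch
w_online = train(np.zeros(3), lsq_grad, X, y, batch_size=1)   # online
w_mini   = train(np.zeros(3), lsq_grad, X, y, batch_size=32)  # mini-batch
```

The only difference between the three regimes is how many samples feed each update: all of them (full batch), one at a time (online), or a fixed-size block (mini-batch), which is what lets mini-batch trade per-update cost against gradient noise.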
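The system described in the abstract uses center-of-sets defuzzification to turn rule activations into crisp outputs. As a hedged sketch of that general idea only (the standard center-of-sets formula from the fuzzy literature, not the authors' multi-input multi-output implementation), each crisp output is the firing-strength-weighted average of the consequent-set centers; all names below (cos_defuzzify, firing_strengths, centers) are assumptions for illustration.

```python
import numpy as np

def cos_defuzzify(firing_strengths, centers):
    """Center-of-sets defuzzification, general form (illustrative only).

    firing_strengths: shape (n_rules,)        -- rule activation levels
    centers:          shape (n_rules, n_out)  -- centers of the consequent sets,
                                                 one column per output variable
    Returns one crisp value per output: sum_l f_l * c_l / sum_l f_l.
    """
    f = np.asarray(firing_strengths, dtype=float)
    c = np.asarray(centers, dtype=float)
    return f @ c / np.sum(f)   # assumes at least one rule fires (sum > 0)

# Toy example with 3 rules and 2 outputs.
f = np.array([0.2, 0.7, 0.1])
c = np.array([[1.0, 10.0],
              [2.0, 20.0],
              [3.0, 30.0]])
print(cos_defuzzify(f, c))     # approximately [1.9, 19.0]
```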
