Journal: Expert Systems with Applications

A comparative study of the scalability of a sensitivity-based learning algorithm for artificial neural networks


Abstract

Until recently, the most common criterion in machine learning for evaluating the performance of algorithms was accuracy. However, the unrestrained growth in the volume of data in recent years in fields such as bioinformatics, intrusion detection, and engineering has raised new challenges in machine learning concerning not only accuracy but also scalability. In this research, we are concerned with the scalability of one of the best-known paradigms in machine learning, artificial neural networks (ANNs), and in particular with the training algorithm Sensitivity-Based Linear Learning Method (SBLLM). SBLLM is a learning method for two-layer feedforward ANNs based on sensitivity analysis, which computes the weights by solving a system of linear equations. The results show that the SBLLM training algorithm scales better than five of the most popular and efficient training algorithms for ANNs.
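The core idea the abstract mentions, obtaining a layer's weights in closed form by solving a linear system rather than by iterative gradient descent, can be sketched as follows. This is an illustrative toy example only, not the paper's actual SBLLM procedure (which derives weights for both layers via sensitivity analysis); all names and the data setup below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression problem: learn y = sin(x) from 200 one-dimensional samples.
X = rng.uniform(-np.pi, np.pi, size=(200, 1))
y = np.sin(X)

# Two-layer feedforward net: fixed random first-layer weights, tanh activation.
n_hidden = 20
W1 = rng.normal(size=(1, n_hidden))
b1 = rng.normal(size=n_hidden)
H = np.tanh(X @ W1 + b1)  # hidden-layer activations, shape (200, n_hidden)

# Output-layer weights come from the linear system H_aug @ W2 ≈ y,
# solved once by least squares -- no iterative training loop.
H_aug = np.hstack([H, np.ones((H.shape[0], 1))])  # append a bias column
W2, *_ = np.linalg.lstsq(H_aug, y, rcond=None)

pred = H_aug @ W2
mse = float(np.mean((pred - y) ** 2))
print(f"training MSE: {mse:.4f}")
```

Because the expensive step is a single linear solve, the cost grows predictably with the number of samples and hidden units, which is the kind of behavior the paper's scalability comparison examines.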
