Artificial Neural Networks Training Acceleration Through Network Science Strategies

Abstract

Deep Learning has opened artificial intelligence to an unprecedented number of new applications. A critical success factor is the ability to train deeper neural networks, striving for stable and accurate models. This translates into Artificial Neural Networks (ANNs) that become unmanageable as the number of features increases. The novelty of our approach is to employ Network Science strategies to tackle the complexity of the actual ANNs at each epoch of the training process. The work presented herein originates in our earlier publications, where we explored the acceleration effects obtained by enforcing, in turn, scale-freeness, small-worldness, and sparsity during the ANN training process. The efficiency of our approach has also been confirmed recently by independent researchers, who managed to train a million-node ANN on non-specialized laptops. Encouraged by these results, we have now taken a closer look at some tunable parameters of our previous approach in pursuit of a further acceleration effect. Here we investigate the revise fraction parameter, to verify whether its double-check role is actually necessary. Our method is independent of specific machine learning algorithms or datasets, since we operate merely on the topology of the ANNs. We demonstrate that the revise phase can be skipped entirely, halving the overall execution time with an almost negligible loss of quality.
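
To make the topology-level operation concrete, below is a minimal Python sketch of an epoch-wise prune-and-regrow rewiring step with an optional revise (double-check) pass that can be switched off. This is an illustration under assumptions, not the authors' published implementation: the function and parameter names (rewire_layer, prune_fraction, revise_fraction, skip_revise) are hypothetical, and the toy revise pass here only mimics the control flow, not the actual per-epoch cost whose removal halves the execution time.

    import numpy as np

    rng = np.random.default_rng(0)

    def rewire_layer(w, mask, prune_fraction=0.3, revise_fraction=0.1,
                     skip_revise=False):
        # One epoch of prune-and-regrow rewiring on a sparse weight matrix.
        # w: dense weight array; mask: boolean array marking active connections.
        active = np.flatnonzero(mask)
        n_prune = int(prune_fraction * active.size)
        if n_prune == 0:
            return w, mask

        # Rank active weights by magnitude; the weakest n_prune are candidates.
        ranked = active[np.argsort(np.abs(w.flat[active]))]
        candidates = ranked[:n_prune]

        if not skip_revise:
            # Hypothetical "revise" (double-check) pass: spare a fraction of
            # the borderline candidates, i.e. those closest to the cut.
            n_spared = int(revise_fraction * n_prune)
            if n_spared:
                candidates = candidates[:-n_spared]

        mask.flat[candidates] = False
        w.flat[candidates] = 0.0

        # Regrow as many random inactive connections as were pruned, so the
        # overall sparsity level stays constant across epochs.
        inactive = np.flatnonzero(~mask)
        grown = rng.choice(inactive, size=candidates.size, replace=False)
        mask.flat[grown] = True
        w.flat[grown] = rng.normal(0.0, 0.01, size=grown.size)
        return w, mask

    # Toy usage: a 100x100 layer at roughly 90% sparsity, rewired over 5 epochs.
    w = rng.normal(0.0, 0.01, size=(100, 100))
    mask = rng.random((100, 100)) < 0.1
    w = w * mask
    for epoch in range(5):
        # ... gradient updates on the active weights would go here ...
        w, mask = rewire_layer(w, mask, skip_revise=True)

Regrowing exactly as many connections as were pruned keeps the sparsity level constant, so the per-epoch cost depends only on the size of the active topology, independently of the learning algorithm or dataset.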