International Conference on System Engineering and Technology

Comparison study of neural network and deep neural network on repricing GAP prediction in Indonesian conventional public bank



Abstract

The economic condition of a country is largely determined by the condition of its banks, which is why effective and efficient methods for assessing bank performance are sought worldwide. Risk measures are commonly used to analyse a bank's health, chiefly the repricing gap. This research proposes a deep neural network (DNN), a deep learning method, for predicting a bank's repricing gap. The experiments compare the performance of two methods, a neural network trained with standard backpropagation (SB) and a DNN, using ten years of historical data from monthly reports. The DNN uses an autoencoder, an unsupervised algorithm, to initialise the network weights before gradient-based training. The experimental results show that the DNN outperforms SB: over 30,700 iterations, the DNN reaches a lower MSE than SB. At high iteration counts the DNN keeps driving the MSE down, whereas SB begins to plateau at an early point, around iteration 5,700. The DNN topology used in the experiments is the one that produced the model with the lowest MSE, namely an equal topology (every hidden layer has the same number of units).
