International Conference on System Engineering and Technology

Comparison study of neural network and deep neural network on repricing GAP prediction in Indonesian conventional public bank


Abstract

The economic condition of a country is largely determined by the condition of its banks, which is why effective and efficient methods for assessing bank performance are continually sought worldwide. Risk measures are commonly used to analyze the health of a bank, chiefly the repricing gap. This research proposes a deep neural network (DNN), a deep learning algorithm, as a method to predict the value of a bank's repricing gap. The experiments compare the performance of two methods, a neural network trained with standard backpropagation (SB) and a DNN, using ten years of historical data from monthly reports. The DNN uses an autoencoder, an unsupervised algorithm, to initialize the network weights before training. The experimental results show that the DNN outperforms SB: after 30,700 iterations, the MSE obtained by the DNN is considerably lower than that of SB. At high iteration counts the DNN still consistently drives the MSE down, whereas SB begins to plateau at an early point, around 5,700 iterations. The DNN topology used in the experiments is the one that produces the model with the lowest MSE, namely an equal topology in which every hidden layer has the same number of units.
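The paper itself does not include code; the following is a minimal sketch, assuming Keras, of the kind of comparison the abstract describes: a plain standard-backpropagation regressor versus a DNN whose hidden layers are first pre-trained as autoencoders and then fine-tuned, both with an "equal" topology. The data shapes, layer widths, activations, optimizer, and epoch counts are illustrative assumptions and are not taken from the paper.

# Sketch of the SB vs. autoencoder-pretrained DNN comparison (illustrative only).
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 8)).astype("float32")   # e.g. 10 years of monthly features (assumed shape)
y = rng.normal(size=(120, 1)).astype("float32")   # repricing-gap target (synthetic placeholder)

HIDDEN = [16, 16, 16]   # "equal topology": every hidden layer has the same number of units

def build_regressor(input_dim, hidden=HIDDEN):
    m = models.Sequential([layers.Input(shape=(input_dim,))])
    for h in hidden:
        m.add(layers.Dense(h, activation="sigmoid"))
    m.add(layers.Dense(1))                         # linear output for the gap value
    m.compile(optimizer="sgd", loss="mse")
    return m

# 1) Standard backpropagation (SB): random initialization, direct supervised training.
sb = build_regressor(X.shape[1])
sb.fit(X, y, epochs=200, batch_size=16, verbose=0)

# 2) DNN with unsupervised pre-training: each hidden layer is first trained as an
#    autoencoder to reconstruct its input, and the learned encoder weights are
#    used to initialize the corresponding layer of the regressor.
def pretrain_weights(data, hidden=HIDDEN):
    weights, inp = [], data
    for h in hidden:
        ae = models.Sequential([
            layers.Input(shape=(inp.shape[1],)),
            layers.Dense(h, activation="sigmoid", name="enc"),
            layers.Dense(inp.shape[1]),
        ])
        ae.compile(optimizer="sgd", loss="mse")
        ae.fit(inp, inp, epochs=100, batch_size=16, verbose=0)
        enc = ae.get_layer("enc")
        weights.append(enc.get_weights())
        inp = enc(inp).numpy()                     # feed encoded data to the next autoencoder
    return weights

dnn = build_regressor(X.shape[1])
hidden_layers = [l for l in dnn.layers if isinstance(l, layers.Dense)][:len(HIDDEN)]
for layer, w in zip(hidden_layers, pretrain_weights(X)):
    layer.set_weights(w)                           # autoencoder weights as the starting point
dnn.fit(X, y, epochs=200, batch_size=16, verbose=0)

print("SB  MSE:", sb.evaluate(X, y, verbose=0))
print("DNN MSE:", dnn.evaluate(X, y, verbose=0))

On real repricing-gap data, the abstract reports that the pre-trained DNN keeps lowering the MSE at high iteration counts while the plain SB network plateaus early; the epoch counts above are placeholders, not the paper's 30,700 and 5,700 iterations.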
