...
International Journal on Computer Science and Engineering

Security of Data Fragmentation and Replication over Un-trusted Hosts

Abstract

The purpose of this work is to analyze the performance of the back-propagation feed-forward algorithm using different activation functions for the neurons of the hidden and output layers, while varying the number of neurons in the hidden layer. For sample creation, 250 numerals were gathered from 35 people of different ages, both male and female. After binarization, these numerals were combined to form training patterns for the neural network. The network was trained to learn its behavior by adjusting the connection strengths at every iteration. The conjugate gradient descent of each presented training pattern was computed to identify the minima on the error surface for that pattern. Experiments were performed by selecting different combinations of two of the three activation functions logsig, tansig, and purelin for the neurons of the hidden and output layers. The results revealed that as the number of neurons in the hidden layer is increased, the network is trained in fewer epochs, and the percentage recognition accuracy of the network increases up to a certain level and then begins to decrease once the number of hidden neurons exceeds that level.
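The training loop the abstract describes can be sketched as a one-hidden-layer feed-forward network trained by back-propagation. The sketch below is illustrative only: it uses plain batch gradient descent rather than the conjugate-gradient variant the paper uses, a toy XOR dataset in place of the binarized numeral patterns (which are not available here), and activation names borrowed from the paper's logsig/tansig/purelin terminology; all function names and hyperparameters are assumptions.

```python
import numpy as np

# Activation functions paired with their derivatives, expressed in terms
# of the activation's output y (convenient for back-propagation).
ACTIVATIONS = {
    "logsig": (lambda x: 1.0 / (1.0 + np.exp(-x)), lambda y: y * (1.0 - y)),
    "tansig": (np.tanh, lambda y: 1.0 - y ** 2),
    "purelin": (lambda x: x, lambda y: np.ones_like(y)),
}

def train_mlp(X, T, n_hidden, hidden_act="tansig", out_act="logsig",
              lr=0.5, epochs=2000, seed=0):
    """Train a one-hidden-layer feed-forward net with plain backprop.

    Returns a predict function closing over the learned weights.
    (Illustrative sketch; the paper itself uses conjugate gradient descent.)
    """
    rng = np.random.default_rng(seed)
    f_h, df_h = ACTIVATIONS[hidden_act]
    f_o, df_o = ACTIVATIONS[out_act]
    W1 = rng.normal(0.0, 0.5, (X.shape[1], n_hidden))
    b1 = np.zeros(n_hidden)
    W2 = rng.normal(0.0, 0.5, (n_hidden, T.shape[1]))
    b2 = np.zeros(T.shape[1])
    for _ in range(epochs):
        H = f_h(X @ W1 + b1)            # hidden-layer output
        Y = f_o(H @ W2 + b2)            # network output
        delta_o = (Y - T) * df_o(Y)     # output-layer error signal
        delta_h = (delta_o @ W2.T) * df_h(H)
        # Adjust connection strengths at every iteration (gradient step).
        W2 -= lr * H.T @ delta_o
        b2 -= lr * delta_o.sum(axis=0)
        W1 -= lr * X.T @ delta_h
        b1 -= lr * delta_h.sum(axis=0)
    return lambda Xq: f_o(f_h(Xq @ W1 + b1) @ W2 + b2)

# Toy stand-in for the binarized numeral patterns: the XOR problem.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)
predict = train_mlp(X, T, n_hidden=4)
print(predict(X).ravel())
```

Varying `n_hidden` and the `hidden_act`/`out_act` pair mirrors the experimental grid the abstract reports, where accuracy first rises and then falls as hidden neurons are added.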
