International Conference on Artificial Neural Networks

Hyper Neuron - One Neuron with Infinite States



Abstract

Neural networks learn by creating a decision boundary, whose shape and smoothness are ultimately determined by the activation function and the architecture. As the number of neurons in an artificial neural network increases to infinity, the decision function becomes smooth. A network with an infinite number of neurons cannot be implemented with finite resources, but its behavior can be modeled to an arbitrary degree of precision using standard numerical techniques. We name the resulting model the Hyper Neuron. A flexible characteristic function controls the rate of variation in the weights of these neurons. The Hyper Neuron requires no assumptions about the parameter distribution; it uses a numerical methodology, in contrast with previous work (such as infinite neural networks) that relies on distributional assumptions. In the classical model of a neuron, each neuron has a single state and output, determined by the input, the weights, and the bias. Consider instead a neuron with more than one distinct output for the same input. A layer made of an infinite number of such neurons can be modeled as a single neuron with an infinite number of states and an infinite weight field. This kind of neuron is called a "Hyper Neuron", indicated by the symbol 1. Now consider the independent variable x over which the Hyper Neuron is defined, i.e., the function 1(x) is defined over the space x ∈ R^N. To model the data so that it represents the target distribution, we use weighted inputs and non-linearity functions, where the weights are not vectors but multidimensional functions f̃_ch^(k,k+1)(x_k, x_{k+1}; p_{k+1}) that define the weight field between two Hyper Neurons when the input is given by another Hyper Neuron. This function simplifies to f̃_ch^(i_k,k+1)(x_{k+1}; p_{k+1}) when the input is a feature space or a conventional layer.
In these equations, k is the previous layer, k + 1 is the layer containing the Hyper Neuron, i_k is the i-th element of the previous layer, and p represents the parameters of the weight field function. Hyper Neurons follow naturally from numerical models of an infinite number of conventional neurons in a single layer, and the associated weight fields are described by characteristic functions. To validate this idea, experiments used sinusoidal functions for the weight fields, because including their frequency as a parameter learned from the input data allows the weights to vary rapidly. The proposed model was compared against a conventional model containing up to 7 neurons.
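The discretised picture behind these definitions can be sketched roughly as follows. This is a hypothetical illustration, not the authors' implementation: the Hyper Neuron's continuum of states is approximated on a finite grid over x, the weight field between a conventional input layer and the Hyper Neuron is taken to be sinusoidal, and the names (`hyper_neuron_forward`, the amplitude/frequency/phase parameters standing in for p) are assumptions made for the sketch.

```python
import numpy as np

def hyper_neuron_forward(inputs, x_grid, amp, freq, phase):
    """Evaluate a Hyper Neuron's output on a grid of states x.

    inputs : conventional feature vector, one value per input element i_k
    x_grid : discretisation of the Hyper Neuron's state variable x
    amp, freq, phase : parameters of a sinusoidal weight field (the
        characteristic-function parameters p, hypothetical here)
    """
    # Treat the index i_k of each input element as its coordinate, and
    # evaluate the weight field f(i_k, x; p) on the (input, state) grid.
    idx = np.arange(len(inputs), dtype=float)
    field = amp * np.sin(freq * np.outer(idx, x_grid) + phase)  # (n_in, n_states)
    # Weighted sum over input elements, then a pointwise non-linearity:
    # one output value per discretised state x, i.e. "infinite" outputs
    # in the limit of a fine grid.
    return np.tanh(inputs @ field)

# Usage: 3 conventional inputs feeding a Hyper Neuron sampled at 101 states.
x_grid = np.linspace(0.0, 1.0, 101)
out = hyper_neuron_forward(np.array([0.5, -0.2, 0.3]), x_grid, 1.0, 2.0, 0.0)
```

Refining `x_grid` plays the role of adding neurons to the layer, which is why the finite-neuron limit can be approached to arbitrary precision with standard numerical techniques.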
