The human brain has evolved the ability to perform many remarkable computations, including solving difficult mathematical problems and carrying out complex operations. How these abilities are implemented within the brain is poorly understood, but the question is typically tackled by asking what types of computation networks of neurons are capable of. Previous studies have shown that both artificial neural networks (based on multilayer perceptrons, MLPs) and networks of biologically inspired spiking neurons can solve single tasks efficiently, such as character recognition or nonlinear function approximation [1, 5, 6]. There are, however, many nontrivial yet important industrial and physical problems that rely on computations based on iteration and composition. Demonstrations of neural networks solving multiple tasks simultaneously are rare, but the few available examples show that such networks are able to solve functional equations. This suggests that the computational power of neural populations has been underestimated and that their true capabilities are far greater than previously thought. Significantly, previous studies have shown that the solution to a particular class of functional equations, the functional iterative root or half-iterate, is attainable using MLPs and is continuous in nature [3, 4]. Methods that employ networks of spiking neurons have, until now, yielded only piecewise-continuous solutions [2]. Here, we demonstrate that, by exploiting the stochastic or probabilistic nature of spike generation together with population coding, spiking neural networks can learn continuous solutions to the iterative root problem.
Specifically, we show how plasticity, the stochastic nature of neuronal spike generation, and population coding allow spiking neural networks to find continuous solutions to functional equations such as the iterative root of monotonically increasing functions. Significantly, our work expands the foundations of neural-based computation by demonstrating a nontrivial underlying computational principle: robustness through uncertainty.
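To make the target problem concrete, the functional iterative root (half-iterate) of a function g is a function f satisfying f(f(x)) = g(x) on the domain of interest. The following minimal sketch (an illustration of the definition, not the paper's spiking-network method) verifies a known closed-form example: for g(x) = x² on the positive reals, f(x) = x^√2 is an iterative root, since (x^√2)^√2 = x².

```python
import math

def g(x):
    """Target map: g(x) = x^2 on (0, inf)."""
    return x * x

def f(x):
    """Candidate half-iterate: f(x) = x ** sqrt(2)."""
    return x ** math.sqrt(2)

# Check the defining functional equation f(f(x)) = g(x)
# at a few sample points (up to floating-point error).
for x in [0.5, 1.3, 2.0, 7.0]:
    assert abs(f(f(x)) - g(x)) < 1e-9
```

For general monotonically increasing functions no closed form exists, which is why learned approximations, whether by MLPs [3, 4] or by the spiking networks studied here, are of interest.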