Certain embodiments may generally relate to various techniques for machine learning. Feed-forward, fully-connected Artificial Neural Networks (ANNs), also known as Multi-Layer Perceptrons (MLPs), are well-known universal approximators. However, their learning performance may vary significantly depending on the function or solution space that they attempt to approximate. This is because they are based on a loose and crude model of biological neurons, performing only a linear transformation followed by a nonlinear activation function. Therefore, while they may learn very well those problems with a monotonic, relatively simple, and linearly separable solution space, they may entirely fail to do so when the solution space is highly nonlinear and complex. In order to address this drawback, and also to accomplish a more generalized model of biological neurons and learning systems, Generalized Operational Perceptrons (GOPs) may be formed, which may encapsulate many linear and nonlinear operators.
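The contrast above between the fixed linear transformation of an MLP neuron and the selectable operators of a GOP neuron may be illustrated with a minimal sketch. The particular nodal and pool operators shown (`nodal_sine`, `pool_max`) are illustrative assumptions, not a prescribed operator set; the point is only that when the classical multiplication and summation operators are chosen, a GOP neuron reduces to an ordinary perceptron.

```python
import numpy as np

def mlp_neuron(x, w, b):
    # Classical perceptron: linear transformation (weighted sum)
    # followed by a fixed nonlinear activation function.
    return np.tanh(x @ w + b)

def gop_neuron(x, w, b, nodal, pool, activation):
    # Generalized Operational Perceptron (sketch): the fixed
    # multiply/sum pair is replaced by selectable nodal and pool
    # operators, which may be linear or nonlinear.
    return activation(pool(nodal(x, w)) + b)

# Example operator choices (hypothetical, for illustration only).
nodal_mult = lambda x, w: x * w              # classical multiplication
nodal_sine = lambda x, w: np.sin(x * w)      # a nonlinear nodal operator
pool_sum   = lambda z: z.sum(axis=-1)        # classical summation
pool_max   = lambda z: z.max(axis=-1)        # a nonlinear pool operator

x = np.array([0.5, -1.0, 2.0])
w = np.array([0.2, 0.4, -0.1])
b = 0.1

# With multiplication + summation + tanh, the GOP neuron reduces
# exactly to the classical MLP neuron.
assert np.isclose(
    gop_neuron(x, w, b, nodal_mult, pool_sum, np.tanh),
    mlp_neuron(x, w, b),
)
```

Because each neuron may select its operators, a GOP network may fit highly nonlinear solution spaces that a fixed linear-plus-activation model struggles to approximate.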