Neural networks and deep learning have been inspired by brains, neuroscience, and cognition from the very beginning, starting with distributed representations, neural computation, and hierarchies of learned features. More recent examples include rectifying non-linearities (ReLU), which enable the training of deeper networks, and soft content-based attention, which allows neural nets to go beyond fixed-size vectors and process a variety of data structures, and which led to a breakthrough in machine translation. Ongoing research now suggests that brains may use a process similar to backpropagation to estimate gradients, and new inspiration from cognition suggests how to learn deep representations that disentangle the underlying factors of variation, by allowing agents to intervene in and explore their environment.
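The two mechanisms mentioned above can be illustrated concretely. Below is a minimal NumPy sketch (not any particular system's implementation; all function names are illustrative) of a ReLU non-linearity and a single soft content-based attention read, in which a query produces softmax weights over a variable-length set of key/value vectors:

```python
import numpy as np

def relu(x):
    # Rectifying non-linearity: passes positive values through and
    # zeroes out negatives, which helps gradients flow in deep stacks.
    return np.maximum(0.0, x)

def soft_attention(query, keys, values):
    # Soft content-based attention: a differentiable, weighted read
    # over a set of value vectors, so the input need not be a
    # single fixed-size vector.
    scores = keys @ query                    # similarity of query to each key
    weights = np.exp(scores - scores.max())  # stable softmax
    weights /= weights.sum()                 # attention weights sum to 1
    return weights @ values                  # convex combination of values

rng = np.random.default_rng(0)
keys = rng.normal(size=(5, 4))    # 5 items, key dimension 4
values = rng.normal(size=(5, 3))  # 5 items, value dimension 3
query = rng.normal(size=4)
out = soft_attention(query, keys, values)
print(out.shape)  # (3,)
```

Because the softmax weights vary smoothly with the content of the keys, the whole read is differentiable and can be trained end to end with backpropagation.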