Dear editor, Recently, deep learning (DL) has become a hot research topic, and as one of the most well-known DL models, the stacked autoencoder (SAE) [1] has received increasing attention. In SAE, layer-wise pretraining is the basic mechanism for automatic feature extraction; it also helps avoid vanishing gradients when constructing deep architectures.
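The greedy layer-wise pretraining mentioned above can be sketched as follows. This is a minimal illustrative example, not code from the letter: each autoencoder is trained on the codes produced by the previous layer, and only the encoder halves are kept for the stack. The function names (`train_autoencoder`, `pretrain_stack`), the tied-weight sigmoid autoencoder, and the plain gradient-descent update are assumptions chosen for brevity, one common SAE setup among many.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_autoencoder(X, hidden, epochs=200, lr=0.1):
    """Train one sigmoid autoencoder with tied weights on X; return encoder params."""
    n, d = X.shape
    W = rng.normal(0.0, 0.1, (d, hidden))
    b = np.zeros(hidden)   # encoder bias
    c = np.zeros(d)        # decoder bias
    for _ in range(epochs):
        H = sigmoid(X @ W + b)      # encode
        R = sigmoid(H @ W.T + c)    # decode (tied weights W.T)
        # Gradients of 0.5 * ||R - X||^2 through the sigmoids
        dR = (R - X) * R * (1 - R)
        dH = (dR @ W) * H * (1 - H)
        gW = X.T @ dH + dR.T @ H    # W appears in both encoder and decoder
        W -= lr * gW / n
        b -= lr * dH.sum(axis=0) / n
        c -= lr * dR.sum(axis=0) / n
    return W, b

def pretrain_stack(X, layer_sizes):
    """Greedy layer-wise pretraining: layer k trains on the codes of layer k-1."""
    feats, params = X, []
    for h in layer_sizes:
        W, b = train_autoencoder(feats, h)
        params.append((W, b))
        feats = sigmoid(feats @ W + b)  # feed codes to the next layer
    return params, feats

X = rng.random((50, 8))
params, codes = pretrain_stack(X, [6, 4])
print(len(params), codes.shape)
```

After pretraining, the stacked encoders would typically be fine-tuned end-to-end (e.g. with a supervised head), which is where the pretrained weights help gradients flow through the deep architecture.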