This disclosure relates generally to a system and a method for mitigating generalization loss in deep neural networks for time series classification. In an embodiment, the disclosed method includes computing an entropy of a time series training dataset, computing a mean and a variance of the entropy, and computing a regularization factor. A plurality of iterations is performed to dynamically adjust the learning rate of the deep neural network (DNN) using a Mod-Adam optimization and obtain a network parameter, and, based on the network parameter, the regularization factor is updated to obtain an updated regularization factor. The learning rate is adjusted across the plurality of iterations by repeatedly updating the network parameter based on a variation of a generalization loss during the iterations. The updated regularization factor of a current iteration is used for adjusting the learning rate in a subsequent iteration of the plurality of iterations.
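The flow described above can be sketched as follows. This is a minimal illustrative sketch, not the disclosed implementation: the histogram-based Shannon entropy, the particular form of the regularization factor (`lam = var / mu`), and the multiplicative learning-rate and factor updates driven by the change in generalization loss are all assumptions chosen to make the loop concrete, since the disclosure does not specify these formulas.

```python
import numpy as np

def series_entropy(x, bins=16):
    """Shannon entropy of one time series via a value histogram (assumed estimator)."""
    counts, _ = np.histogram(x, bins=bins)
    p = counts / counts.sum()
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def init_regularization(dataset, bins=16):
    """Entropy mean/variance over the training set and an initial regularization factor.

    The ratio var/mu is a hypothetical choice; the disclosure only states that the
    factor is computed from the entropy statistics.
    """
    ents = np.array([series_entropy(s, bins) for s in dataset])
    mu, var = ents.mean(), ents.var()
    lam = var / (mu + 1e-8)
    return mu, var, lam

def adjust_learning_rate(lr, lam, gen_loss_prev, gen_loss_curr):
    """One iteration of the dynamic adjustment (assumed update rules).

    Shrink the learning rate when the generalization loss grows, relax it when
    the loss shrinks, and update the regularization factor for the next iteration.
    """
    delta = gen_loss_curr - gen_loss_prev
    if delta > 0:
        lr = lr / (1.0 + lam)          # generalization loss rose: damp the step size
    else:
        lr = lr * (1.0 + 0.1 * lam)    # generalization loss fell: cautiously grow it
    lam = max(lam * (1.0 + delta), 0.0)  # updated factor feeds the next iteration
    return lr, lam
```

In use, `adjust_learning_rate` would wrap the base (Mod-Adam-style) optimizer step each iteration, with `gen_loss` taken as, e.g., the gap between validation and training loss; the updated `lam` from the current iteration is the one applied in the next, matching the ordering stated in the abstract.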