Unsupervised learning algorithms are typically concerned with identifying unspecified structure underlying a set of data patterns. They do so by converting the patterns into internal representations of a neural network. While the known learning schemes follow different approaches to this task, we view the learning of representations as a framework encompassing these network models. Judging the quality of a learned representation as an aid to understanding data is difficult, but an adequate representation is arguably one that allows the original input data to be reconstructed under the control of a synthetic model. We therefore argue that such synthetic models are essential. Here, we review and compare three models for the unsupervised learning of representations together with their synthetic (or generative) counterparts, which invert the process of creating representations.
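To make the idea of a representation paired with a synthetic model concrete, the following is a minimal sketch (not one of the three models reviewed here): PCA serves as the simplest instance, where the projection onto principal directions yields the internal representation and the transposed projection acts as a linear synthetic model that reconstructs the input from its code. All data and dimensions are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 patterns in 5-D that lie exactly on a 2-D linear subspace.
basis = rng.normal(size=(2, 5))
X = rng.normal(size=(200, 2)) @ basis

# Unsupervised representation: project onto the top-2 principal directions.
# The decoder (components transposed) plays the role of the synthetic model,
# mapping internal codes back to the input space.
_, _, Vt = np.linalg.svd(X, full_matrices=False)
components = Vt[:2]            # encoder: 5-D input -> 2-D code
codes = X @ components.T       # internal representation
X_hat = codes @ components     # reconstruction via the synthetic model

# Since the data truly lie on a 2-D subspace, the synthetic model
# reconstructs the inputs up to numerical precision.
mse = float(np.mean((X_hat - X) ** 2))
print(mse < 1e-10)
```

The same encode/decode structure, with nonlinear mappings, underlies the generative models discussed in the sequel: a representation is judged adequate precisely when its synthetic model can regenerate the data that produced it.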