The number of object classes is increasing rapidly, making the recognition of new classes difficult. Zero-shot learning aims to predict the labels of samples from new classes by using seen-class samples and their semantic representations. In this paper, we propose a simple method to learn the unseen visual prototypes (LUVP) by learning a projection function from the semantic space to the visual feature space, which reduces the hubness problem. We exploit class-level rather than instance-level samples, which alleviates the expensive computational cost. Because the seen and unseen classes are disjoint, directly applying the projection function to unseen samples causes a domain shift problem. We therefore preserve the semantic correlations among unseen labels and adjust the unseen visual prototypes to minimize the domain shift. We demonstrate through extensive experiments that the proposed method (1) alleviates the hubness problem, (2) overcomes the domain shift problem, and (3) significantly outperforms existing methods for zero-shot classification on five benchmark datasets.
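To illustrate the overall idea of mapping class semantics into the visual feature space and classifying by nearest prototype, the following is a minimal sketch, not the authors' implementation: it assumes a linear projection learned by ridge regression from seen-class semantic vectors to class-level visual prototypes (e.g., class-mean features), and all names (S_seen, V_seen, S_unseen, lam) are illustrative assumptions.

```python
import numpy as np

def learn_projection(S_seen, V_seen, lam=1.0):
    """Learn W mapping semantic space -> visual feature space.
    S_seen: (C_seen, d_s) seen-class semantic vectors.
    V_seen: (C_seen, d_v) seen-class visual prototypes (e.g., mean features).
    Uses class-level statistics only, so the cost does not grow with the
    number of training instances."""
    d_s = S_seen.shape[1]
    # Closed-form ridge regression: W = (S^T S + lam I)^{-1} S^T V
    return np.linalg.solve(S_seen.T @ S_seen + lam * np.eye(d_s),
                           S_seen.T @ V_seen)

def predict_unseen_prototypes(S_unseen, W):
    """Project unseen-class semantics into the visual space to obtain
    candidate prototypes (before any domain-shift adjustment)."""
    return S_unseen @ W

def classify(x, prototypes):
    """Nearest-prototype classification of a visual feature x, performed in
    the visual space (the semantic-to-visual direction associated with
    reduced hubness)."""
    dists = np.linalg.norm(prototypes - x, axis=1)
    return int(np.argmin(dists))
```

In this sketch the prototypes returned by `predict_unseen_prototypes` would still be subject to domain shift; the paper's adjustment step, which exploits semantic correlations among unseen labels, is not shown here.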