In this paper, we investigate the problem of recognizing the gender of people in photos from clothing information rather than faces, targeting cases where face information is insufficient. Mirroring the human intuition of inferring a person's gender from his or her dress, we formulate the task as a binary classification problem over features extracted from semantic regions of clothing. Given a query image, we first apply category-level clothing parsing to divide the clothes into several semantic regions, such as blazers, shirts, and jeans. From each region, we obtain a local estimate of gender by classifying color, texture, and shape features as mid-level attributes. We then leverage an offline-learned Mahalanobis distance metric over these mid-level attributes to yield a final gender prediction. Finally, we evaluate our method on a newly proposed dataset and compare it with state-of-the-art methods based on face information.
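For reference, the offline-learned Mahalanobis metric used in the final prediction step can be written in its standard form below; the symbols are illustrative (not the paper's own notation), with $\mathbf{x}_i, \mathbf{x}_j$ denoting mid-level attribute vectors and $M$ the learned positive semi-definite matrix:

$$
d_M(\mathbf{x}_i, \mathbf{x}_j) = \sqrt{(\mathbf{x}_i - \mathbf{x}_j)^{\top} M \,(\mathbf{x}_i - \mathbf{x}_j)}, \qquad M \succeq 0 .
$$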