A light field records the four-dimensional information of light rays, i.e., their position and direction, in which depth information is implicitly encoded. To improve depth estimation accuracy, we propose a depth estimation algorithm based on a convolutional neural network (CNN). First, a single-image super-resolution algorithm is adopted to spatially super-resolve the sub-aperture images (SAIs). Second, to adapt to texture complexity, the SAIs are partitioned into two regions, i.e., a simple texture region and a complex texture region, based on a texture analysis of the central SAI. Third, the epipolar plane images (EPIs) in the horizontal, vertical, 45-degree diagonal, and 135-degree diagonal directions are extracted for both regions, and the EPIs of the simple and complex texture regions are fed into their dedicated network branches. Finally, a fusion module is designed to generate the depth map. Experimental results show that the depth maps estimated by the proposed method surpass those of state-of-the-art methods in both objective and subjective quality. Moreover, the proposed method is more robust to noise.
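To make the EPI extraction step concrete, the sketch below slices directional EPIs from a 4D light field array. This is a minimal illustration, not the paper's implementation: the array layout `(v, u, y, x)`, the 9×9 angular resolution, and the helper names are assumptions, and the diagonal case is simplified to stacking rows along the angular diagonal `u == v`.

```python
import numpy as np

# Hypothetical 4D light field with axes (v, u, y, x):
# vertical/horizontal angular, then vertical/horizontal spatial.
rng = np.random.default_rng(0)
lf = rng.random((9, 9, 64, 64))  # 9x9 sub-aperture images, 64x64 pixels each

def horizontal_epi(lf, v, y):
    """EPI spanning the horizontal angular axis u and spatial axis x."""
    return lf[v, :, y, :]  # shape: (num_u, width)

def vertical_epi(lf, u, x):
    """EPI spanning the vertical angular axis v and spatial axis y."""
    return lf[:, u, :, x]  # shape: (num_v, height)

def diagonal_epi_45(lf, y):
    """Simplified 45-degree EPI: rows taken along the angular diagonal u == v."""
    return np.stack([lf[a, a, y, :] for a in range(lf.shape[0])])

def diagonal_epi_135(lf, y):
    """Simplified 135-degree EPI: rows along the anti-diagonal u == n-1-v."""
    n = lf.shape[0]
    return np.stack([lf[a, n - 1 - a, y, :] for a in range(n)])

epi_h = horizontal_epi(lf, v=4, y=32)
print(epi_h.shape)  # angular rows x spatial columns
```

In an EPI, a scene point traces a line whose slope is proportional to its disparity, which is why the four directional EPIs carry the depth cues the network branches learn from.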