Light field imaging has recently seen a resurgence of interest due to the availability of practical light field capture systems, which enable a wide range of computer vision applications. However, capturing high-resolution light fields remains technologically challenging, since an increase in angular resolution is typically accompanied by a significant reduction in spatial resolution. This paper describes a learning-based spatial light field super-resolution method that restores the entire light field with consistency across all sub-aperture images. The algorithm first aligns the light field using optical flow and then reduces its angular dimension via low-rank approximation. The linearly independent columns of the resulting low-rank model are treated as an embedding, which is restored using a deep convolutional neural network (DCNN). The super-resolved embedding is then used to reconstruct the remaining sub-aperture images. The original disparities are restored by inverse warping, with missing pixels approximated using a novel light field inpainting algorithm. Experimental results show that the proposed method outperforms existing light field super-resolution algorithms, achieving a PSNR gain of 0.23 dB over the second-best performing method. This performance can be further improved by applying iterative back-projection as a post-processing step.
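The angular-dimension reduction described above can be sketched with a truncated SVD: the aligned sub-aperture images are vectorized into the columns of a matrix, and the leading left singular vectors (scaled by their singular values) play the role of the linearly independent columns forming the embedding. This is a minimal illustration only; the image sizes, the 3x3 angular grid, and the rank `k` are assumptions, not values from the paper.

```python
import numpy as np

def low_rank_embedding(views, k):
    """views: (n_views, H, W) aligned sub-aperture images; k: assumed target rank."""
    n, h, w = views.shape
    M = views.reshape(n, h * w).T            # each column is one vectorized view
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    approx = (U[:, :k] * s[:k]) @ Vt[:k]     # rank-k low-rank model of the light field
    embedding = U[:, :k] * s[:k]             # k basis images acting as the embedding
    return approx, embedding

# Toy example: a random 3x3 angular grid of 32x32 views (illustrative data only)
rng = np.random.default_rng(0)
views = rng.standard_normal((9, 32, 32))
approx, emb = low_rank_embedding(views, k=4)
```

In the pipeline described by the abstract, only the `k` embedding columns would be fed to the DCNN, and the remaining sub-aperture images would be reconstructed from the super-resolved embedding via the retained mixing coefficients.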
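The iterative back-projection post-processing step mentioned at the end can be illustrated per view: the super-resolved estimate is repeatedly corrected by upsampling the residual between the observed low-resolution image and the downsampled estimate. The box-average downsampling and nearest-neighbour upsampling below are illustrative stand-ins, not the kernels used in the paper.

```python
import numpy as np

def back_project(hr, lr, scale, n_iter=5):
    """Single-channel iterative back-projection sketch (assumed kernels)."""
    for _ in range(n_iter):
        h, w = hr.shape
        # box-average downsampling of the current high-resolution estimate
        down = hr.reshape(h // scale, scale, w // scale, scale).mean(axis=(1, 3))
        # push the low-resolution residual back up via nearest-neighbour upsampling
        hr = hr + np.kron(lr - down, np.ones((scale, scale)))
    return hr

# Toy usage: refine an upscaled guess so it stays consistent with the observation
rng = np.random.default_rng(1)
truth = rng.standard_normal((16, 16))
lr = truth.reshape(8, 2, 8, 2).mean(axis=(1, 3))   # observed low-resolution image
hr0 = np.kron(lr, np.ones((2, 2)))                 # initial upscaled estimate
hr = back_project(hr0, lr, scale=2)
```

After the iterations, downsampling the refined estimate reproduces the observed low-resolution image, which is the consistency property that back-projection enforces.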