Example-based super-resolution (SR) has attracted great interest due to its wide range of applications. However, such algorithms usually involve searching for patches in a large database or in the input image itself, which is computationally intensive. In this paper, we propose a super-resolution method based on scale-invariant self-similarity (SiSS). Instead of searching for patches, we select patches according to the SiSS measurement, so that the computational complexity is significantly reduced. Multi-shaped and multi-sized patches are used to collect sufficient patches for high-resolution (HR) image reconstruction, and a hybrid weighting method is used to suppress artifacts. Experimental results show that the proposed algorithm is 20 to 1,800 times faster than several state-of-the-art approaches while achieving comparable quality.
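To make the contrast with patch search concrete, the following is a minimal, hypothetical sketch in Python/NumPy: each patch is scored with a toy scale-invariant self-similarity proxy (correlation between the patch and a coarsened copy of itself) and kept only if the score exceeds a threshold, a constant-time test per patch rather than a nearest-neighbor search over a patch database. The scoring function, patch geometry, and threshold here are illustrative assumptions and do not reproduce the paper's actual SiSS measurement, multi-shaped/multi-sized patch scheme, or hybrid weighting.

```python
import numpy as np


def siss_score(patch):
    """Toy scale-invariant self-similarity proxy (assumption, not the paper's
    measurement): normalized correlation between a patch and a 2x-coarsened
    copy of itself, so smooth/self-similar patches score close to 1."""
    h, w = patch.shape
    cropped = patch[:h - h % 2, :w - w % 2]
    # Block-average 2x2 neighborhoods, then nearest-neighbor upscale back.
    coarse = cropped.reshape(cropped.shape[0] // 2, 2,
                             cropped.shape[1] // 2, 2).mean(axis=(1, 3))
    approx = np.repeat(np.repeat(coarse, 2, axis=0), 2, axis=1)
    a = cropped.ravel() - cropped.mean()
    b = approx.ravel() - approx.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b) + 1e-8
    return float(a @ b / denom)


def select_patches(image, patch_size=8, stride=4, threshold=0.9):
    """Keep patches whose SiSS score exceeds a threshold -- one cheap test per
    patch instead of searching a large patch database or the whole image."""
    selected = []
    H, W = image.shape
    for y in range(0, H - patch_size + 1, stride):
        for x in range(0, W - patch_size + 1, stride):
            if siss_score(image[y:y + patch_size, x:x + patch_size]) >= threshold:
                selected.append((y, x))
    return selected


if __name__ == "__main__":
    # Smooth synthetic test image plus mild noise, so some patches pass the test.
    rng = np.random.default_rng(0)
    yy, xx = np.mgrid[0:64, 0:64]
    img = np.sin(xx / 10.0) + np.cos(yy / 12.0) + 0.05 * rng.standard_normal((64, 64))
    print(f"{len(select_patches(img))} patches selected by the toy SiSS test")
```

In this sketch the cost is linear in the number of candidate patches, with no pairwise distance computations against an external dictionary, which is the source of the speed-up the abstract claims; the actual HR reconstruction and weighting stages are omitted.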