Most existing real-time semantic segmentation models focus on leveraging global context information and large receptive fields. However, these inevitably introduce extra computational cost and limit inference speed. Inspired by the mechanism of the human eye, we propose a novel Limited Receptive Field Network (LRFNet) that strikes a good balance between segmentation speed and accuracy. Specifically, we design two sub-encoders: a fine encoder that encodes sufficient context information, and a coarse encoder that supplements spatial information. To recover accurate high-resolution outputs, we fuse the features from the two sub-encoders and pass them through a lightweight decoder. Extensive comparative evaluations on two standard benchmarks (Cityscapes and CamVid) demonstrate the advantages of our LRFNet over many state-of-the-art methods on the real-time driving-scene semantic segmentation task.
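The two-branch design described above can be illustrated with a minimal sketch. This is not the authors' actual LRFNet architecture; the function names (`fine_encoder`, `coarse_encoder`, `fuse`), the pooling depths, and the fusion-by-addition are all assumptions chosen only to show the pattern: a deeper path that trades resolution for context, a shallow path that preserves spatial detail, and a fusion step that aligns and combines the two before a lightweight decoder.

```python
import numpy as np

def avg_pool2(x):
    """2x2 average pooling with stride 2 on an HxW feature map."""
    h, w = x.shape
    return x[:h // 2 * 2, :w // 2 * 2].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def fine_encoder(x):
    """Deeper path (illustrative): three poolings encode context at 1/8 resolution."""
    for _ in range(3):
        x = avg_pool2(x)
    return x

def coarse_encoder(x):
    """Shallow path (illustrative): one pooling keeps spatial detail at 1/2 resolution."""
    return avg_pool2(x)

def upsample(x, factor):
    """Nearest-neighbour upsampling to realign the low-resolution context path."""
    return x.repeat(factor, axis=0).repeat(factor, axis=1)

def fuse(fine, coarse):
    """Upsample the context features to the detail path's size and add them."""
    return upsample(fine, 4) + coarse

img = np.random.rand(64, 64)                       # stand-in for one input channel
fused = fuse(fine_encoder(img), coarse_encoder(img))
print(fused.shape)                                 # (32, 32)
```

The fused 1/2-resolution map is what a lightweight decoder would then upsample back to full resolution; in a real network each path would of course be stacks of convolutions rather than plain pooling.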