Journal of Visual Communication & Image Representation

AMSFF-Net: Attention-Based Multi-Stream Feature Fusion Network for Single Image Dehazing



Abstract

In this paper, an end-to-end convolutional neural network, named the Attention-Based Multi-Stream Feature Fusion Network (AMSFF-Net), is proposed to recover haze-free images. The network is built on an encoder-decoder structure. The encoder generates features at three resolution levels. Multi-stream features are extracted using residual dense blocks and fused by feature fusion blocks. Through a pixel attention mechanism, AMSFF-Net can pay more attention to informative features at each resolution level. A sharp image can then be recovered through good kernel estimation. Furthermore, AMSFF-Net captures semantic and sharp textural details from the extracted features and restores a high-quality image from coarse to fine using a mixed-convolution attention mechanism at the decoder. Skip connections reduce the loss of image details from the larger receptive fields. Moreover, a deep semantic loss function emphasizes the semantic information carried in deep features. Experimental results show that the proposed method outperforms existing methods on both synthetic and real-world images.
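
The pixel attention mechanism named in the abstract can be illustrated with a short PyTorch sketch: a small convolutional branch predicts a per-pixel weight map in [0, 1] that rescales the feature map, so informative spatial locations are emphasized. This is a minimal sketch of the general idea only; the channel count, reduction ratio, and placement within AMSFF-Net are illustrative assumptions, not the paper's actual configuration.

import torch
import torch.nn as nn

class PixelAttention(nn.Module):
    # Reweights each spatial location of a feature map.
    # Illustrative sketch; layer sizes are assumptions, not taken from the paper.
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.attn = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, 1, kernel_size=1),
            nn.Sigmoid(),  # per-pixel weight in [0, 1]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (N, C, H, W); the (N, 1, H, W) attention map broadcasts over channels
        return x * self.attn(x)

feats = torch.randn(1, 64, 32, 32)    # dummy encoder features
out = PixelAttention(64)(feats)
print(out.shape)                      # torch.Size([1, 64, 32, 32])

In an encoder-decoder network like the one described, such a block would typically be applied to the feature maps at each resolution level before fusion, letting the network weight informative regions (e.g. dense haze) more heavily.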
