International Conference on Document Analysis and Recognition

Selective Super-Resolution for Scene Text Images

Abstract

In this paper, we enhance super-resolution for images containing scene text. Specifically, we propose Super-Resolution Convolutional Neural Networks (SRCNNs) constructed to tackle issues specific to characters and text. We demonstrate that a standard SRCNN trained for general object super-resolution is not sufficient, and that the proposed method is a viable way to build a model that is robust to text. To do so, we analyze the characteristics of SRCNNs through quantitative and qualitative evaluations on scene text data. In addition, we analyze the correlation between layers with Singular Vector Canonical Correlation Analysis (SVCCA) and compare the filters of each SRCNN using t-SNE. Furthermore, to create a unified super-resolution model specialized for both text and objects, we fuse SRCNNs trained on the different data types with Content-wise Network Fusion (CNF). We integrate the SRCNN trained on character images with the SRCNN trained on general object images, and verify the improvement in accuracy on scene images that include text. We also examine how each SRCNN affects the super-resolved images after fusion.
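To illustrate the kind of architecture the abstract describes, the following is a minimal PyTorch sketch, not the authors' code: two SRCNN branches, one assumed to be trained on character/text images and one on general object images, combined by an extra convolutional fusion layer in the spirit of Content-wise Network Fusion (CNF). The 9-5-5 layer sizes follow the original SRCNN; the 1x1 fusion layer and channel counts are illustrative assumptions.

```python
import torch
import torch.nn as nn

class SRCNN(nn.Module):
    """Three-layer SRCNN: patch extraction, non-linear mapping, reconstruction."""
    def __init__(self, channels=1):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, 64, kernel_size=9, padding=4), nn.ReLU(inplace=True),
            nn.Conv2d(64, 32, kernel_size=5, padding=2), nn.ReLU(inplace=True),
            nn.Conv2d(32, channels, kernel_size=5, padding=2),
        )

    def forward(self, x):
        return self.body(x)

class FusedSRCNN(nn.Module):
    """CNF-style fusion of a text-specialized SRCNN and a general-object SRCNN (sketch)."""
    def __init__(self, text_srcnn, object_srcnn, channels=1):
        super().__init__()
        self.text_branch = text_srcnn      # branch assumed trained on character images
        self.object_branch = object_srcnn  # branch assumed trained on general object images
        # Hypothetical fusion layer: merges the two branch outputs into one SR image.
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, x):
        y_text = self.text_branch(x)
        y_obj = self.object_branch(x)
        return self.fuse(torch.cat([y_text, y_obj], dim=1))

# Usage: as in SRCNN, the input is a bicubically upscaled low-resolution image.
lr_up = torch.randn(1, 1, 64, 64)
model = FusedSRCNN(SRCNN(), SRCNN())
sr = model(lr_up)  # same spatial size as the input
```

Under these assumptions, each branch can be pre-trained on its own data type and the fusion layer fine-tuned afterwards, which matches the idea of specializing parts of the model for text versus objects.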
