Pattern Analysis and Applications

A novel approach for scene text extraction from synthesized hazy natural images

Abstract

The most important difficulty in processing natural scene text images is the presence of fog, smoke or haze. These intrusive elements reduce the contrast and degrade the color fidelity of the image for various computer vision applications. This paper addresses this challenging issue. The proposed work presents a novel single-image dehazing method based on the transmission map. The contributions are as follows: (1) text extraction from hazy images is not straightforward owing to the lack of paired hazy and haze-free images. To address this limitation, we introduce a synthetic natural scene text image dataset composed of pairs of synthetic hazy images and their corresponding haze-free counterparts, built from mainstream datasets. Unlike existing dehazing datasets, the text in the hazy images is treated as compulsory content that must be separated from the background using the recovered image. To this end, the scene depth is calculated from the haze density and color attenuation to generate a depth map; a raw transmission map is then computed and refined with bilateral filtering to preserve edges and suppress noise. (2) Text region proposals are estimated on the restored images using a novel low-level connected component technique, and character bounding is employed to complete the process. Finally, experiments are carried out on images selected from standard datasets, including MSRA-TD500, SVT and KAIST. The experimental results demonstrate that the proposed method performs better when compared with benchmark techniques and publicly available dehazing datasets.
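The abstract describes a dehazing pipeline in which scene depth is estimated from haze density and color attenuation, a raw transmission map is derived from that depth, the map is refined with bilateral filtering, and the haze-free image is recovered. The following is a minimal sketch of such a pipeline in Python with OpenCV, not the authors' exact formulation: the depth coefficients (theta0, theta1, theta2), the scattering coefficient beta, the atmospheric-light heuristic and all function names are illustrative assumptions.

```python
# Hedged sketch of a transmission-map-based single-image dehazing pipeline.
# Coefficients and heuristics below are illustrative assumptions.
import cv2
import numpy as np

def estimate_depth(img_bgr, theta0=0.12, theta1=0.96, theta2=-0.78):
    """Rough scene depth from brightness and saturation (color attenuation idea)."""
    hsv = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2HSV).astype(np.float32) / 255.0
    s, v = hsv[..., 1], hsv[..., 2]
    return theta0 + theta1 * v + theta2 * s

def dehaze(img_bgr, beta=1.0, t_min=0.1):
    img = img_bgr.astype(np.float32) / 255.0
    depth = estimate_depth(img_bgr)
    # Raw transmission map from the atmospheric scattering model: t = exp(-beta * d)
    t_raw = np.exp(-beta * depth).astype(np.float32)
    # Edge-preserving refinement of the transmission map with a bilateral filter
    t_ref = cv2.bilateralFilter(t_raw, d=9, sigmaColor=0.1, sigmaSpace=15)
    # Atmospheric light: mean color of the most distant 0.1% of pixels (assumed heuristic)
    n = max(1, int(depth.size * 0.001))
    idx = np.unravel_index(np.argsort(depth, axis=None)[-n:], depth.shape)
    A = img[idx].mean(axis=0)
    # Recover scene radiance: J = (I - A) / max(t, t_min) + A
    t = np.clip(t_ref, t_min, 1.0)[..., None]
    J = (img - A) / t + A
    return np.clip(J * 255, 0, 255).astype(np.uint8)
```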
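The second contribution generates text region proposals on the restored image with a low-level connected component technique followed by character bounding. The sketch below illustrates that idea using OpenCV connected components on the dehazed output; the Otsu binarization and the area/aspect-ratio filters are assumptions for illustration, not the paper's exact heuristics.

```python
# Hedged sketch of connected-component text region proposals on a dehazed image.
import cv2

def text_region_proposals(dehazed_bgr, min_area=30, max_aspect=10.0):
    gray = cv2.cvtColor(dehazed_bgr, cv2.COLOR_BGR2GRAY)
    # Binarize so that connected components correspond to candidate characters
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    n_labels, _, stats, _ = cv2.connectedComponentsWithStats(binary, connectivity=8)
    boxes = []
    for i in range(1, n_labels):  # label 0 is the background
        x, y, w, h, area = stats[i]
        aspect = max(w, h) / max(1, min(w, h))
        # Keep components whose size and aspect ratio are plausible for characters
        if area >= min_area and aspect <= max_aspect:
            boxes.append((x, y, w, h))
    return boxes
```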
