E2E-MLT - An Unconstrained End-to-End Method for Multi-language Scene Text

Abstract

An end-to-end trainable (fully differentiable) method for multi-language scene text localization and recognition is proposed. The approach is based on a single fully convolutional network (FCN) with shared layers for both tasks. E2E-MLT is the first published multi-language OCR for scene text. While trained in multi-language setup, E2E-MLT demonstrates competitive performance when compared to other methods trained for English scene text alone. The experiments show that obtaining accurate multi-language multi-script annotations is a challenging problem. Code and trained models are released publicly at https://github.com/MichalBusta/E2E-MLT.
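The abstract's key architectural point is that one fully convolutional trunk is computed once and shared by both tasks, with separate heads for localization and recognition. The toy sketch below illustrates that structure only; the layer sizes, the 1x1-style heads, and the tiny character alphabet are illustrative assumptions, not the paper's actual network:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(x, w):
    """Naive 'valid' 2-D cross-correlation over a single-channel map."""
    kh, kw = w.shape
    H, W = x.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * w)
    return out

# Hypothetical shared backbone: two stacked 3x3 convolutions with ReLU.
w1 = rng.standard_normal((3, 3)) * 0.1
w2 = rng.standard_normal((3, 3)) * 0.1

def backbone(img):
    f = np.maximum(conv2d(img, w1), 0.0)
    return np.maximum(conv2d(f, w2), 0.0)

def localization_head(feat):
    """Per-position text/non-text probability (sigmoid over a scaled feature)."""
    return 1.0 / (1.0 + np.exp(-0.5 * feat))

NUM_CHARS = 8  # toy multi-language alphabet size (assumption)
w_rec = rng.standard_normal((NUM_CHARS,)) * 0.1

def recognition_head(feat):
    """Per-position character logits: one score per class."""
    return feat[..., None] * w_rec  # shape (H', W', NUM_CHARS)

img = rng.standard_normal((16, 16))
feat = backbone(img)               # shared layers, computed once
scores = localization_head(feat)   # (12, 12) text probability map
logits = recognition_head(feat)    # (12, 12, 8) character logits
```

Because both heads read the same feature map, the trunk's gradients receive signal from both losses during training, which is what makes the joint setup end-to-end trainable.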

