Journal: Image Processing, IET

H-WordNet: a holistic convolutional neural network approach for handwritten word recognition



Abstract

Segmentation of handwritten words into isolated characters and their recognition are challenging due to the high variability and cursiveness of Indian scripts. The complex shapes and the presence of numerous atomic character classes, compound characters, modifiers, ascendants, and descendants make the recognition task even more difficult. A holistic approach effectively tackles such issues by avoiding character-level segmentation; earlier holistic methods, however, have mostly been built on multi-stage machine learning architectures. In this study, a deep convolutional neural network-based holistic method termed 'H-WordNet' is proposed for handwritten word recognition. The H-WordNet model includes merely four convolutional layers and one fully connected layer to effectively classify the word images, which leads to a significant reduction in parameters. The efficacy of different pooling operations with the proposed model is investigated. The main purpose of this study is to avoid the need for handcrafted feature extraction and to obtain a more stable and generalised system for word recognition. The proposed model is evaluated on a standard handwritten Bangla word database (CMATERdb2.1.2), which contains 18000 Bangla word images of 120 different categories, and it obtains a higher recognition accuracy of 96.17% compared with recent state-of-the-art methods.
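To make the described architecture concrete, below is a minimal sketch of a classifier with the layout the abstract reports: four convolutional layers followed by a single fully connected layer producing 120 class scores. The channel widths, kernel sizes, the 64x64 grayscale input, and the use of max pooling are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn


class HWordNetSketch(nn.Module):
    """Illustrative four-conv / one-FC word classifier (120 classes).

    Only the overall layout (four convolutional layers + one fully
    connected layer) follows the abstract; all hyperparameters here
    are assumptions for demonstration.
    """

    def __init__(self, num_classes: int = 120):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),   # 64x64 -> 32x32
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),   # 32x32 -> 16x16
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),   # 16x16 -> 8x8
            nn.Conv2d(128, 128, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),   # 8x8 -> 4x4
        )
        # Single fully connected layer mapping flattened features to classes.
        self.classifier = nn.Linear(128 * 4 * 4, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)        # (N, 128, 4, 4)
        x = torch.flatten(x, 1)     # (N, 2048)
        return self.classifier(x)   # (N, 120) class logits


if __name__ == "__main__":
    model = HWordNetSketch()
    dummy = torch.randn(8, 1, 64, 64)   # batch of 8 grayscale word images
    print(model(dummy).shape)           # torch.Size([8, 120])
```

A usage note on the layout: keeping the network to four convolutional blocks and one fully connected output layer is what keeps the parameter count low; swapping the max-pooling layers for average pooling is one way to reproduce the kind of pooling comparison the abstract mentions.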
