International Conference on Artificial Neural Networks

Unsharp Masking Layer: Injecting Prior Knowledge in Convolutional Networks for Image Classification



Abstract

Image enhancement refers to the enrichment of certain image features such as edges, boundaries, or contrast. The main objective is to process the original image so that the overall performance of visualization, classification, and segmentation tasks is considerably improved. Traditional techniques require manual fine-tuning of parameters to control the enhancement behavior. Recent Convolutional Neural Network (CNN) approaches frequently employ these techniques as a pre-processing step. In this work, we present the first intrinsic CNN pre-processing layer, based on the well-known unsharp masking algorithm. The proposed layer injects prior knowledge about how to enhance the image by adding high-frequency information to the input, thereby emphasizing meaningful image features. The layer optimizes the unsharp masking parameters during model training, without any manual intervention. We evaluate the network performance and impact on two applications: CIFAR100 image classification and the PlantCLEF identification challenge. The results show a significant improvement over popular CNNs, yielding gains of 9.49% and 2.42% on PlantCLEF and the general-purpose CIFAR100, respectively. The unsharp enhancement layer plainly boosts accuracy at negligible performance cost on simple CNN models, as prior knowledge is directly injected to improve robustness.
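The classic unsharp masking operation underlying the proposed layer can be sketched as follows. This is a minimal NumPy illustration under stated assumptions, not the paper's trainable implementation: here the kernel size, sigma, and amount are fixed by hand, whereas the proposed layer learns the masking parameters during training.

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    """Normalized 2-D Gaussian kernel used for the blur step."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return k / k.sum()

def unsharp_mask(image, amount=1.0, size=5, sigma=1.0):
    """Classic unsharp masking on a 2-D grayscale image.

    The high-frequency residual (image minus its blurred version)
    is scaled by `amount` and added back to the input.
    """
    kernel = gaussian_kernel(size, sigma)
    pad = size // 2
    padded = np.pad(image, pad, mode="reflect")
    blurred = np.zeros_like(image, dtype=float)
    h, w = image.shape
    # Direct 2-D convolution with "same" output size.
    for i in range(h):
        for j in range(w):
            blurred[i, j] = np.sum(padded[i:i + size, j:j + size] * kernel)
    return image + amount * (image - blurred)
```

A constant image passes through unchanged (its high-frequency residual is zero), while edges and fine detail are amplified in proportion to `amount`.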
