Journal of Imaging Science and Technology

Limitations of CNNs for Approximating the Ideal Observer Despite Quantity of Training Data or Depth of Network


Abstract

The performance of a convolutional neural network (CNN) on an image texture detection task is investigated as a function of linear image processing and the number of training images. Performance is quantified by the area under the receiver operating characteristic (ROC) curve (AUC). The Ideal Observer (IO) maximizes AUC but depends on high-dimensional image likelihoods. In many cases, the CNN performance can approximate the IO performance. This work demonstrates counterexamples where a full-rank linear transform degrades the CNN performance to below that of the IO, even in the limit of large quantities of training data and network layers. A subsequent linear transform changes the images' correlation structure, improves the AUC, and again demonstrates the CNN's dependence on linear processing. Compression strictly decreases or maintains the IO detection performance, while compression can increase the CNN performance, especially for small quantities of training data. Results indicate an optimal compression ratio for the CNN based on task difficulty, compression method, and number of training images.
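To make the quantities in the abstract concrete, the sketch below sets up a toy binary detection task with two known multivariate-Gaussian image classes (an illustrative assumption, not the paper's actual texture model), scores each image with the Ideal Observer's log-likelihood-ratio statistic, and estimates the AUC with the Mann-Whitney statistic. It then applies a random full-rank linear transform to the data to show that such a transform cannot change the IO's AUC, which is the baseline against which the abstract measures CNN degradation. The names (io_scores, auc) and all parameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy detection task (illustrative assumption): two known multivariate-Gaussian
# "texture" classes sharing a covariance. Under this assumption the Ideal
# Observer (IO) reduces to the log-likelihood ratio, which is linear in the image.
dim = 64                                              # toy image dimensionality
mean0 = np.zeros(dim)                                 # class 0: signal absent
mean1 = np.full(dim, 0.15)                            # class 1: signal present
cov = 0.5 * np.eye(dim) + 0.5 * np.ones((dim, dim))   # correlated background
cov_inv = np.linalg.inv(cov)

def io_scores(X, m0, m1, c_inv):
    """IO test statistic (log-likelihood ratio up to a constant) per row of X."""
    return X @ c_inv @ (m1 - m0)

def auc(s0, s1):
    """Empirical AUC via the Mann-Whitney U statistic."""
    diff = s1[:, None] - s0[None, :]
    return np.mean(diff > 0) + 0.5 * np.mean(diff == 0)

x0 = rng.multivariate_normal(mean0, cov, size=2000)
x1 = rng.multivariate_normal(mean1, cov, size=2000)
print("IO AUC:", auc(io_scores(x0, mean0, mean1, cov_inv),
                     io_scores(x1, mean0, mean1, cov_inv)))

# A full-rank (invertible) linear transform A cannot change the IO AUC, because
# the transformed likelihood ratio is the same function of the underlying image.
# A CNN trained directly on the transformed images has no such guarantee.
A = rng.normal(size=(dim, dim))                       # almost surely full rank
cov_t_inv = np.linalg.inv(A @ cov @ A.T)
print("IO AUC after transform:",
      auc(io_scores(x0 @ A.T, A @ mean0, A @ mean1, cov_t_inv),
          io_scores(x1 @ A.T, A @ mean0, A @ mean1, cov_t_inv)))
```

The two printed AUC values agree up to floating-point and sampling error, because an invertible transform can be absorbed into the likelihoods; the abstract's point is that a CNN trained on the transformed images does not automatically inherit this invariance.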