IEEE Data Science Workshop

Introducing Graph Smoothness Loss for Training Deep Learning Architectures

Abstract

We introduce a novel loss function for training deep learning architectures to perform classification. It consists of minimizing the smoothness of label signals on similarity graphs built at the output of the architecture. Equivalently, it can be seen as maximizing the distances between the network's output representations of training inputs from distinct classes. As such, only distances between pairs of examples in distinct classes are taken into account in the process, and the training does not prevent inputs from the same class from being mapped to distant locations in the output domain. We show that this loss leads to classification performance similar to that of architectures trained with the classical cross-entropy loss, while offering interesting degrees of freedom and properties. We also demonstrate the value of the proposed loss in increasing the robustness of trained architectures to deviations of the inputs.
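To make the loss concrete, here is a minimal PyTorch sketch, not the authors' implementation. The Gaussian similarity kernel, the `sigma` bandwidth, and the normalization are assumptions made for illustration; the choice of similarity graph is one of the degrees of freedom the abstract mentions.

```python
import torch

def graph_smoothness_loss(outputs, labels, sigma=1.0):
    """Sketch of a graph smoothness loss over one batch.

    Builds a similarity graph on the batch outputs with a Gaussian kernel
    (assumed choice), then accumulates the edge weights connecting examples
    with different labels. This sum is the smoothness of the binary label
    signals on the graph; minimizing it pushes cross-class outputs apart
    while leaving same-class pairs unconstrained.
    """
    # Pairwise squared Euclidean distances between batch outputs.
    dists = torch.cdist(outputs, outputs, p=2) ** 2
    # Gaussian similarity weights (assumed kernel and bandwidth).
    weights = torch.exp(-dists / (2 * sigma ** 2))
    # Mask selecting pairs whose labels differ.
    diff_class = (labels.unsqueeze(0) != labels.unsqueeze(1)).float()
    # Smoothness of the label signals: similarity summed over
    # cross-class edges only, averaged over the number of such pairs.
    return (weights * diff_class).sum() / diff_class.sum().clamp(min=1)
```

Since this loss does not produce per-class scores, a separate decision rule is needed at test time; assigning each input to the nearest class centroid in the output space is one plausible option.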
