IEEE Data Science Workshop

Introducing Graph Smoothness Loss for Training Deep Learning Architectures



Abstract

We introduce a novel loss function for training deep learning architectures to perform classification. It consists of minimizing the smoothness of label signals on similarity graphs built at the output of the architecture. Equivalently, it can be seen as maximizing the distances between the network-function images of training inputs from distinct classes. As such, only distances between pairs of examples from distinct classes are taken into account, and the training does not prevent inputs from the same class from being mapped to distant locations in the output domain. We show that this loss leads to classification performance similar to that of architectures trained with the classical cross-entropy, while offering interesting degrees of freedom and properties. We also demonstrate that the proposed loss can increase the robustness of trained architectures to deviations of the inputs.
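For illustration only, the sketch below shows one way such a graph smoothness loss could be written in PyTorch; the Gaussian kernel, the bandwidth sigma, the per-pair normalization, and the function name are assumptions for this sketch and are not taken from the paper. For a batch of network outputs, a similarity graph is built from pairwise distances, and the loss sums the edge weights between pairs of examples with distinct labels, which equals the smoothness of the one-hot label signal on that graph up to a constant factor.

import torch

def graph_smoothness_loss(outputs, labels, sigma=1.0):
    # outputs: (batch, d) network outputs f(x_i); labels: (batch,) integer classes.
    # Pairwise squared Euclidean distances between the outputs.
    sq_dists = torch.cdist(outputs, outputs, p=2) ** 2
    # Similarity graph: a Gaussian kernel on those distances (one possible choice).
    weights = torch.exp(-sq_dists / sigma)
    # Indicator of pairs from distinct classes; for a one-hot label signal s,
    # ||s_i - s_j||^2 is 2 for such pairs and 0 otherwise.
    distinct = (labels.unsqueeze(0) != labels.unsqueeze(1)).float()
    # Smoothness s^T L s = (1/2) * sum_ij W_ij ||s_i - s_j||^2: only cross-class
    # pairs contribute, so minimizing it pushes distinct classes apart.
    # Normalized here by the number of cross-class pairs for scale stability.
    return (weights * distinct).sum() / distinct.sum().clamp(min=1.0)

In a training loop this term would simply take the place of the cross-entropy term; note that the batch must contain examples from at least two classes for the loss to be non-zero.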

