Workshop on Semantic Deep Learning

Class disjointness constraints as specific objective functions in neural network classifiers


Abstract

The increasing performance of deep learning techniques on computer vision tasks such as object detection has led to systems able to detect a large number of classes of objects. Most deep learning models use simple unstructured labels and assume that any domain knowledge will be learned from the data. However, when the domain is complex and the data limited, it may be useful to use domain knowledge encoded in an ontology to guide the learning process. In this paper, we conduct experiments that introduce constraints into the training process of a neural network. We show that explicitly modeling a disjointness axiom between a set of classes as a specific objective function reduces violations of this constraint, while also reducing the overall classification error. This opens a way to import domain knowledge modeled in an ontology into a deep learning process.
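The abstract does not spell out the exact form of the disjointness objective, but the general idea can be sketched as an extra penalty term added to a standard classification loss: for every pair of classes declared disjoint in the ontology, the model is penalized when it assigns high probability to both at once. The NumPy sketch below is a hypothetical illustration (the function names, the product-of-probabilities penalty, and the weight `lam` are assumptions, not the paper's actual formulation):

```python
import numpy as np

def sigmoid(z):
    """Elementwise logistic function, mapping logits to probabilities."""
    return 1.0 / (1.0 + np.exp(-z))

def disjointness_penalty(probs, disjoint_classes):
    """Penalty that grows when classes declared disjoint in the ontology
    are predicted together: sum of p_i * p_j over all disjoint pairs,
    averaged over the batch. probs has shape (batch, num_classes)."""
    penalty = 0.0
    for idx, i in enumerate(disjoint_classes):
        for j in disjoint_classes[idx + 1:]:
            penalty = penalty + probs[:, i] * probs[:, j]
    return float(np.mean(penalty))

def total_loss(logits, targets, disjoint_classes, lam=1.0):
    """Binary cross-entropy plus a weighted disjointness term.
    lam controls how strongly the ontology constraint is enforced."""
    probs = sigmoid(logits)
    eps = 1e-12  # avoid log(0)
    bce = -np.mean(targets * np.log(probs + eps)
                   + (1 - targets) * np.log(1 - probs + eps))
    return bce + lam * disjointness_penalty(probs, disjoint_classes)
```

Under this formulation, gradient descent on `total_loss` pushes the network away from predictions that activate two disjoint classes simultaneously, which matches the paper's reported effect of fewer constraint violations during training.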

