IEEE Transactions on Neural Networks and Learning Systems

Hypergraph-Induced Convolutional Networks for Visual Classification



Abstract

At present, convolutional neural networks (CNNs) have become popular in visual classification tasks because of their superior performance. However, CNN-based methods do not consider the correlation among the visual data to be classified. Recently, graph convolutional networks (GCNs) have mitigated this problem by modeling the pairwise relationships in visual data. Real-world visual classification tasks, however, typically involve numerous complex relationships in the data that cannot be adequately captured by the pairwise graph structure used in GCNs. It is therefore vital to explore the underlying correlation of visual data. To address this issue, we propose a framework called the hypergraph-induced convolutional network, which explores the high-order correlation in visual data within deep neural networks. First, a hypergraph structure is constructed to formulate the relationships in the visual data. Then, the high-order correlation is optimized by a learning process based on the constructed hypergraph, and classification is performed by taking this high-order correlation into account. Thus, the convolution in the hypergraph-induced convolutional network is based on the corresponding high-order relationships, and the optimization of the network uses each data point while considering the high-order correlation of the data. To evaluate the proposed framework, we conducted experiments on three visual data sets: the National Taiwan University 3-D model data set, the Princeton Shape Benchmark, and a multiview RGB-depth object data set. The experimental results and comparisons on all data sets demonstrate the effectiveness of our proposed hypergraph-induced convolutional network relative to state-of-the-art methods.
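To illustrate the kind of operation the abstract describes, the following is a minimal sketch of a single hypergraph convolution step in the common spectral style, where an incidence matrix `H` relates vertices (data points) to hyperedges (high-order groups). This is an assumed, generic formulation for illustration only; the paper's exact convolution operator and learning process may differ in detail.

```python
import numpy as np

def hypergraph_conv(X, H, Theta, edge_weights=None):
    """One hypergraph convolution step (generic spectral-style sketch).

    X      : (n_vertices, d_in)  vertex feature matrix
    H      : (n_vertices, n_edges) incidence matrix, H[v, e] = 1 if
             vertex v belongs to hyperedge e
    Theta  : (d_in, d_out) learnable linear transform
    Propagates features vertex -> hyperedge -> vertex with degree
    normalization, then applies the linear map and a ReLU.
    """
    n_v, n_e = H.shape
    w = np.ones(n_e) if edge_weights is None else np.asarray(edge_weights)
    W = np.diag(w)                              # hyperedge weights
    dv = H @ w                                  # vertex degrees
    de = H.sum(axis=0)                          # hyperedge degrees
    Dv_inv_sqrt = np.diag(1.0 / np.sqrt(dv))
    De_inv = np.diag(1.0 / de)
    # Normalized propagation operator over the hypergraph structure.
    A = Dv_inv_sqrt @ H @ W @ De_inv @ H.T @ Dv_inv_sqrt
    return np.maximum(A @ X @ Theta, 0.0)       # ReLU activation
```

Stacking such layers lets every vertex aggregate information from all members of the hyperedges it belongs to, which is how high-order (rather than merely pairwise) correlation enters the classification pipeline.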
