
Fat-Fast VG-RAM WNN: A high performance approach



Abstract

The Virtual Generalizing Random Access Memory Weightless Neural Network (VG-RAM WNN) is a type of WNN that only requires storage capacity proportional to the training set. As such, it is an effective machine learning technique that offers simple implementation and fast training, which can be completed in one shot. However, the VG-RAM WNN test time for applications that require many training samples can be large, since it increases with the size of the memory of each neuron. In this paper, we present Fat-Fast VG-RAM WNNs. Fat-Fast VG-RAM WNNs employ multi-index chained hashing for fast neuron memory search. Our chained hashing technique increases the VG-RAM memory consumption (fat) but reduces test time substantially (fast), while keeping most of its machine learning performance. To address the memory consumption problem, we employ a data clustering technique to reduce the overall size of the neurons' memory, replacing clusters of neurons' memory with their respective centroid values. With our approach, we were able to reduce VG-RAM WNN test time and memory footprint while maintaining acceptable machine learning performance. We performed experiments with the Fat-Fast VG-RAM WNN applied to two recognition problems: (i) handwritten digit recognition and (ii) traffic sign recognition. Our experimental results showed that, in both recognition problems, our new VG-RAM WNN approach was able to run three orders of magnitude faster and consume two orders of magnitude less memory than standard VG-RAM, while experiencing only a small reduction in recognition performance. (C) 2015 Elsevier B.V. All rights reserved.
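The multi-index hashing idea in the abstract — index each stored pattern under several sub-pattern keys so that a query only compares against candidates sharing at least one key, instead of scanning the whole neuron memory — can be sketched as follows. This is an illustrative Python sketch, not the authors' implementation: the `FatFastNeuron` class, the chunking scheme, and the full-scan fallback are all assumptions.

```python
from collections import defaultdict

def hamming(a, b):
    """Hamming distance between two integer-encoded binary patterns."""
    return bin(a ^ b).count("1")

class FatFastNeuron:
    """Sketch of one VG-RAM neuron with multi-index hashing (hypothetical API)."""

    def __init__(self, bits=32, chunks=4):
        self.bits = bits
        self.chunks = chunks
        self.chunk_bits = bits // chunks
        self.memory = []  # list of (pattern, label) pairs, as in plain VG-RAM
        # One hash table per chunk; chunk value -> indices into self.memory.
        # These extra tables are the "fat" traded for "fast" lookups.
        self.tables = [defaultdict(list) for _ in range(chunks)]

    def _chunk(self, pattern, i):
        """Extract the i-th fixed-width bit slice of a pattern."""
        mask = (1 << self.chunk_bits) - 1
        return (pattern >> (i * self.chunk_bits)) & mask

    def train(self, pattern, label):
        """One-shot training: store the pair and index every chunk."""
        idx = len(self.memory)
        self.memory.append((pattern, label))
        for i in range(self.chunks):
            self.tables[i][self._chunk(pattern, i)].append(idx)

    def test(self, pattern):
        """Answer with the label of the nearest stored pattern among candidates
        that share at least one chunk with the query."""
        candidates = set()
        for i in range(self.chunks):
            candidates.update(self.tables[i].get(self._chunk(pattern, i), ()))
        if not candidates:  # no shared chunk: fall back to a full scan
            candidates = range(len(self.memory))
        best = min(candidates, key=lambda j: hamming(pattern, self.memory[j][0]))
        return self.memory[best][1]
```

A query that differs from a stored pattern in a few bits still matches that pattern exactly in at least one chunk whenever the flipped bits are confined to the other chunks, which is what lets the candidate set stay small without losing the nearest neighbor in the common case.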
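The memory-compression step — replacing clusters of stored patterns with their centroids — can likewise be sketched. For binary patterns a centroid reduces to a per-bit majority vote; grouping by label below stands in for the paper's clustering step and is an assumption, as are the function names and the tie-breaking rule.

```python
from collections import defaultdict

def bit_majority_centroid(patterns, bits):
    """Per-bit majority vote over a cluster of integer-encoded binary patterns."""
    centroid = 0
    for b in range(bits):
        ones = sum((p >> b) & 1 for p in patterns)
        if 2 * ones >= len(patterns):  # ties round up (assumption)
            centroid |= 1 << b
    return centroid

def compress_memory(memory, bits):
    """Replace each cluster of stored patterns with a single centroid entry.

    memory: list of (pattern, label) pairs, as in a VG-RAM neuron.
    Here one cluster per label is used for illustration; the paper's
    clustering may produce several centroids per label.
    """
    clusters = defaultdict(list)
    for pattern, label in memory:
        clusters[label].append(pattern)
    return [(bit_majority_centroid(ps, bits), label)
            for label, ps in clusters.items()]
```

After compression the neuron memory holds one representative per cluster instead of every training sample, which is the source of the reported two-orders-of-magnitude memory reduction, at the cost of some recognition accuracy.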

Bibliographic details

  • Source
    Neurocomputing | 2016, Issue 26 | pp. 56-69 | 14 pages
  • Author affiliations

    Univ Fed Espirito Santo, Dept Informat, Av Fernando Ferrari 541, BR-29075910 Vitoria, ES, Brazil|City Univ London, Dept Comp Sci, Northampton Sq, London EC1V 0HB, England;

    Univ Fed Espirito Santo, Dept Informat, Av Fernando Ferrari 541, BR-29075910 Vitoria, ES, Brazil;

    Univ Fed Espirito Santo, Dept Informat, Av Fernando Ferrari 541, BR-29075910 Vitoria, ES, Brazil;

    Univ Fed Espirito Santo, Dept Comp & Eletron, Rodovia BR 101 Norte Km 60, BR-29932540 Sao Mateus, ES, Brazil;

    Univ Fed Espirito Santo, Dept Informat, Av Fernando Ferrari 541, BR-29075910 Vitoria, ES, Brazil;

    City Univ London, Dept Comp Sci, Northampton Sq, London EC1V 0HB, England;

    Univ Fed Espirito Santo, Dept Informat, Av Fernando Ferrari 541, BR-29075910 Vitoria, ES, Brazil;

  • Indexed in: Science Citation Index (SCI); Engineering Index (EI)
  • Format: PDF
  • Language: English
  • Keywords

    WNN; VG-RAM; Multi-index chained hashing; Data clustering; Traffic sign recognition; Handwritten digit recognition;

