International Work-Conference on Artificial Neural Networks

Exploring Classification, Clustering, and Its Limits in a Compressed Hidden Space of a Single Layer Neural Network with Random Weights


Abstract

Classification in the hidden layer of a single layer neural network with random weights has shown high accuracy in recent experimental studies. We further explore its classification and clustering performance in a compressed hidden space on a large cohort of datasets from the UCI machine learning archive. We compress the hidden layer with a simple bit-encoding that yields error comparable to the original hidden layer, thus reducing memory requirements and allowing us to study up to a million random nodes. We find the classification error of the linear support vector machine in the uncompressed hidden space to be statistically indistinguishable from that in the network's compressed layer. The test error of the linear support vector machine in the compressed hidden layer improves only marginally beyond 10,000 nodes and even rises when we reach one million nodes. We show that k-means clustering attains an improved adjusted Rand index and purity in the compressed hidden space compared to the original input space, though only the latter improves by a statistically significant margin. We also see that semi-supervised k-nearest neighbor improves by a statistically significant margin when only 10% of labels are available. Finally, we show that several different classifiers have statistically significantly lower error in the compressed hidden layer than in the original space, with the linear support vector machine reaching the lowest error. Overall, our experiments show that while classification in our compressed hidden layer can achieve low error competitive with the original space, there is a saturation point beyond which the error does not improve, and that clustering and semi-supervised classification are better in the compressed hidden layer by a small yet statistically significant margin.
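
As a concrete illustration, the following is a minimal sketch of the pipeline the abstract describes. The abstract does not specify the activation function or the exact bit-encoding scheme, so this sketch assumes a ReLU random layer and a sign-threshold ("did the node fire") encoding; the dataset, node count, and all function names are illustrative stand-ins, not the authors' implementation.

import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC

# Fix the random generator so the hidden layer is reproducible.
rng = np.random.default_rng(0)

# A small UCI dataset as a stand-in for the paper's UCI cohort.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Single hidden layer with fixed random weights: drawn once, never trained.
n_nodes = 1024
W = rng.standard_normal((X.shape[1], n_nodes))
b = rng.standard_normal(n_nodes)

def hidden(X):
    # ReLU activations of the random-weight layer (activation assumed).
    return np.maximum(X @ W + b, 0.0)

def compress(H):
    # One bit per node, set when the node fired. np.packbits stores
    # 8 nodes per byte, so a million nodes cost about 125 KB per example
    # instead of roughly 8 MB of float64 activations.
    return np.packbits(H > 0.0, axis=1)

def decompress(B):
    # Unpack back to {0, 1} features the linear SVM can consume.
    return np.unpackbits(B, axis=1, count=n_nodes).astype(np.float64)

Z_train = decompress(compress(hidden(X_train)))
Z_test = decompress(compress(hidden(X_test)))

clf = LinearSVC(dual=False).fit(Z_train, y_train)
print("Linear SVM accuracy in compressed hidden space:",
      clf.score(Z_test, y_test))

The round trip through compress/decompress means the SVM sees exactly what a memory-constrained run would store, which is how error can stay close to the uncompressed hidden layer while the kept representation shrinks by roughly a factor of 64.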