International Work-Conference on Artificial Neural Networks

Exploring Classification, Clustering, and Its Limits in a Compressed Hidden Space of a Single Layer Neural Network with Random Weights



Abstract

Classification in the hidden layer of a single-layer neural network with random weights has shown high accuracy in recent experimental studies. We further explore its classification and clustering performance in a compressed hidden space on a large cohort of datasets from the UCI machine learning archive. We compress the hidden layer with a simple bit-encoding that yields an error comparable to the original hidden layer, thus reducing memory requirements and allowing us to study up to a million random nodes. We find the classification error of the linear support vector machine in the compressed hidden layer to be statistically indistinguishable from its error in the uncompressed hidden space. We see that the test error of the linear support vector machine in the compressed hidden layer improves only marginally beyond 10,000 nodes and even rises when we reach one million nodes. We show that k-means clustering attains a higher adjusted Rand index and purity in the compressed hidden space than in the original input space, although only the purity improvement is statistically significant. We also see that semi-supervised k-nearest-neighbor classification improves by a statistically significant margin when only 10% of labels are available. Finally, we show that several different classifiers have statistically significantly lower error in the compressed hidden layer than in the original space, with the linear support vector machine reaching the lowest error. Overall, our experiments show that while classification in our compressed hidden layer can achieve a low error competitive with the original space, there is a saturation point beyond which the error does not improve, and that clustering and semi-supervised classification are better in the compressed hidden layer by a small yet statistically significant margin.
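A minimal sketch of the pipeline the abstract describes, assuming a sign-based one-bit encoding of the hidden activations (the paper says only "simple bit-encoding", so this is one plausible reading). The dataset, node count, and helper names (compressed_hidden, unpack, purity) are illustrative, not taken from the paper.

    import numpy as np
    from sklearn.datasets import load_digits
    from sklearn.model_selection import train_test_split
    from sklearn.svm import LinearSVC
    from sklearn.cluster import KMeans
    from sklearn.metrics import adjusted_rand_score

    rng = np.random.default_rng(0)

    # Illustrative dataset and size; the paper uses a large cohort of UCI
    # datasets and explores up to one million random nodes.
    X, y = load_digits(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    n_hidden = 1024
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)

    def compressed_hidden(X):
        # Random projection followed by a 1-bit (sign) encoding per node.
        # np.packbits stores one bit per node, about 32x smaller than
        # float32 activations, which is what makes a million nodes tractable.
        H = (X @ W + b) > 0
        return np.packbits(H, axis=1)

    def unpack(H_bits, n):
        # Recover {0, 1} features from the packed bits for the learners.
        return np.unpackbits(H_bits, axis=1, count=n).astype(np.float32)

    H_train = unpack(compressed_hidden(X_train), n_hidden)
    H_test = unpack(compressed_hidden(X_test), n_hidden)

    # Linear support vector machine in the compressed hidden space.
    clf = LinearSVC().fit(H_train, y_train)
    print("SVM test error:", 1.0 - clf.score(H_test, y_test))

    # k-means in the same space, scored by adjusted Rand index and purity.
    km = KMeans(n_clusters=10, n_init=10, random_state=0).fit(H_train)

    def purity(labels_true, labels_pred):
        # Fraction of points whose cluster's majority class matches their label.
        return sum(np.bincount(labels_true[labels_pred == c]).max()
                   for c in np.unique(labels_pred)) / len(labels_true)

    print("ARI:   ", adjusted_rand_score(y_train, km.labels_))
    print("purity:", purity(y_train, km.labels_))

Sweeping n_hidden from a few thousand up toward a million with this packed representation is how one would probe the saturation point the abstract reports; only the packed bits need to be held in memory between the projection and the learners.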
