The Implicant Network is a neural network model capable of storing an arbitrary Boolean function F : {0, 1}^n → {0, 1}. It differs from previous one-shot learning models in that the training algorithm compresses the positive set online, with linear time and space requirements. The algorithm works by building a Sum-of-Products (SOP) representation of the positive set as it is presented to the network. Since finding a minimum cover of implicants is an NP-hard problem, the compression rate is not optimal at first, but it is shown to increase rapidly as the positive set is presented repeatedly.
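The online SOP construction described above can be sketched roughly as follows. This is a hypothetical illustration, not the paper's actual algorithm: positive examples are stored as ternary implicants (tuples over {0, 1, None}, with None as a don't-care), and adjacent implicants are greedily combined using the standard Quine–McCluskey adjacency rule. The class and function names (`ImplicantNet`, `covers`, `combine`) are invented for this sketch.

```python
def covers(imp, x):
    """True if implicant imp covers input vector x."""
    return all(a is None or a == b for a, b in zip(imp, x))

def combine(a, b):
    """Quine-McCluskey adjacency: two implicants combine iff they have
    don't-cares in the same positions and differ in exactly one fixed bit.
    Returns the merged implicant, or None if they cannot combine."""
    diff = None
    for i, (x, y) in enumerate(zip(a, b)):
        if (x is None) != (y is None):
            return None                 # don't-care patterns disagree
        if x is not None and x != y:
            if diff is not None:
                return None             # more than one differing bit
            diff = i
    if diff is None:
        return None                     # identical implicants
    merged = list(a)
    merged[diff] = None
    return tuple(merged)

class ImplicantNet:
    """Stores a Boolean function's positive set as a list of implicants,
    compressing it online as examples arrive."""

    def __init__(self):
        self.implicants = []

    def present(self, example):
        """Present one positive example; merge it greedily into the cover."""
        cand = tuple(example)
        if any(covers(p, cand) for p in self.implicants):
            return                      # already covered, store nothing
        merged = True
        while merged:                   # cascade merges as far as possible
            merged = False
            for i, p in enumerate(self.implicants):
                m = combine(p, cand)
                if m is not None:
                    del self.implicants[i]
                    cand = m
                    merged = True
                    break
        self.implicants.append(cand)

    def evaluate(self, x):
        """F(x) = 1 iff some stored implicant covers x."""
        return any(covers(p, tuple(x)) for p in self.implicants)
```

For example, presenting the four positives 000, 001, 010, 011 in order cascades down to the single implicant 0-- (stored as `(0, None, None)`). Because the merging is greedy, an unlucky presentation order can leave a suboptimal cover, which is consistent with the abstract's observation that compression improves as the positive set is presented again.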