On the fusion and transference of knowledge. I

Abstract

Contemporary neural architectures with one or more hidden layers suffer from the same deficiencies as genetic algorithms and methodologies for non-trivial automatic programming; namely, they cannot exploit inherent domain symmetries to transfer knowledge from an application of lesser rank to one of greater rank, or across similar applications. As a direct consequence, no ensemble of contemporary neural architectures allows for the effective codification and transference of knowledge within a society of individuals (i.e., swarm knowledge). These deficiencies stem from the fact that contemporary neural architectures cannot reason symbolically using heuristic ontologies. They cannot directly provide symbolic explanations of what was learned for purposes of inspection and verification. Moreover, they do not allow the knowledge engineer to precondition the internal feature space through the application of domain-specific modeling languages. A symbolic representation can support the heuristic evolution of an ensemble of neural architectures. Each neural network in the ensemble embeds a hidden layer, and training such a network is for this reason NP-hard. It may be argued that the internal use of a neat representation subsumes the heuristic evolution of a scruffy one. It follows that there is a duality of representation under transformation. The goal of AI, then, is to find symbolic representations, transformations, and associated heuristic ontologies. This paper provides an introduction to this quest. Consider the game of chess, for example. If a neural network or symbolic heuristic is used to evaluate board positions, then the best iterate found so far (i.e., of weights or symbols) serves as a starting point for iterative refinement. This paper addresses the ordering and similarity of the training instances used in refining subsequent iterates.
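The warm-starting idea in the abstract — the best iterate found so far serves as the starting point for iterative refinement — can be sketched minimally as follows. This is an illustrative sketch only, not the paper's method: the linear evaluator, the feature vectors, and the two training stages are all hypothetical placeholders standing in for a real board-evaluation function.

```python
# Minimal sketch of warm-started iterative refinement of a board evaluator.
# Features, targets, and the linear model are hypothetical placeholders.

def evaluate(weights, features):
    """Linear evaluation: dot product of weights and board features."""
    return sum(w * f for w, f in zip(weights, features))

def refine(weights, examples, lr=0.1, epochs=100):
    """One refinement pass: stochastic gradient steps on squared error."""
    w = list(weights)
    for _ in range(epochs):
        for features, target in examples:
            err = evaluate(w, features) - target
            for i, f in enumerate(features):
                w[i] -= lr * err * f
    return w

# Stage 1: train from scratch on a reduced task (e.g., bishop vs. one piece).
stage1 = [([1.0, 0.0], 0.5), ([0.0, 1.0], -0.5)]
w1 = refine([0.0, 0.0], stage1)

# Stage 2: the best iterate w1 is the starting point for a composed task
# (e.g., bishop and rook vs. one piece), rather than restarting from zero.
stage2 = [([1.0, 1.0], 0.2), ([1.0, 0.0], 0.5)]
w2 = refine(w1, stage2)
```

The key step is the second call to `refine`: its initial weights are the previous stage's best iterate, so whatever the reduced task taught the evaluator is carried forward rather than relearned.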
If we fix the learning technology, then we need to focus on reducing the problem, composing intermediate results, and transferring those results to a similar domain. For example, moving just a bishop against one opposing piece is a reduction; moving a bishop and, say, a rook against one opposing piece is a composition; and moving a queen against one or more opposing pieces is a transference. The training sets must be mutually orthogonal, or random, to maximize the learned content. Learning what to present, and when, involves self-reference, which necessarily implies a heuristic approach.
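The three-stage curriculum described above, together with the orthogonality condition on training sets, can be sketched as data. This is a hedged illustration under the assumption that training sets are summarized as feature vectors; the stage names come from the text, but the vectors and the pairwise-orthogonality check are illustrative, not the paper's formulation.

```python
# Sketch of the reduction -> composition -> transference curriculum,
# plus a check that training-set feature vectors are mutually orthogonal.
# Vectors here are illustrative placeholders.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def mutually_orthogonal(vectors, tol=1e-9):
    """True if every pair of feature vectors has (near-)zero dot product."""
    return all(abs(dot(vectors[i], vectors[j])) <= tol
               for i in range(len(vectors))
               for j in range(i + 1, len(vectors)))

# Each stage builds on the result of the previous one.
curriculum = [
    ("reduction",    "bishop vs. one opposing piece"),        # shrink the problem
    ("composition",  "bishop + rook vs. one opposing piece"), # combine solved parts
    ("transference", "queen vs. one or more opposing pieces") # map to similar domain
]

# Orthogonal training sets maximize the learned content per example.
features = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
assert mutually_orthogonal(features)
```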
