Cognitive Systems Research

Applying Deutsch's concept of good explanations to artificial intelligence and neuroscience - An initial exploration


Abstract

Artificial intelligence has made great strides since the deep learning revolution, but AI systems remain incapable of learning principles and rules that would allow them to extrapolate beyond their training data to new situations. For inspiration we look to the domain of science, where scientists have been able to develop theories which show remarkable ability to extrapolate and sometimes even predict the existence of phenomena which have never been observed before. According to David Deutsch, this type of extrapolation, which he calls "reach", is due to scientific theories being hard to vary. In this work we investigate Deutsch's hard-to-vary principle and how it relates to more formalized principles in deep learning such as the bias-variance trade-off and Occam's razor. We distinguish internal variability, the degree to which a model or theory can be varied internally while still yielding the same predictions, from external variability, the degree to which a model must be varied to predict new, out-of-distribution data. We discuss how to measure internal variability using the notion of the Rashomon set and how to measure external variability using Kolmogorov complexity. We explore what role hard-to-vary explanations play in intelligence by looking at the human brain, the only known example of highly general-purpose intelligence. We distinguish two learning systems in the brain: the first operates similarly to deep learning and likely underlies most of perception, while the second is a more creative system capable of generating hard-to-vary models and explanations of the world. We make contact with Popperian epistemology, which suggests that the generation of scientific theories is not an inductive process but rather an evolutionary one that proceeds through conjecture and refutation. We argue that figuring out how to replicate this second system, which is capable of generating hard-to-vary explanations, is a key challenge that must be solved in order to realize artificial general intelligence. (C) 2020 Elsevier B.V. All rights reserved.
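The abstract names two measurement ideas: the Rashomon set (all models whose loss is within some tolerance of the best, a proxy for how easy a model class is to vary internally) and Kolmogorov complexity (a proxy for how compactly a theory describes data). Below is a minimal, self-contained Python sketch illustrating both on toy data; it is not code from the paper. The linear hypothesis grid, the epsilon tolerance, and the variable names are all illustrative assumptions, and since true Kolmogorov complexity is uncomputable, compressed length serves only as a crude upper-bound stand-in.

```python
import bz2
from itertools import product

import numpy as np

# Toy data generated by a simple, hard-to-vary rule: y = 2x + noise.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=50)
y = 2.0 * X + rng.normal(0, 0.1, size=50)

def mse(w: float, b: float) -> float:
    """Mean squared error of the linear model y = w*x + b on the toy data."""
    return float(np.mean((w * X + b - y) ** 2))

# 1) Internal variability via a Rashomon set: enumerate a small grid of
#    linear models and collect all of them whose loss is within epsilon
#    of the best. A large set means many distinct models predict equally
#    well, i.e. the class is "easy to vary" internally.
grid = np.linspace(-3, 3, 61)
losses = {(w, b): mse(w, b) for w, b in product(grid, grid)}
best = min(losses.values())
epsilon = 0.05  # illustrative tolerance
rashomon_set = [wb for wb, loss in losses.items() if loss <= best + epsilon]
print("Rashomon ratio:", len(rashomon_set) / len(losses))

# 2) External variability via a Kolmogorov-complexity proxy: compressed
#    length in bytes stands in for K(s). A hard-to-vary description (the
#    generating rule) compresses far better than the raw observations
#    it explains.
def complexity_proxy(s: str) -> int:
    return len(bz2.compress(s.encode()))

rule = "y = 2*x + gaussian_noise(0, 0.1)"
raw = ",".join(f"{xi:.4f}:{yi:.4f}" for xi, yi in zip(X, y))
print("K(rule) ~", complexity_proxy(rule), "bytes")
print("K(raw data) ~", complexity_proxy(raw), "bytes")
```

Under these assumptions, a small Rashomon ratio and a short compressed description together gesture at the abstract's claim: a good explanation is one that cannot be varied much without losing its predictions, yet describes the data far more compactly than the data itself.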
