Theoretical Computer Science

On the learnability of recursively enumerable languages from good examples



Abstract

The present paper investigates identification of indexed families L of recursively enumerable languages from good examples. We distinguish class-preserving learning from good examples (the good examples have to be generated with respect to a hypothesis space having the same range as L) and class-comprising learning from good examples (the good examples have to be selected with respect to a hypothesis space comprising the range of L). A learner is required to learn a target language on every finite superset of the good examples for it. If the learner's first and only conjecture is correct, then the underlying learning model is referred to as finite identification from good examples; if the learner makes a finite number of incorrect conjectures before always outputting a correct one, the model is referred to as limit identification from good examples. In the context of class-preserving learning, it is shown that finite and limit identification from good text examples coincide in learning power. When class-comprising learning from good text examples is concerned, limit identification is strictly more powerful than finite identification. Furthermore, if learning from good informant examples is considered, limit identification is superior to finite identification in the class-preserving as well as in the class-comprising case. Finally, we relate the models of learning from good examples to one another as well as to the standard learning models in the context of Gold-style language learning. (C) 2001 Elsevier Science B.V. All rights reserved. [References: 21]
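As a toy illustration of the learning model described above (not a construction from the paper), the following Python sketch shows finite identification from good text examples for one simple indexed family, L_i = {0, ..., i} over the natural numbers, where the good examples for L_i are assumed to be {i}. The names language, good_examples, and learner are hypothetical and chosen only for this example; the learner must output a correct index on every finite superset S of the good examples with S contained in the target language.

    # Toy sketch (assumed example, not from the paper): finite identification
    # from good positive examples for the family L_i = {0, 1, ..., i}.

    def language(i):
        """L_i = {0, ..., i}: the i-th member of the indexed family."""
        return set(range(i + 1))

    def good_examples(i):
        """Good text examples for L_i: the single element i suffices here."""
        return {i}

    def learner(sample):
        """Conjecture an index from a finite sample of positive examples.

        Every admissible sample S satisfies good_examples(i) <= S <= L_i,
        so max(S) equals i and the first conjecture is already correct
        (finite identification)."""
        return max(sample)

    if __name__ == "__main__":
        # Check the learner on every finite superset S with
        # good_examples(i) <= S <= L_i for a few target languages.
        from itertools import chain, combinations
        for i in range(5):
            good, lang = good_examples(i), language(i)
            extras = sorted(lang - good)
            supersets = (good | set(c)
                         for c in chain.from_iterable(
                             combinations(extras, r)
                             for r in range(len(extras) + 1)))
            assert all(language(learner(s)) == lang for s in supersets)
        print("learner finitely identifies every L_i from its good examples")

In this family a single good example per language is enough, so finite and limit identification coincide, matching the class-preserving result stated in the abstract; the separation results concern richer families where no such finite learner exists.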

