ACM Transactions on Autonomous and Adaptive Systems

Hyper-Learning Algorithms for Online Evolution of Robot Controllers



Abstract

A long-standing goal in artificial intelligence and robotics is synthesising agents that can effectively learn and adapt throughout their lifetime. One open-ended approach to behaviour learning in autonomous robots is online evolution, which is part of the evolutionary robotics field of research. In online evolution approaches, an evolutionary algorithm is executed on the robots during task execution, which enables continuous optimisation and adaptation of behaviour. Despite the potential for automatic behaviour learning, online evolution has not been widely adopted because it often requires several hours or days to synthesise solutions to a given task. In this respect, research in the field has failed to develop a prevalent algorithm able to effectively synthesise solutions to a large number of different tasks in a timely manner. Rather than focusing on a single algorithm, we argue for more general mechanisms that can combine the benefits of different algorithms to increase the performance of online evolution of robot controllers. We conduct a comprehensive assessment of a novel approach called online hyper-evolution (OHE). Robots executing OHE use the different sources of feedback information traditionally associated with controller evaluation to find effective evolutionary algorithms during task execution. First, we study two approaches: OHE-fitness, which uses the fitness score of controllers as the criterion to select promising algorithms over time, and OHE-diversity, which relies on the behavioural diversity of controllers for algorithm selection. We then propose a novel class of techniques called OHE-hybrid, which combine diversity and fitness to search for suitable algorithms. 
In addition to their effectiveness at selecting suitable algorithms, the different OHE approaches are evaluated for their ability to construct algorithms by controlling which algorithmic components (e.g., mutation and crossover) should be employed for controller generation, an unprecedented approach in evolutionary robotics. Results show that OHE (i) facilitates the evolution of controllers with high performance, (ii) can increase effectiveness at different stages of evolution by combining the benefits of multiple algorithms over time, and (iii) can be effectively applied to construct new algorithms during task execution. Overall, our study shows that OHE is a powerful new paradigm that allows robots to improve their learning process as they operate in the task environment.
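The abstract describes OHE-fitness as using controllers' fitness scores to select among candidate evolutionary algorithms during task execution. The sketch below illustrates that idea in a minimal, hypothetical form: two variation operators compete, and the one whose offspring have recently produced larger fitness gains is preferred (with some exploration). The function names, the epsilon-greedy credit scheme, and the toy task are all illustrative assumptions, not the paper's implementation.

```python
import random

def mutate(genome, sigma=0.2):
    """Gaussian mutation of a real-valued controller genome."""
    return [g + random.gauss(0.0, sigma) for g in genome]

def crossover_then_mutate(genome, pool, sigma=0.2):
    """Uniform crossover with a random partner, followed by mutation."""
    partner = random.choice(pool)
    child = [random.choice(pair) for pair in zip(genome, partner)]
    return mutate(child, sigma)

def fitness(genome):
    # Toy stand-in for a robot task score: closer to the origin is better.
    return -sum(g * g for g in genome)

def ohe_fitness(generations=300, pop_size=10, dim=3, epsilon=0.2, seed=42):
    random.seed(seed)
    population = [[random.uniform(-5, 5) for _ in range(dim)]
                  for _ in range(pop_size)]
    operators = {
        "mutation": lambda g, pool: mutate(g),
        "crossover": lambda g, pool: crossover_then_mutate(g, pool),
    }
    # Running estimate of the fitness gain each operator delivers.
    credit = {name: 0.0 for name in operators}

    for _ in range(generations):
        parent = max(population, key=fitness)
        # Epsilon-greedy selection: usually pick the best-credited operator.
        if random.random() < epsilon:
            name = random.choice(list(operators))
        else:
            name = max(credit, key=credit.get)
        child = operators[name](parent, population)
        gain = fitness(child) - fitness(parent)
        credit[name] = 0.9 * credit[name] + 0.1 * gain  # exponential average
        # Replace the worst controller if the child improves on it.
        worst = min(population, key=fitness)
        if fitness(child) > fitness(worst):
            population[population.index(worst)] = child

    return max(population, key=fitness), credit

best, credit = ohe_fitness()
```

OHE-diversity and OHE-hybrid would differ only in the feedback signal driving `credit` (behavioural diversity, or a combination of diversity and fitness), leaving the selection loop unchanged.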
