Using Imagery to Simplify Perceptual Abstraction in Reinforcement Learning Agents

Abstract

In this paper, we consider the problem of reinforcement learning in spatial tasks. These tasks have many states that can be aggregated together to improve learning efficiency. In an agent, this aggregation can take the form of selecting appropriate perceptual processes to arrive at a qualitative abstraction of the underlying continuous state. However, for arbitrary problems, an agent is unlikely to have the perceptual processes necessary to discriminate all relevant states in terms of such an abstraction. To help compensate for this, reinforcement learning can be integrated with an imagery system, where simple models of physical processes are applied within a low-level perceptual representation to predict the state resulting from an action. Rather than abstracting the current state, abstraction can be applied to the predicted next state. Formally, it is shown that this integration broadens the class of perceptual abstraction methods that can be used while preserving the underlying problem. Empirically, it is shown that this approach can be used in complex domains, and can be beneficial even when formal requirements are not met.
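The core idea in the abstract — applying the perceptual abstraction to the imagery-predicted next state rather than to the current state — can be sketched as a small afterstate-style learner. Everything here is illustrative, not from the paper: the 1-D corridor task, the `simulate` imagery model, the `abstract` binning function, and all parameter values are assumptions chosen to make the sketch self-contained.

```python
import random
from collections import defaultdict

# Hypothetical spatial task (not from the paper): an agent moves along a
# continuous corridor [0, 10]; reward is given on reaching position >= 9.0.
ACTIONS = (-1.0, 1.0)   # move left / move right
STEP = 0.5

def simulate(state, action):
    """Imagery model: predict the next continuous state for an action."""
    return min(max(state + STEP * action, 0.0), 10.0)

def abstract(state):
    """Perceptual abstraction: map a continuous state to a coarse region."""
    return int(state)

def greedy_action(q, state):
    # Abstraction is applied to each *predicted* next state, not the current
    # one: actions are ranked by the value of the abstract region the imagery
    # model says they would lead to.
    values = {a: q[abstract(simulate(state, a))] for a in ACTIONS}
    best = max(values.values())
    return random.choice([a for a in ACTIONS if values[a] == best])

def run_episode(q, alpha=0.5, gamma=0.9, epsilon=0.1, start=5.0, steps=300):
    state = start
    for _ in range(steps):
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = greedy_action(q, state)
        next_state = simulate(state, action)        # imagery prediction
        reward = 1.0 if next_state >= 9.0 else 0.0
        # Afterstate-style update on the value of the abstract region reached.
        target = reward + gamma * max(
            q[abstract(simulate(next_state, a))] for a in ACTIONS)
        key = abstract(next_state)
        q[key] += alpha * (target - q[key])
        state = next_state
        if reward:
            break

random.seed(0)
q = defaultdict(float)
for _ in range(80):
    run_episode(q)
```

Because values are stored over abstract regions of *predicted* states, the agent never needs a perceptual process that abstracts its current continuous state directly; the imagery model plus a single abstraction function suffices to compare actions.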
