In this paper, we propose a hierarchical reinforcement learning method that enables a learner to acquire tasks in a high-dimensional state space. In the upper level, the learner coarsely explores the low-dimensional state space; in the lower level, it finely explores the high-dimensional state space. Specifically, the learner learns to set appropriate subgoals for the task in the upper level, and learns to achieve those subgoals in the lower level. As an example task, we choose a stand-up task for a two-joint, three-link robot with a ten-dimensional state space. The robot learns to find subgoal postures in the upper level, and to achieve these subgoal postures in the lower level. Simulation results show that the hierarchical architecture accelerates the robot's learning to stand up.
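The two-level scheme described above can be illustrated with a minimal sketch on a toy problem. Everything below is an illustrative assumption, not the paper's actual stand-up task: the chain environment, the coarse projection, the subgoal set, and the reward shaping are all stand-ins chosen only to show the upper level choosing subgoals over a coarse state while the lower level learns to reach them in the full state.

```python
import random

random.seed(0)

# Toy stand-in for the stand-up task: a chain of 10 "postures" where
# posture 9 plays the role of "standing up". This environment and all
# names below are illustrative assumptions, not the paper's robot model.
N = 10
GOAL = N - 1
SUBGOALS = [4, 9]            # candidate subgoal postures for the upper level
ACTIONS = [-1, +1]
ALPHA, GAMMA, EPS = 0.5, 0.95, 0.2

def coarse(s):
    """Low-dimensional projection of the full state (here: which half)."""
    return s // 5

# Upper level: Q-values over (coarse state, subgoal).
q_up = {(c, g): 0.0 for c in range(2) for g in SUBGOALS}
# Lower level: Q-values over (full state, current subgoal, action).
q_lo = {(s, g, a): 0.0 for s in range(N) for g in SUBGOALS for a in ACTIONS}

def pick(qtab, keys, eps):
    """Epsilon-greedy selection among candidate table keys."""
    if random.random() < eps:
        return random.choice(keys)
    return max(keys, key=lambda k: qtab[k])

for episode in range(500):
    s, steps = 0, 0
    while s != GOAL and steps < 100:
        c = coarse(s)
        g = pick(q_up, [(c, gg) for gg in SUBGOALS], EPS)[1]
        t = 0
        # Lower level explores the full state space to reach subgoal g.
        while s != g and steps < 100:
            a = pick(q_lo, [(s, g, aa) for aa in ACTIONS], EPS)[2]
            s2 = min(max(s + a, 0), N - 1)
            r_int = 1.0 if s2 == g else -0.01   # intrinsic reward: reach subgoal
            best = 0.0 if s2 == g else max(q_lo[(s2, g, aa)] for aa in ACTIONS)
            q_lo[(s, g, a)] += ALPHA * (r_int + GAMMA * best - q_lo[(s, g, a)])
            s = s2
            steps += 1
            t += 1
        if t == 0:
            steps += 1                           # charge a no-op subgoal choice
        # Upper level is rewarded only for the external task goal.
        r_ext = 1.0 if s == GOAL else 0.0
        c2 = coarse(s)
        best_up = 0.0 if s == GOAL else max(q_up[(c2, gg)] for gg in SUBGOALS)
        q_up[(c, g)] += ALPHA * (r_ext + GAMMA ** max(t, 1) * best_up - q_up[(c, g)])

# Greedy rollout with the learned two-level policy.
s, steps = 0, 0
for _ in range(6):                               # at most 6 subgoal selections
    if s == GOAL:
        break
    c = coarse(s)
    g = max(SUBGOALS, key=lambda gg: q_up[(c, gg)])
    for _ in range(15):
        if s == g:
            break
        a = max(ACTIONS, key=lambda aa: q_lo[(s, g, aa)])
        s = min(max(s + a, 0), N - 1)
        steps += 1
print("reached standing posture:", s == GOAL)
```

The design point the sketch mirrors is the division of labor: the upper level's table is indexed only by the coarse state, so its search space stays small, while the lower level's table covers the full state but each of its subtasks (reach one subgoal) is short-horizon and therefore easy to explore.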