To effectively behave within ever-changing environments, biological agents must learn and act at varying hierarchical levels such that a complex task may be broken down into more tractable subtasks. Hierarchical reinforcement learning (HRL) is a computational framework that provides an understanding of this process by combining sequential actions into one temporally extended unit called an option. However, there are still open questions within the HRL framework, including how options are formed and how HRL mechanisms might be realized within the brain. In this review, we propose that the existing human motor sequence literature can aid in understanding both of these questions. We give specific emphasis to visuomotor sequence learning tasks such as the discrete sequence production task and the M × N (M steps × N sets) task to understand how hierarchical learning and behavior manifest across sequential action tasks, as well as how the dorsal cortical–subcortical circuitry could support this kind of behavior. This review highlights how motor chunks within a motor sequence can function as HRL options. Furthermore, we aim to merge findings from the motor sequence literature with reinforcement learning perspectives to inform experimental design in each respective subfield.
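The notion of an option as a temporally extended unit of action can be made concrete with a minimal sketch. The structure below follows the standard options framework, in which an option is defined by an initiation set, an intra-option policy, and a termination condition; all names and the toy "motor chunk" example are illustrative assumptions, not drawn from the review itself.

```python
from dataclasses import dataclass
from typing import Callable, Set, List, Tuple

# Hypothetical sketch of an HRL option: an initiation set of states where
# the option may begin, a policy mapping states to primitive actions, and
# a termination condition signaling when control returns to the top level.

@dataclass
class Option:
    initiation_set: Set[str]            # states in which the option may start
    policy: Callable[[str], str]        # state -> primitive action
    termination: Callable[[str], bool]  # True when the option ends

def run_option(option: Option, state: str,
               step: Callable[[str, str], str]) -> Tuple[List[str], str]:
    """Execute the option's primitive actions until it terminates."""
    actions: List[str] = []
    while not option.termination(state):
        action = option.policy(state)
        actions.append(action)
        state = step(state, action)
    return actions, state

# Toy example: a learned motor chunk treated as a single option that
# unfolds a three-press key sequence before terminating.
states = ["s0", "s1", "s2", "done"]
step = lambda s, a: states[states.index(s) + 1]  # deterministic transition
chunk = Option(
    initiation_set={"s0"},
    policy=lambda s: f"press_{s}",
    termination=lambda s: s == "done",
)
actions, final = run_option(chunk, "s0", step)
# actions == ['press_s0', 'press_s1', 'press_s2'], final == 'done'
```

From the agent's perspective, selecting `chunk` is one decision, even though it emits three primitive presses, which is the sense in which a motor chunk can stand in for an HRL option.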