Training robot or agent behaviors by example is an attractive alternative to coding them directly. However, training complex behaviors can be challenging, particularly when they involve interaction among multiple agents. We present a novel hierarchical learning-from-demonstration system that can train both single-agent and scalable cooperative multiagent behaviors. The methodology applies manual task decomposition to break the complex training problem into simpler parts, then solves the problem by iteratively training each part. We discuss our application of this method to multiagent problems in the humanoid RoboCup competition, and apply the technique to the keepaway soccer problem in the RoboCup Soccer Simulator.