In this paper we describe how an agent in a RoboCup soccer simulation game can adapt its recognition of shootable situations. An agent needs to adapt this recognition to its opponents during a game, because whether a shot will succeed depends on the opponents' interception abilities. When an agent tries to adapt by learning during a game, however, it faces the problem that its own shooting opportunities are limited. We apply LEO (Learning from Experience and Observation), a learning method we proposed for multi-agent environments, to let agents increase their learning opportunities indirectly by observing teammates' shots. LEO combines "Learning from Observation" (LO) and "Learning from Experience" (LE). In experiments with RoboCup Soccer Simulation games, agents using LEO improved the success rate of the shooting action from 0.04 (non-learning) and 0.06 (LE only) to 0.12.
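The core idea can be sketched as follows: a single shot-success model is updated both from the agent's own shot outcomes (LE) and from observed teammate shots (LO), so the learner accumulates training samples faster than from its own shots alone. This is a minimal illustrative sketch, not the paper's actual method; the `ShotModel`, its distance feature, and the neighborhood-based estimate are all hypothetical simplifications.

```python
class ShotModel:
    """Toy shot-success estimator (hypothetical): stores
    (distance-to-goal, succeeded) samples and estimates the
    success rate near a query distance."""

    def __init__(self):
        self.samples = []  # list of (distance, succeeded) pairs

    def update(self, distance, succeeded):
        self.samples.append((distance, bool(succeeded)))

    def success_rate(self, distance, tol=2.0):
        # Estimate from samples whose distance is within `tol`.
        near = [s for d, s in self.samples if abs(d - distance) <= tol]
        return sum(near) / len(near) if near else 0.5  # prior: unknown


class LEOAgent:
    """LEO-style learner sketch: the same model is fed by both
    the agent's own experience (LE) and observation of
    teammates' shots (LO)."""

    def __init__(self):
        self.model = ShotModel()

    def learn_from_experience(self, distance, succeeded):
        # LE: outcome of the agent's own shot.
        self.model.update(distance, succeeded)

    def learn_from_observation(self, distance, succeeded):
        # LO: outcome of a teammate's shot, seen from the field.
        self.model.update(distance, succeeded)


agent = LEOAgent()
agent.learn_from_experience(5.0, True)    # one of the agent's rare shots
agent.learn_from_observation(5.5, True)   # teammates' shots add samples
agent.learn_from_observation(20.0, False)
print(agent.model.success_rate(5.0))   # close-range shots look promising
print(agent.model.success_rate(20.0))  # long-range shots do not
```

The point of the sketch is only that LO feeds the same model as LE, which is how observation can compensate for the scarcity of the agent's own shooting chances.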