An imitation game for learning action categories is proposed. It serves to invent and share a repertoire of action categories in a population of robotic agents. The agents start without any built-in action categories, but they learn by imitating other agents and gradually invent categories for the actions they observe and execute. In other words, no action categories need to be defined beforehand, nor do agents have to be assigned fixed roles. If the population consists of only two agents, the repertoire of action categories can be learnt without the agents exchanging feedback about the outcome of each game.
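The dynamics described above can be sketched in a minimal simulation. Everything here is an illustrative assumption rather than the paper's actual mechanism: actions are modelled as points in a 1-D space, each category as a stored prototype, and a new category is invented whenever an observed action falls outside a fixed distance threshold of every existing prototype. Note that no success signal is ever passed between the agents.

```python
import random

class Agent:
    """Agent with an open-ended repertoire of action categories.

    Hypothetical model: an action is a point in [0, 1] and a category
    is a prototype in that space (not the paper's representation).
    """
    def __init__(self, threshold=0.15):
        self.prototypes = []        # learnt action categories
        self.threshold = threshold  # max distance to an existing category

    def categorize(self, action):
        """Return the nearest category's index, inventing one if none is close."""
        if self.prototypes:
            idx = min(range(len(self.prototypes)),
                      key=lambda i: abs(self.prototypes[i] - action))
            if abs(self.prototypes[idx] - action) <= self.threshold:
                # nudge the prototype toward the observed action
                self.prototypes[idx] += 0.1 * (action - self.prototypes[idx])
                return idx
        self.prototypes.append(action)  # invent a new category
        return len(self.prototypes) - 1

def imitation_game(initiator, imitator, rng):
    """One round: the initiator performs an action, the imitator
    categorizes and reproduces it; no explicit feedback is exchanged."""
    if not initiator.prototypes or rng.random() < 0.05:
        action = rng.random()           # occasionally explore a novel action
    else:
        action = rng.choice(initiator.prototypes)
    cat = imitator.categorize(action)
    echo = imitator.prototypes[cat]     # the imitator's reproduction
    initiator.categorize(echo)          # the initiator observes the imitation

rng = random.Random(0)
a, b = Agent(), Agent()
for _ in range(500):
    imitation_game(a, b, rng)           # agents alternate roles,
    imitation_game(b, a, rng)           # so neither role is fixed
```

Because both agents keep categorizing each other's reproductions, their prototype sets tend to drift toward one another, which is one plausible way a shared repertoire can emerge without outcome feedback.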