In the field of human-robot interaction, the robot is no longer considered a tool but a partner that supports the work of humans. Environments featuring interaction and collaboration between humans and robots pose a number of challenges involving robot learning and interactive capabilities. To operate in these environments, the robot must not only be able to act, but also to interact and, especially, to "understand".

This thesis proposes a unified probabilistic framework that allows a robot to develop basic cognitive skills essential for collaboration. To this aim we embrace the idea of motor simulation, well established in cognitive science and neuroscience, in which the robot re-enacts in simulation the internal models it uses for physically performing actions. This view offers the possibility of unifying apparently distinct cognitive phenomena such as learning, interaction, understanding and dialogue, to name a few. The ideas presented here are corroborated by experimental results obtained both in simulation and on a humanoid robotic platform.

The first contribution in this direction is a robust Bayesian method to estimate (i.e., learn) the parameters of internal models by observing other skilled actors performing goal-directed actions. In addition to deriving a theoretically sound solution to the learning problem, our approach establishes theoretical links between Bayesian inference and gradient-based optimization methods. Using the expectation propagation (EP) algorithm, a similar algorithm is derived for the multiple-internal-models scenario.

Once learned, the internal models are reused in simulation to "understand" actions performed by other actors, a necessary precondition for successful interaction.
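The link between Bayesian inference and gradient-based optimization mentioned above can be illustrated with a minimal, purely hypothetical example (the model, noise levels and learning rate below are illustrative, not the thesis's actual internal models): for a one-parameter linear forward model with a Gaussian prior, gradient ascent on the log-posterior converges to the same value as the closed-form Bayesian posterior mean.

```python
import random, math

# Hypothetical 1D forward model: y = theta * u + Gaussian noise.
# We estimate theta from observed (u, y) pairs, illustrating how
# the conjugate Bayesian MAP/posterior-mean estimate coincides with
# the fixed point of gradient ascent on the log-posterior.

random.seed(0)
theta_true = 1.5
sigma2 = 0.25                     # observation noise variance
prior_mu, prior_var = 0.0, 1.0    # Gaussian prior on theta

data = []
for _ in range(200):
    u = random.uniform(-1, 1)
    y = theta_true * u + random.gauss(0, math.sqrt(sigma2))
    data.append((u, y))

# Closed-form Gaussian posterior mean (conjugate update).
precision = 1.0 / prior_var + sum(u * u for u, _ in data) / sigma2
post_mean = (prior_mu / prior_var
             + sum(u * y for u, y in data) / sigma2) / precision

# Gradient ascent on the log-posterior converges to the same value.
theta, lr = 0.0, 0.01
for _ in range(2000):
    grad = -(theta - prior_mu) / prior_var \
           + sum(u * (y - theta * u) for u, y in data) / sigma2
    theta += lr * grad / len(data)

print(round(post_mean, 3), round(theta, 3))  # the two estimates agree
```

The same correspondence is what makes gradient-style updates interpretable as approximate Bayesian learning in richer models.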
We have proposed that action understanding can be cast as approximate Bayesian inference, in which the covert activity of internal models produces hypotheses that are tested in parallel through a sequential Monte Carlo approach. Here, approximate Bayesian inference is offered as a plausible mechanistic implementation of the idea of motor simulation, making it feasible in real time and with limited resources.

Finally, we have investigated how the robot can learn a grounded language model in order to be bootstrapped into communication. Features extracted from the learned internal models, as well as descriptors of various perceptual categories, are fed into a novel multi-instance semi-supervised learning algorithm able to perform semantic clustering and to associate words, either nouns or verbs, with their grounded meaning.
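The parallel-hypothesis-testing view of action understanding can be sketched as follows, under strong simplifying assumptions (the two one-step "internal models", their names, and the noise level are all hypothetical stand-ins for the thesis's learned models): each candidate model covertly predicts the next observation, and the models are re-weighted online by their prediction likelihood, as in a sequential Bayesian filter.

```python
import math, random

# Minimal sketch: action understanding as sequential hypothesis testing.
# Each hypothetical internal model predicts the next observation; models
# are re-weighted by how well they predict, so the posterior concentrates
# on the model that best explains the observed actor.

random.seed(1)

models = {                           # hypothetical one-step forward models
    "reach": lambda x: x + 0.10,     # steady approach toward a goal
    "withdraw": lambda x: x - 0.10,  # movement away from the goal
}

sigma = 0.05
weights = {name: 1.0 / len(models) for name in models}  # uniform prior

x = 0.0
for _ in range(20):
    # The observed actor is actually reaching (plus sensory noise).
    x_next = models["reach"](x) + random.gauss(0, sigma)
    for name, f in models.items():
        err = x_next - f(x)          # prediction error of this hypothesis
        weights[name] *= math.exp(-err * err / (2 * sigma * sigma))
    total = sum(weights.values())
    weights = {n: w / total for n, w in weights.items()}
    x = x_next

best = max(weights, key=weights.get)
print(best, round(weights[best], 3))  # posterior mass concentrates on "reach"
```

A full sequential Monte Carlo treatment would additionally propagate a population of state particles per hypothesis; the sketch keeps only the model-level re-weighting to show the mechanism.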
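The flavour of the word-grounding problem can be conveyed with a toy cross-situational example (the scenes, words and features below are invented for illustration and are far simpler than the thesis's multi-instance semi-supervised algorithm): each scene pairs uttered words with a bag of perceptual features, so supervision is ambiguous, yet co-occurrence statistics recover the word-meaning associations.

```python
from collections import defaultdict

# Toy cross-situational grounding: each "scene" pairs uttered words with
# a bag of perceptual features. We know a word occurred, not which feature
# it names (multi-instance ambiguity); co-occurrence counts resolve it.

scenes = [
    (["ball", "push"], {"round", "moving"}),
    (["ball"], {"round", "red"}),
    (["push"], {"moving", "contact"}),
    (["cup"], {"concave", "red"}),
    (["cup", "push"], {"concave", "moving"}),
]

counts = defaultdict(lambda: defaultdict(int))
for words, feats in scenes:
    for w in words:
        for f in feats:
            counts[w][f] += 1

# Each word is grounded in the feature it co-occurs with most often.
meaning = {w: max(fs, key=fs.get) for w, fs in counts.items()}
print(meaning)
```

The actual algorithm in the thesis additionally performs semantic clustering and handles verbs grounded in internal-model features, which simple counting cannot capture.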