Imitative learning has recently piqued the interest of various fields, including neuroscience, cognitive science, and robotics. In computational behavior modeling and development, it promises an accessible framework for rapidly forming behavior models without tedious supervision or reinforcement. Given the availability of low-cost wearable sensors, the robustness of real-time perception algorithms, and the feasibility of archiving large amounts of audio-visual data, it is possible to unobtrusively record the daily activities of a human teacher and his responses to external stimuli. We combine this data acquisition and representation process with statistical learning machinery (hidden Markov models) and discriminative estimation algorithms to form a behavioral model of the human teacher directly from the data set. The resulting system learns audio-visual interactive behavior from the human and his environment to produce an interactive autonomous agent. The agent subsequently exhibits simple audio-visual behaviors that appear coupled to real-world test stimuli.
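To make the role of the hidden Markov model concrete, the following is a minimal sketch of how an HMM scores an observation sequence via the forward algorithm. The two hidden "behavior" states, the three discrete observation symbols, and all parameter values below are illustrative assumptions, not taken from the system described in the abstract.

```python
import numpy as np

# Toy discrete HMM: 2 hidden "behavior" states, 3 observable symbols.
# All parameters are hypothetical, chosen only for illustration.
pi = np.array([0.6, 0.4])            # initial state distribution
A = np.array([[0.7, 0.3],
              [0.4, 0.6]])           # state transition probabilities
B = np.array([[0.5, 0.4, 0.1],
              [0.1, 0.3, 0.6]])      # emission probabilities per state

def forward_likelihood(obs):
    """Forward algorithm: P(obs | model), summing over all hidden paths."""
    alpha = pi * B[:, obs[0]]        # initialize with first observation
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]  # propagate and weight by emission
    return alpha.sum()

print(forward_likelihood([0, 1, 2]))  # prints 0.03628
```

In a full system, such likelihoods would let the agent compare candidate behavior models against an incoming perceptual stream; parameter estimation itself would use Baum-Welch or, as the abstract suggests, a discriminative training criterion.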