The on-line estimation of a robot's position from measurements of self-mapped features is known in the robotics community as the Simultaneous Localization and Mapping (SLAM) problem, one of the fundamental problems in robotics. SLAM consists of incrementally building a consistent map of the environment while, at the same time, localizing the robot within it as it explores its world. In this context, sensors such as laser rangefinders and sonar rings have traditionally been used to perform SLAM; more recently, vision-based systems have also gained great interest in the robotics community. Nevertheless, the use of auditory sensing for performing SLAM has been much less explored. In this work, a sound-based SLAM system using Delayed Inverse-Depth Feature Initialization is proposed, in which sound sources are used as map features. Experimental results from simulations and from a real robot are presented in order to demonstrate the performance of the method.
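To make the core SLAM idea concrete, the following is a minimal sketch (not the paper's method, and simplified to one dimension with a linear range measurement, so an ordinary Kalman filter suffices): a joint state holds the robot position and one landmark position, odometry drives the prediction step, and noisy range measurements to the landmark drive the update step, so the robot is localized and the landmark mapped simultaneously. All names and parameter values here are illustrative assumptions.

```python
import numpy as np

def slam_1d(controls, measurements, q=0.01, r=0.04):
    """Jointly estimate [robot_x, landmark_x] from odometry and ranges.

    q: process-noise variance on robot motion; r: range-measurement
    variance. The landmark is static, so its process noise is zero.
    (Illustrative sketch, not the paper's sound-based system.)
    """
    # Robot starts at a known position 0; landmark is highly uncertain.
    x = np.array([0.0, 0.0])
    P = np.diag([0.0, 100.0])
    F = np.eye(2)                     # static landmark, identity motion Jacobian
    H = np.array([[-1.0, 1.0]])       # predicted range: landmark_x - robot_x
    for u, z in zip(controls, measurements):
        # Predict: robot moves by commanded u; landmark does not move.
        x = x + np.array([u, 0.0])
        P = F @ P @ F.T + np.diag([q, 0.0])
        # Update with the range measurement z.
        innovation = z - (x[1] - x[0])
        S = H @ P @ H.T + r           # innovation covariance
        K = P @ H.T / S               # Kalman gain
        x = x + (K * innovation).ravel()
        P = (np.eye(2) - K @ H) @ P
    return x, P

# Simulated run: robot steps 0.5 per time step toward a landmark at 5.0.
rng = np.random.default_rng(0)
truth = np.cumsum([0.5] * 8)
controls = [0.5] * 8
measurements = [5.0 - t + rng.normal(0.0, 0.2) for t in truth]
x, P = slam_1d(controls, measurements)
print(x)  # estimated [robot_x, landmark_x], near [4.0, 5.0]
```

The joint covariance `P` is the essential ingredient: it correlates robot and landmark errors, so each measurement improves both estimates at once, which is what distinguishes SLAM from mapping with a perfectly known pose.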