This paper presents an approach to global self-localization for autonomous mobile robots using a region- and feature-based neural network. The approach categorizes discrete regions of space using mapped sonar data corrupted by noise from varied sources and at varied ranges. The authors' approach resembles optical character recognition (OCR) in that the mapped sonar data assumes the form of a character unique to that room; hence, an autonomous vehicle is expected to determine which room it is in from sensory data gathered while exploring that room. With the help of receptive fields, some pre-processing, and a robust exploration routine, the solution becomes time-, translation-, and rotation-invariant. The classification rate of this approach is comparable to that of the Kohonen-based approach, and some pros and cons of both approaches are discussed.
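To make the receptive-field idea concrete, the following is a minimal sketch (not the authors' implementation) of how overlapping receptive fields can pool a mapped occupancy grid into a feature vector that is tolerant to small translations; the grid size, field size, and stride are illustrative assumptions.

```python
import numpy as np

def receptive_field_features(grid, field=4, stride=2):
    """Pool an occupancy grid with overlapping square receptive fields.

    Each feature is the mean occupancy inside one receptive field;
    because the fields overlap (stride < field), small translations of
    the mapped sonar data change the features only gradually.
    """
    g = np.asarray(grid, dtype=float)
    rows = range(0, g.shape[0] - field + 1, stride)
    cols = range(0, g.shape[1] - field + 1, stride)
    feats = [g[r:r + field, c:c + field].mean() for r in rows for c in cols]
    return np.array(feats)

# Toy 12x12 "room signature": an L-shaped wall pattern standing in for
# mapped sonar returns (hypothetical data, for illustration only).
room = np.zeros((12, 12))
room[2, 2:9] = 1.0   # horizontal wall segment
room[2:9, 2] = 1.0   # vertical wall segment

f_orig = receptive_field_features(room)
# Shift the map by one cell to mimic a small translation error.
f_shift = receptive_field_features(np.roll(room, 1, axis=1))
```

The resulting feature vectors for the original and shifted maps stay close, which is the property that would let a downstream classifier recognize the room's "character" despite positioning error.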