Laser scanners are widely used with mobile robots for simultaneous localization and mapping (SLAM). The laser (LIDAR) data can be used to build either a dense map or a feature-based map in which the features are extracted from the scans. On the other hand, diverse approaches exist that use one or several cameras to extract features from the environment. In this paper, an approach that combines these sensors in a unified framework is proposed. The combination is achieved by fusing the sensor data as well as by incorporating both grid-based and feature-based map representations.