In this work, we address the problem of implementing cooperative search in humanoid robots (NAOs). The robots are taught to recognise a number of objects and then use RGB-D sensors attached to their heads to search the environment for them. When an object is found, a robot must move to the target position and perform a cleaning task on the object (the locations of recognised objects can also aid navigation). The challenge is threefold: 1) navigation/exploration, 2) real-time object recognition, and 3) cooperation. This work presents preliminary results in object recognition and briefly discusses the approaches that will be employed for the complete system.

The scenario consists of a room in which several objects are spread (Figure 1, left). Initially, the robots (Figure 1, right) explore the environment until their RGB-D devices detect objects; they then move towards the objects while continuously attempting to identify them. Our approach assumes that the objects rest on tables or other planar surfaces, which helps to detect potential objects before recognising them. Each robot must extract the appropriate information from the point cloud, filter noise, and correctly segment the objects. We use RANSAC (RAndom SAmple Consensus) [1] to identify planes and VFH (Viewpoint Feature Histogram) to compute a multidimensional descriptor (a feature vector with 308 elements) that characterises each object.

The robots are initially completely unaware of the objects' positions, so they cannot plan an optimal cooperative distribution over the environment; instead, each robot models the influence of the others as repulsive potential fields, in a purely reactive, yet collaborative, way. Although a navigation approach based on RGB-D data is intended in the long run, we initially employ a ceiling camera (also used in [2]) that provides a global view of the environment.
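To illustrate the plane-identification step, the following is a minimal numpy sketch of RANSAC plane fitting on a synthetic "table top" point cloud. It is not the PCL implementation used in practice, and all function and parameter names here are illustrative assumptions; the VFH descriptor computation is likewise omitted, since it depends on library internals.

```python
import numpy as np

def ransac_plane(points, n_iters=200, threshold=0.02, rng=None):
    """Fit a plane n.x + d = 0 to a point cloud with RANSAC.

    Minimal sketch (hypothetical helper, not PCL's API): repeatedly
    samples 3 points, forms a candidate plane, and keeps the plane
    with the most inliers within `threshold` of it.
    """
    rng = np.random.default_rng(rng)
    best_mask, best_plane = None, None
    for _ in range(n_iters):
        # Sample 3 distinct points and form a candidate plane normal.
        p1, p2, p3 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p2 - p1, p3 - p1)
        norm = np.linalg.norm(n)
        if norm < 1e-9:          # degenerate (near-collinear) sample
            continue
        n /= norm
        d = -n @ p1
        # Inliers are points whose distance to the plane is small.
        mask = np.abs(points @ n + d) < threshold
        if best_mask is None or mask.sum() > best_mask.sum():
            best_mask, best_plane = mask, (n, d)
    return best_plane, best_mask

# Synthetic cloud: 200 noisy points on the z = 0 "table" plus 40 outliers.
rng = np.random.default_rng(0)
table = np.column_stack([rng.uniform(-1, 1, (200, 2)),
                         rng.normal(0, 0.005, 200)])
clutter = rng.uniform([-1, -1, 0.1], [1, 1, 1], (40, 3))
cloud = np.vstack([table, clutter])

(normal, d), inliers = ransac_plane(cloud, rng=0)
print(abs(normal[2]))   # close to 1: the recovered normal is vertical
print(inliers.sum())    # most of the 200 table points are inliers
```

Once the dominant plane is found, the points above it (the complement of the inlier mask, filtered by height) are candidate object clusters to be passed to the recognition stage.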
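The reactive coordination scheme can be sketched as follows: each robot sums a classic repulsive potential-field force over its peers, so nearby robots push each other towards unexplored regions. This is an illustrative assumption about the field shape (an inverse-distance repulsion with a finite influence radius); gains and parameter names are hypothetical.

```python
import numpy as np

def repulsive_force(own_pos, peer_positions, gain=1.0, influence=2.0):
    """Sum of repulsive forces exerted on a robot by its peers.

    Each peer closer than `influence` metres contributes the gradient of
    the repulsive potential U = 0.5 * gain * (1/d - 1/influence)^2,
    pushing the robot directly away from that peer. Sketch only.
    """
    force = np.zeros(2)
    for peer in peer_positions:
        diff = own_pos - peer
        dist = np.linalg.norm(diff)
        if 1e-9 < dist < influence:
            # Magnitude grows sharply as the peer gets closer.
            mag = gain * (1.0 / dist - 1.0 / influence) / dist**2
            force += mag * diff / dist
    return force

# A robot flanked by peers at (1, 0) and (0, 1) is pushed away from both.
f = repulsive_force(np.array([0.0, 0.0]),
                    [np.array([1.0, 0.0]), np.array([0.0, 1.0])])
print(f)   # → [-0.5 -0.5]
```

In the full system this repulsive term would be combined with an attractive term towards detected objects or exploration frontiers; only the peer-repulsion part is shown here.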