In this paper we present a novel approach to estimating the position of objects tracked by a team of robots. Moving objects are commonly modeled in an egocentric frame of reference, because this is sufficient for most robot tasks, such as following an object, and it is independent of the robot's localization within its environment. For multiple robots to communicate and cooperate, however, they must agree on an allocentric frame of reference. Instead of transforming egocentric models into allocentric ones using self-localization information, we show how relations between different objects within the same camera image can serve as a basis for estimating an object's position. Modeling the spatial relation of objects with respect to stationary objects yields several advantages: a) Errors in feature detection are correlated; the error in the relative positions of objects within a single camera frame is comparatively small. b) The information is independent of robot localization and odometry. c) Object relations can help to detect inconsistent sensor data. We present experimental evidence showing that two non-localized robots can infer the position of an object by communicating on a RoboCup Four-Legged soccer field.
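To illustrate the core idea, the following minimal sketch shows one way a landmark-relative position can be computed without self-localization: if a robot sees two stationary landmarks of known field position in the same camera frame as the ball, a 2D rigid transform (rotation plus translation) fitted to the two landmark correspondences maps the egocentric ball observation directly into field coordinates. The landmark coordinates, function names, and the two-landmark setup are illustrative assumptions, not the paper's exact method.

```python
import math

# Hypothetical allocentric (field) coordinates of two stationary
# landmarks, e.g. the two posts of one goal (values are assumptions).
L1_FIELD = (2700.0, 600.0)
L2_FIELD = (2700.0, -600.0)

def ball_on_field(l1_ego, l2_ego, ball_ego):
    """Map an egocentric ball observation to field coordinates using two
    landmarks seen in the same camera frame; no self-localization needed.

    A 2D rigid transform is estimated from the two landmark
    correspondences and then applied to the ball point.
    """
    # Rotation: angle of the landmark baseline in each frame.
    a_ego = math.atan2(l2_ego[1] - l1_ego[1], l2_ego[0] - l1_ego[0])
    a_field = math.atan2(L2_FIELD[1] - L1_FIELD[1],
                         L2_FIELD[0] - L1_FIELD[0])
    theta = a_field - a_ego
    c, s = math.cos(theta), math.sin(theta)

    def rot(p):
        return (c * p[0] - s * p[1], s * p[0] + c * p[1])

    # Translation: make the rotated first landmark coincide with its
    # known field position.
    r1 = rot(l1_ego)
    tx, ty = L1_FIELD[0] - r1[0], L1_FIELD[1] - r1[1]

    rb = rot(ball_ego)
    return (rb[0] + tx, rb[1] + ty)
```

Because the transform is built only from objects detected in one frame, the correlated detection errors mentioned in point a) largely cancel, and the estimate is unaffected by odometry drift.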