Abstract: For a robot to operate intelligently under sensor control in the real world, it must interpret its sensory inputs to create a model of its environment. Because the data are generally incomplete and contain errors, additional knowledge must be applied to derive useful interpretations. One source of such knowledge is the sensors themselves: the ideal data returned from a sensor is a function of the environment and of the pose and other parameters of the sensor (such as the focal length of a camera), and the actual result is some corruption of that ideal result by noise or other error processes. In this work, the functions 'computed' by a range sensor are explicitly represented as geometric objects and relationships in the 3D FORM geometric reasoning system. Models for the output of independent edge segmentation and surface segmentation are described and unified into an overall range-object model. In addition, visibility and projection relationships are used to predict the visibility of hypothesized object parts and to refine the estimates of the cameras' positions. The resulting object description can be specialized, completed, and matched with other objects using existing 3D FORM capabilities.