Extracting a point cloud of a real object from 3D surface measurement is a common task in applications of computer vision, computer graphics, reverse engineering, and related fields. Usually, a single view is not enough to describe the whole object, and multiple views of the surface are necessary. Typically the views are obtained from multiple 3D scanners, or from a single scanner positioned at different locations and orientations. Each view is represented as a partially overlapping dense point cloud. This gives rise to the problem of surface registration: the different views have to be placed in the same coordinate system. In other words, the objective is to determine a rigid transformation for each partial view that yields the optimal alignment among all of them. In practice, the point-to-point correspondences for the overlapping regions must be computed as part of this process.
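Once point-to-point correspondences between two views are available, the optimal rigid transformation can be obtained in closed form. The following is a minimal sketch, assuming the corresponding pairs are already known, of the standard SVD-based (Kabsch) least-squares solution; the function name and test data are illustrative, not from the text:

```python
import numpy as np

def rigid_transform(P, Q):
    """Estimate rotation R and translation t aligning P onto Q.

    P, Q: (N, 3) arrays of corresponding points; minimizes
    sum_i || R p_i + t - q_i ||^2 (closed-form Kabsch/SVD solution).
    """
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)               # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cQ - R @ cP
    return R, t

# Usage: recover a known rigid motion from synthetic correspondences.
rng = np.random.default_rng(0)
P = rng.standard_normal((100, 3))
angle = 0.5
Rz = np.array([[np.cos(angle), -np.sin(angle), 0.0],
               [np.sin(angle),  np.cos(angle), 0.0],
               [0.0,            0.0,           1.0]])
t_true = np.array([1.0, -2.0, 0.5])
Q = P @ Rz.T + t_true
R, t = rigid_transform(P, Q)
```

In real registration pipelines the correspondences are themselves unknown, so this closed-form step is typically iterated inside a matching loop (as in ICP-style methods) rather than applied once.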