This paper presents a handheld vision-based 3D scanner for small objects using Kinect. It differs from previous color-glove-based approaches, which require segmenting the target object. First, we eliminate the noise and outliers caused by the holding hands. Second, we apply the KinectFusion algorithm with a truncated signed distance function (TSDF) to represent 3D surfaces. Third, we propose a modified integration strategy to eliminate the hand effect. Fourth, we exploit the parallel computation of GPUs for real-time operation. The major contributions of this paper are (1) improved registration precision, (2) no need for offline amendment or loop-closure operations, and (3) feasible reconstruction of concave 3D objects.

Index Terms—Handheld 3D scanning, KinectFusion, truncated signed distance function (TSDF).

1. Introduction

Recently, sensor-based 3D model reconstruction methods have been proposed [1]. Because sensor devices have different properties, the 3D reconstruction algorithms vary accordingly. The commonly used sensors are time-of-flight (ToF) cameras [2]-[4], laser scanners [5], and structured-light scanners [6], [7]. Lasers have gained a reputation for accuracy; however, care must be taken to use eye-safe lasers when operating in proximity to humans. For an interactive system, the structured-light scanner, an active vision-based sensor device, is superior because it provides a 2D depth image per frame and is more accurate than a ToF camera. Here, we present a real-time 3D scanner using the depth images captured by Kinect.
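The TSDF representation mentioned above fuses each new depth observation into a voxel grid with a truncated signed distance and a weighted running average, as in KinectFusion. The following is a minimal NumPy sketch of that integration step; the truncation distance, function names, and per-voxel update rule shown here are illustrative assumptions, not the paper's exact implementation (which also handles the hand-removal strategy and runs on the GPU).

```python
import numpy as np

TRUNC = 0.03  # assumed truncation distance in meters (illustrative value)

def truncated_sdf(sdf, trunc=TRUNC):
    """Clamp a signed distance to [-trunc, trunc] and normalize to [-1, 1]."""
    return np.clip(sdf / trunc, -1.0, 1.0)

def integrate(tsdf, weight, sdf_obs, w_obs=1.0, trunc=TRUNC):
    """Fuse one depth observation into the voxel grid.

    tsdf, weight : per-voxel accumulated TSDF values and weights
    sdf_obs      : signed distance of each voxel to the observed surface
                   (positive in front of the surface, negative behind it)
    """
    f = truncated_sdf(sdf_obs, trunc)
    # Voxels far behind the observed surface are occluded and left untouched.
    valid = sdf_obs > -trunc
    new_weight = weight + w_obs
    # Weighted running average: F <- (W*F + w*f) / (W + w).
    tsdf = np.where(valid, (weight * tsdf + w_obs * f) / new_weight, tsdf)
    weight = np.where(valid, new_weight, weight)
    return tsdf, weight

# Example: fuse one observation into an empty 3-voxel grid.
tsdf, weight = integrate(np.zeros(3), np.zeros(3),
                         np.array([0.05, 0.0, -0.05]))
```

The zero crossing of the fused TSDF gives the reconstructed surface; averaging over many frames is what reduces per-frame depth noise.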