Point cloud information is a convenient means to accomplish grasp detection in autonomous robotic grasping. However, the performance of various state-of-the-art grasp detection methods that use point clouds decreases significantly when the available point cloud information is incomplete and the background is complex, i.e. heterogeneous. To solve this problem, we propose a robust grasp detection method that demonstrates higher performance, especially with incomplete point clouds and complex backgrounds. We introduce a novel technique named the 'visible point cloud' - generated using the point cloud and the pose (position and orientation) information of the sensor(s) - that helps to eliminate unsafe grasp candidates quickly and efficiently. The remaining grasp candidates are then classified, and the best candidate is determined using a cost function. The effectiveness of the proposed method is demonstrated experimentally using a 6-DoF robot arm equipped with a two-finger gripper in three different background settings: (a) steps, (b) pillars, and (c) table top. The results show that the proposed method is 1.20 times faster and achieves a 20% higher grasp success rate than a state-of-the-art method when using a single point-cloud camera. The results demonstrate that the proposed method significantly improves the performance of autonomous grasping even with incomplete point clouds and is robust across different backgrounds.
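The abstract does not specify how the visible point cloud is constructed from the sensor pose; as a minimal illustrative sketch only (not necessarily the authors' construction), one simple visibility test keeps points whose estimated surface normal faces the sensor position. All names and the normal-based criterion here are assumptions for illustration:

```python
import numpy as np

def visible_points(points, normals, cam_pos):
    """Illustrative visibility culling (assumed approach, not the paper's).

    points:  (N, 3) array of 3D points
    normals: (N, 3) array of outward unit surface normals
    cam_pos: (3,) sensor position in the same frame

    A point is kept when its normal has a positive component along the
    direction from the point toward the sensor, i.e. it faces the sensor.
    """
    view_dirs = cam_pos - points                      # point-to-sensor vectors
    facing = np.einsum('ij,ij->i', normals, view_dirs) > 0.0
    return points[facing]

# Two points on a unit sphere: one faces a sensor on the +z axis, one faces away.
pts = np.array([[0.0, 0.0, 1.0], [0.0, 0.0, -1.0]])
nrm = np.array([[0.0, 0.0, 1.0], [0.0, 0.0, -1.0]])
cam = np.array([0.0, 0.0, 5.0])
print(len(visible_points(pts, nrm, cam)))  # 1
```

With multiple sensors, the same test could be repeated per sensor pose and the results merged; the paper itself should be consulted for the actual construction.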