Robot grasping is a critical and difficult problem in robotics. Simply finding a stable grasp is hard enough, but to perform a useful grasp we must also consider other aspects of the task: the object, its properties, and any task-related constraints. The choice of grasping region depends strongly on the category of the object, and the automated prediction of object category is the problem we focus on here. In this paper, we combine manifold information and semantic object parts in a graph kernel to predict the categories of a large variety of household objects such as cups, pots, pans, bottles, and various tools. Similarity-based category prediction is achieved by employing propagation kernels, a recently introduced graph kernel for partially labeled graphs, on graph representations of 3D point clouds of objects. Our work highlights the importance of moving towards structured machine learning approaches in order to achieve the goal of autonomous and intelligent robot grasping: learning to map low-level visual features to good grasping points while taking into account object-task affordances and high-level world knowledge. We evaluate propagation kernels for object category prediction on a (synthetic) dataset of 41 objects in 11 categories and on a dataset of 126 point clouds derived from laser range data, with part labels estimated by a part detector. Further, we point out the benefit of leveraging kernel-based object category distributions for task-dependent robot grasping.
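To make the kernel computation concrete, the following is a minimal sketch of a propagation kernel for partially labeled graphs, under stated assumptions: each graph is a pair (adjacency matrix, node-label array with `-1` marking unlabeled nodes), label distributions are diffused with the row-normalized adjacency, and after each propagation step the per-node distributions are hashed into bins via a random projection so that bin-count histograms can be compared across graphs. The function name, binning scheme, and parameters are illustrative, not the authors' exact implementation.

```python
import numpy as np

def propagation_kernel(graphs, num_iterations=3, num_bins=8, seed=0):
    """Sketch of a propagation kernel on partially labeled graphs.

    graphs: list of (A, y) with A an (n, n) adjacency matrix and y an
    int array of length n; y[i] is a class index or -1 if unlabeled.
    Returns the (len(graphs), len(graphs)) kernel matrix.
    """
    rng = np.random.default_rng(seed)
    num_classes = max(int(y.max()) for _, y in graphs) + 1
    # Random projection direction used to hash label distributions to bins.
    w = rng.normal(size=num_classes)
    lo, scale = w.min(), max(w.max() - w.min(), 1e-12)

    dists, trans = [], []
    for A, y in graphs:
        # One-hot distributions for labeled nodes, uniform for unlabeled.
        P = np.full((len(y), num_classes), 1.0 / num_classes)
        labeled = y >= 0
        P[labeled] = np.eye(num_classes)[y[labeled]]
        # Row-normalized adjacency as the propagation operator.
        T = A / np.maximum(A.sum(axis=1, keepdims=True), 1e-12)
        dists.append(P)
        trans.append(T)

    K = np.zeros((len(graphs), len(graphs)))
    for _ in range(num_iterations + 1):
        feats = []
        for P in dists:
            # Project each node's distribution and bin it (simple 1-D LSH).
            proj = (P @ w - lo) / scale
            bins = np.clip((proj * num_bins).astype(int), 0, num_bins - 1)
            feats.append(np.bincount(bins, minlength=num_bins))
        F = np.stack(feats).astype(float)
        # Kernel contribution: inner products of bin-count histograms.
        K += F @ F.T
        # One step of label-distribution propagation on every graph.
        dists = [T @ P for T, P in zip(trans, dists)]
    return K
```

Because the kernel is a sum of explicit-feature Gram matrices, it is positive semidefinite by construction; identical graphs with identical partial labelings receive identical feature vectors at every iteration and therefore maximal mutual similarity.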