We address the problem of vision-based grasp affordance learning and prediction on novel objects by proposing a new semi-local shape-based descriptor, the Sliced Pineapple Grid Feature (SPGF). The primary characteristic of the feature is its ability to encode semantically distinct surface structures, such as "walls", "edges", and "rims", that show particular potential as a primer for grasp affordance learning and prediction. When the SPGF feature is combined with a probabilistic grasp affordance learning approach, we achieve grasp success rates of up to 84% for a varied object set spanning three classes and up to 96% for class-specific objects.