To learn robotic grasping behaviors that combine external vision (e.g., cameras on a robot head) with the tactile perception provided by FingerVision, we develop a grasp adaptation controller that grasps unknown objects with adequate grasping force. FingerVision, proposed by Yamaguchi and Atkeson [1,2], is a vision-based tactile sensor that gives robots both a tactile sensation and visual information about nearby objects. When grasping objects, humans combine vision and tactile perception; in robotics, however, tactile perception is not always considered essential. For example, in recent work on learning robotic grasping with deep learning [3], robots learned to grasp without tactile sensing. This was possible because there is a consistent relation between the pre-grasp state (the visual scene of the object and the gripper, together with the grasping parameters) and the grasp outcome. In this view, tactile sensing is intermediate information that is not strictly necessary for learning grasping behavior.
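To make the idea of grasping with "adequate grasping force" concrete, the sketch below shows one simple way a tactile slip signal (such as the marker-displacement cue a vision-based tactile sensor like FingerVision can provide) could drive force adaptation: raise the commanded grip force when slip is detected, and relax it slowly otherwise so the grasp stays as gentle as possible. This is a minimal illustration under assumed gains and force limits, not the controller developed in this work.

```python
# Hypothetical proportional grasp-force adaptation driven by a slip signal.
# All names, gains, and limits below are illustrative assumptions.

F_MIN, F_MAX = 0.5, 10.0   # allowable grip-force range [N] (assumed)
K_SLIP = 4.0               # force increase per unit of detected slip (assumed gain)
DECAY = 0.02               # slow relaxation toward F_MIN when no slip occurs

def adapt_force(force, slip):
    """One control step: raise force in proportion to detected slip,
    otherwise relax slightly so the grip stays as gentle as possible."""
    if slip > 0.0:
        force += K_SLIP * slip
    else:
        force -= DECAY * (force - F_MIN)
    return min(max(force, F_MIN), F_MAX)

def run_grasp(slip_trace, force=F_MIN):
    """Apply the adaptation law to a sequence of slip measurements and
    return the resulting commanded-force history."""
    history = []
    for slip in slip_trace:
        force = adapt_force(force, slip)
        history.append(force)
    return history

if __name__ == "__main__":
    # Simulated episode: slip spikes early, then the object stabilizes.
    trace = [0.0, 0.3, 0.5, 0.2, 0.0, 0.0, 0.0]
    print(run_grasp(trace))
```

The commanded force ramps up only while slip is observed and decays afterward, so an unknown object is held with roughly the minimum force that prevents it from slipping.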