The paper presents a system that recognizes manipulative hand gestures, such as grasping, moving, holding an object with both hands, and extending or shortening the object, in a virtual world using task knowledge. The authors use two kinds of task knowledge. The first is represented by a state transition diagram, each state of which indicates the gestures possible at the next moment; image features obtained from extracted hand regions are used to decide state transitions. When using a gesture recognition system, a user sometimes moves the hands unintentionally. To handle this, the diagram includes a rest state: all unintentional actions are regarded as taking a rest and are ignored. The system can also recognize collaborative gestures performed with both hands; these are expressed as a single state, which avoids the combinatorial complexity of treating each hand's gestures separately. The second kind of knowledge is situational knowledge, which relieves the user of the burden of specifying details such as the selection of a target object and the positional relationships among objects. Since the vision system offers only limited spatial resolution, indicating an exact position by hand gestures alone is sometimes difficult, and this knowledge assists the user in such cases. The authors have built an experimental human interface system, and operational experiments show promising results.
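The state-transition idea described above can be sketched minimally as follows. The state names, the transition table, and the policy of mapping disallowed motions to the rest state are illustrative assumptions, not details from the paper; the paper's actual transitions are driven by image features extracted from hand regions.

```python
# Hypothetical sketch of a gesture recognizer gated by a state
# transition diagram. States and transitions are illustrative only.

# For each state, the set of gestures accepted at the next moment.
# "hold_both" stands for a collaborative two-hand gesture expressed
# as a single state, as the paper describes.
ALLOWED = {
    "rest":      {"grasp", "rest"},
    "grasp":     {"move", "hold_both", "rest"},
    "move":      {"move", "extend", "shorten", "rest"},
    "hold_both": {"extend", "shorten", "rest"},
    "extend":    {"move", "rest"},
    "shorten":   {"move", "rest"},
}

class GestureRecognizer:
    def __init__(self):
        self.state = "rest"

    def step(self, candidate):
        """Advance one frame given a candidate gesture label."""
        if candidate in ALLOWED[self.state]:
            self.state = candidate
        else:
            # A motion not permitted in the current state is treated
            # as unintentional and mapped to the rest state, following
            # the paper's idea of ignoring unintentional actions.
            self.state = "rest"
        return self.state
```

For example, an `extend` observed directly after a `grasp` is not an allowed transition in this toy table, so the recognizer drops to `rest` instead of misclassifying the motion.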