A sign language recognition system must use information from both global features, such as hand movement and location, and local features, such as hand shape and orientation. We present an effective local-feature recognizer for a sign language recognition system. Our basic approach is to represent the hand images extracted from sign-language images as symbols, each corresponding to a cluster produced by a clustering technique. The clusters are created from a training set of extracted hand images so that images of similar appearance are classified into the same cluster in an eigenspace. The experimental results indicate that our system can recognize a sign language word even in two-handed and hand-to-hand contact cases.
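The symbol-assignment idea above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the toy image data, the number of eigenvectors, and the plain k-means clusterer are all assumptions; the paper only specifies that hand images are projected onto an eigenspace and grouped into clusters that serve as symbols.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for extracted hand images: 60 "images" of 16x16 pixels,
# drawn from 3 synthetic hand-shape prototypes (assumption, not real data).
prototypes = rng.normal(size=(3, 256))
X = np.vstack([p + 0.1 * rng.normal(size=(20, 256)) for p in prototypes])

# Build the eigenspace from the training set (PCA via SVD of centered data).
mean = X.mean(axis=0)
U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
k = 10                       # number of eigenvectors kept (assumption)
Z = (X - mean) @ Vt[:k].T    # project images onto the eigenspace

# Plain k-means on the projected points; each cluster acts as a symbol.
def kmeans(Z, n_clusters, iters=50, seed=0):
    r = np.random.default_rng(seed)
    centroids = Z[r.choice(len(Z), n_clusters, replace=False)]
    for _ in range(iters):
        # Assign each point to its nearest centroid.
        labels = np.argmin(((Z[:, None] - centroids) ** 2).sum(-1), axis=1)
        # Recompute centroids; keep the old one if a cluster went empty.
        centroids = np.array([
            Z[labels == c].mean(axis=0) if np.any(labels == c) else centroids[c]
            for c in range(n_clusters)
        ])
    return centroids, labels

centroids, symbols = kmeans(Z, n_clusters=3)
print(symbols)  # cluster id ("symbol") assigned to each hand image
```

At recognition time, a newly extracted hand image would be projected with the same `mean` and `Vt[:k]` and mapped to the symbol of its nearest centroid.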