This paper describes an assistance method for annotation tasks of sign language words using binary action segmentation. The binary action segmentation divides a sign video into binary units corresponding to signing motion and static posture. With this segmentation, the user's annotation task is reduced from fully manual work to inputting labels and correcting the segmented units. The proposed binary action segmentation is composed of a Support Vector Machine and Graph Cuts. The trained Support Vector Machine classifies each frame as "Motion" or "Pause", and Graph Cuts refines the initial segmentation. We evaluated the proposed method on a Japanese sign language word database. The database includes 92 Japanese sign language words signed by ten native signers. The total number of videos is 4,590; 3,800 videos of 76 words, excluding recording and signing errors, were used for the evaluation. The proposed method achieves comparable results with a smaller amount of training data than the previous method. Moreover, the work reduction ratios of annotation tasks using an annotation interface were 26.17%, 26.34%, and 17.88% for the sets whose numbers of segmented units were 2, 3, and 4, respectively.
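The two-stage pipeline described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the frame features, SVM kernel, and graph-cut energy are assumptions. For binary labels on a temporal chain, the graph-cut refinement is equivalent to exact dynamic-programming inference over a Potts smoothness term, which is what the `refine_chain` helper (a hypothetical name) implements.

```python
import numpy as np
from sklearn.svm import SVC

# --- Stage 1: per-frame "Motion"(1) / "Pause"(0) classification ---
# Synthetic stand-in for real frame features (e.g. hand-motion magnitudes);
# the paper's actual features are not reproduced here.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 4))
y_train = (X_train[:, 0] > 0).astype(int)  # toy labeling rule

clf = SVC(probability=True).fit(X_train, y_train)

# --- Stage 2: chain-structured binary refinement ---
def refine_chain(neg_log_prob, smooth=2.0):
    """Exact MAP labeling of a 2-label chain MRF by dynamic programming.

    neg_log_prob: (T, 2) array of per-frame unary costs (-log P(label)).
    smooth: Potts penalty for adjacent frames with different labels.
    On a chain this solves the same energy a binary graph cut would.
    """
    T = neg_log_prob.shape[0]
    cost = neg_log_prob[0].copy()          # best cost ending in each label
    back = np.zeros((T, 2), dtype=int)     # backpointers for recovery
    for t in range(1, T):
        new_cost = np.empty(2)
        for lbl in (0, 1):
            trans = cost + smooth * (np.array([0, 1]) != lbl)
            back[t, lbl] = int(np.argmin(trans))
            new_cost[lbl] = trans[back[t, lbl]] + neg_log_prob[t, lbl]
        cost = new_cost
    labels = np.empty(T, dtype=int)
    labels[-1] = int(np.argmin(cost))
    for t in range(T - 1, 0, -1):          # backtrack the optimal path
        labels[t - 1] = back[t, labels[t]]
    return labels

# Apply to a test sequence: SVM posteriors -> unary costs -> refined units.
X_seq = rng.normal(size=(50, 4))
probs = np.clip(clf.predict_proba(X_seq), 1e-6, 1.0)
refined = refine_chain(-np.log(probs), smooth=2.0)
n_units = 1 + int(np.count_nonzero(np.diff(refined)))
print(f"{n_units} segmented units")
```

The smoothing term suppresses single-frame label flips from the SVM, so the refined sequence decomposes into a small number of contiguous Motion/Pause units, which is what the annotator then labels and corrects.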