An experimental task-understanding robot system is reported that simulates real assembly operations in an imaginary three-dimensional geometric space. The system takes as input a set of sentences from an instruction manual describing mechanical assembly operations. These sentences can be regarded as specifying a set of subgoals to be attained among the mechanical parts being assembled. The system must resolve these subgoals and translate them into more complete command sequences by referring to three-dimensional geometric models of the parts and to the figures accompanying the sentences. The ambiguity of natural language, together with the writers' assumption that readers possess common-sense knowledge of assembly operations, makes it difficult for the system to derive a correct sequence of operations.
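The pipeline described above — instruction sentences interpreted as subgoals, then expanded into fuller command sequences against a geometric model — can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the part names, the `attach` subgoal form, and the grasp/move/insert/release command vocabulary are all assumptions introduced here, and a real system would need genuine natural-language analysis rather than keyword matching.

```python
from dataclasses import dataclass

@dataclass
class Part:
    """A mechanical part in the imaginary 3-D workspace (hypothetical model)."""
    name: str
    position: tuple  # (x, y, z) coordinates in the geometric model

def parse_subgoal(sentence, parts):
    """Extract a crude attach(A, B) subgoal from an instruction sentence.

    A real system would use full linguistic analysis; here we merely
    match the known part names mentioned in the sentence.
    """
    mentioned = [p for p in parts if p.name in sentence]
    if len(mentioned) < 2:
        raise ValueError("could not identify two parts in: " + sentence)
    return ("attach", mentioned[0].name, mentioned[1].name)

def expand_commands(subgoal, parts):
    """Translate an attach subgoal into a fuller command sequence by
    consulting the geometric model (here, just stored part positions)."""
    _, a, b = subgoal
    pos = {p.name: p.position for p in parts}
    return [
        ("grasp", a),
        ("move", a, pos[b]),   # bring part a to part b's location
        ("insert", a, b),
        ("release", a),
    ]
```

A single manual sentence thus becomes several low-level commands; the gap between the two levels is exactly where the ambiguity and implicit common-sense knowledge mentioned above must be supplied by the system.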