This work considers the task of classifying natural language sentences as grammatical or ungrammatical using recurrent neural networks. An acceptable classification rate was achieved by training a recurrent neural network on encoded natural language sentences, where the encoding is based on the linguistic theory of Government and Binding. The behaviour of the recurrent network as a dynamical system is then analyzed to extract finite automata that approximate the grammar of the language. A classifier system was developed to meet these goals, using the Backpropagation Through Time algorithm to train the network and the Growing Neural Gas clustering algorithm to extract the automata.
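The automaton-extraction step can be sketched as follows. This is an illustrative toy, not the thesis's implementation: a fixed random-weight Elman net stands in for the trained network, and a simple sign-based quantization stands in for the Growing Neural Gas clustering; the general recipe (collect hidden states, cluster them into discrete automaton states, record majority transitions) is the same.

```python
import math, random
from collections import defaultdict, Counter

random.seed(0)

# Toy Elman-style RNN with fixed random weights (a stand-in for a
# network already trained on encoded sentences).
H = 3  # number of hidden units
W = [[random.uniform(-1, 1) for _ in range(H)] for _ in range(H)]
U = [random.uniform(-1, 1) for _ in range(H)]  # weights for a binary input

def step(h, x):
    """One recurrent update: h' = tanh(W h + U x)."""
    return [math.tanh(sum(W[i][j] * h[j] for j in range(H)) + U[i] * x)
            for i in range(H)]

def run(seq):
    """Return the full hidden-state trajectory for an input sequence."""
    h = [0.0] * H
    states = [tuple(h)]
    for x in seq:
        h = step(h, x)
        states.append(tuple(h))
    return states

# 1. Collect hidden states over a sample of input strings.
samples = [[random.randint(0, 1) for _ in range(8)] for _ in range(50)]

# 2. Quantize the continuous state space into discrete clusters.
#    (Here: the sign pattern of the hidden units; the thesis uses
#    Growing Neural Gas for this clustering step.)
def cluster(h):
    return tuple(1 if v > 0 else 0 for v in h)

# 3. Build the extracted automaton's transition table by majority vote
#    over the observed (cluster, symbol) -> next-cluster transitions.
trans = defaultdict(Counter)
for seq in samples:
    states = run(seq)
    for x, (h_prev, h_next) in zip(seq, zip(states, states[1:])):
        trans[(cluster(h_prev), x)][cluster(h_next)] += 1

automaton = {key: counts.most_common(1)[0][0]
             for key, counts in trans.items()}
```

The resulting `automaton` maps each (discrete state, input symbol) pair to a single successor state, i.e. a deterministic finite-automaton transition function over the clustered state space.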