A Recurrent Neural Network (RNN) has recurrent structures and the ability to learn Finite State Machines (FSMs). It is known that the state graph of an FSM can be extracted from the state space of a trained RNN. In this paper, we use an RNN to learn protein secondary structures (alpha-helix, etc.). We propose learning methods that reflect the properties of protein structures and of RNNs, and show that the grammatical structure of an amino acid sequence is acquired in the same way.
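The FSM-in-an-RNN connection mentioned above can be illustrated with a small sketch (not from the paper): an Elman-style recurrent layer with threshold units whose weights are hand-wired, rather than learned, so that the hidden state simulates a 2-state FSM accepting bit strings with an even number of 1s. Each hidden unit (s, x) fires exactly when the machine was in state s and just read symbol x, so the FSM's state graph is directly readable from the set of hidden vectors the network visits.

```python
# Hand-wired Elman-style RNN with threshold units simulating a 2-state
# FSM ("even number of 1s"). All weights are illustrative constructions,
# not learned values. Hidden unit order: (s=0,x=0), (s=0,x=1),
# (s=1,x=0), (s=1,x=1); unit (s, x) fires iff the FSM was in state s
# and just read symbol x.

W = [  # recurrent weights: each row reads the current FSM state off h
    [1, 0, 0, 1],  # next unit (0,0): needs state 0 = h00 + h11
    [1, 0, 0, 1],  # next unit (0,1): needs state 0
    [0, 1, 1, 0],  # next unit (1,0): needs state 1 = h01 + h10
    [0, 1, 1, 0],  # next unit (1,1): needs state 1
]
U = [  # input weights over the one-hot symbol [x == 0, x == 1]
    [1, 0], [0, 1], [1, 0], [0, 1],
]
B = -1.5  # AND threshold: a unit fires only when both conditions hold

def run(bits):
    h = [1, 0, 0, 0]  # start in state 0 ("even number of 1s so far")
    for x in bits:
        u = [1 - x, x]  # one-hot encoding of the input symbol
        h = [1 if sum(W[i][j] * h[j] for j in range(4))
                  + sum(U[i][k] * u[k] for k in range(2)) + B > 0 else 0
             for i in range(4)]
    return h

def accepts(bits):
    h = run(bits)
    return h[0] + h[3] == 1  # state 0 is the accepting (even) state
```

Training an RNN on such strings and then clustering its hidden states tends to recover an equivalent state graph; the hand-wired weights above just make that correspondence exact and easy to inspect.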