Recently, reinforcement learning methods have been successfully applied to various problems whose latent rules can be neither observed directly nor specified manually. Q-learning is one of the most effective reinforcement learning methods. One of the simplest ways to estimate Q-values is to look them up in a Q-table, but a table cannot handle continuous-valued inputs and outputs. We have previously proposed a reinforcement learning framework with Condition Reduced Fuzzy Rules (CRFRs), in which Q-values are interpolated by fuzzy inference. In this paper, we apply the C4.5 algorithm to integrate fuzzy rules learned by reinforcement learning, and we introduce the notion of "boundaries of motion" for chunking motion sequences into an action.
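To make the Q-table limitation concrete, the following is a minimal tabular Q-learning sketch, not the paper's CRFR method: the 1-D grid world, reward function, and hyperparameters are all hypothetical. It shows that every (state, action) pair must be an explicit discrete table key, which is exactly what breaks down for continuous-valued inputs and outputs.

```python
import random
from collections import defaultdict

random.seed(0)

ALPHA = 0.1    # learning rate
GAMMA = 0.9    # discount factor
EPSILON = 0.2  # exploration rate

ACTIONS = [-1, +1]  # discrete actions: move left / move right
GOAL = 5            # hypothetical goal state on a small 1-D grid

q_table = defaultdict(float)  # maps (state, action) -> Q-value

def choose_action(state):
    """Epsilon-greedy selection over the discrete action set."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q_table[(state, a)])

def step(state, action):
    """Hypothetical environment: reward 1 at the goal, 0 elsewhere."""
    next_state = max(0, min(GOAL, state + action))
    reward = 1.0 if next_state == GOAL else 0.0
    return next_state, reward

def q_update(state, action, reward, next_state):
    """Standard Q-learning update rule."""
    best_next = max(q_table[(next_state, a)] for a in ACTIONS)
    q_table[(state, action)] += ALPHA * (
        reward + GAMMA * best_next - q_table[(state, action)]
    )

for _ in range(500):  # training episodes
    state = 0
    while state != GOAL:
        action = choose_action(state)
        next_state, reward = step(state, action)
        q_update(state, action, reward, next_state)
        state = next_state

# After training, the greedy policy at state 0 should move right.
print(max(ACTIONS, key=lambda a: q_table[(0, a)]))
```

Because the table is keyed on exact (state, action) tuples, a continuous state such as 2.37 would get its own entry and share nothing with 2.36; fuzzy inference over CRFRs, as in the paper, instead interpolates Q-values between neighboring rules.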