Designing the representation languages for the input and output of a learning algorithm is the hardest task in machine learning applications. Transforming the given representation of observations into a well-suited language L_E may ease the learning problem. Learnability is defined with respect to the representation of the output of learning, L_H. If predictive accuracy is the only criterion for the success of learning, choosing L_H means finding the hypothesis space with the most easily learnable concepts that contains the solution. Additional criteria for the success of learning, such as comprehensibility and embeddedness, may call for transformations of L_H so that users can easily interpret, and other systems can easily exploit, the learning results. Designing a language L_H that is optimal with respect to all these criteria is too difficult a task. Instead, we design families of representations, where each family member is well suited to a particular set of requirements, and implement transformations between the representations. In this paper, we discuss a representation family of Horn logic. Our work on tailoring representations is illustrated by a robot application.