Feedforward neural network models may be viewed as approximating nonlinear functions that connect inputs to outputs. We analyzed the mechanism of function approximation underlying the learning of first- and second-person pronouns by a cascade correlation (CC) network. The CC network dynamically grows its topology to approximate increasingly complicated functions. It starts as a net without hidden units, but as soon as it "perceives" that it can no longer improve its performance within the limits of the current topology, it automatically recruits a new hidden unit. This process is repeated until a satisfactory degree of function approximation is achieved. Learning the shifting reference of pronouns can be regarded as a special kind of nonlinear function learning, where the function to be learned stipulates "me" if the speaker and the referent agree, and "you" if the addressee and the referent agree. We investigated how this function is approximated by the CC network using graphic techniques.
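As a minimal sketch of the target function described above (not the authors' training setup), the shifting-reference rule can be written as a function of who is speaking, who is addressed, and who is referred to; the participant labels used here are illustrative assumptions:

```python
def pronoun(speaker, addressee, referent):
    """Return the pronoun the speaker should use for the referent.

    Sketch of the shifting-reference function: the correct pronoun
    depends on agreement between roles, not on fixed individuals.
    """
    if referent == speaker:
        return "me"      # speaker and referent agree
    if referent == addressee:
        return "you"     # addressee and referent agree
    return "other"       # third-person case, outside the me/you rule

# The same individual maps to different pronouns as roles shift,
# which is what makes the mapping nontrivial for a learner:
print(pronoun("Alice", "Bob", "Alice"))  # Alice speaking about herself
print(pronoun("Bob", "Alice", "Alice"))  # Bob speaking about Alice
```

Because the output depends on equality between role slots rather than on the identity of any single input, a network without hidden units cannot represent this mapping over distributed person encodings, which is why the CC network's recruitment of hidden units is relevant here.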