This work starts from the general hypothesis that action influences knowledge formation, and that the way we organise our knowledge reflects action patterns [7]. The traditional assumption in the categorisation literature is that categories are organised on the basis of perceptual similarity among their members. However, much evidence shows that, when we need to perform an action, we can group together objects which are perceptually dissimilar. Many studies have shown that we are able to flexibly organise and create new categories of objects on the basis of more or less contingent goals [2,3]. We present simulations in which neural networks are trained using a genetic algorithm to move a 2-segment arm and press one of two buttons in response to each of 4 stimuli. The neural networks are required to group the stimuli into 2 categories by pressing the same button for members of the same category. Depending on the particular task, which is encoded in a set of additional input units, these categories may be composed of perceptually very similar, moderately similar, or dissimilar objects. We find that task information overrides perceptual information, that is, the internal representations of the neural networks tend to reflect the current task rather than the perceptual similarity between objects. However, neural networks tend to form action-based categories more easily (e.g. in fewer generations) when perception and action are congruent (perceptually similar objects must be responded to by pressing the same button) than when they are incongruent (perceptually similar objects must be responded to by pressing different buttons). We also find that, at hidden layers closer to the sensory input, which task information has not yet reached, internal representations continue to reflect perceptual information.
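To make the setup concrete, the following is a minimal sketch, not the authors' actual model, of the kind of experiment described: a small feedforward network whose weights are evolved with a simple genetic algorithm to map 4 stimuli onto 2 "buttons", with the required grouping dictated by a task signal supplied through extra input units. The arm kinematics are omitted (the network directly selects a button), and all stimulus values, network sizes, and GA parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# 4 stimuli as 2-D feature vectors: two perceptually similar pairs.
STIMULI = np.array([[0.0, 0.1], [0.1, 0.0],    # pair A (similar)
                    [0.9, 1.0], [1.0, 0.9]])   # pair B (similar)

# Two tasks, each encoded in 2 extra input units.
# Task 0 (congruent): group by perceptual pair  -> buttons [0, 0, 1, 1]
# Task 1 (incongruent): split each pair         -> buttons [0, 1, 0, 1]
TASKS = {0: np.array([0, 0, 1, 1]), 1: np.array([0, 1, 0, 1])}
TASK_CODE = {0: np.array([1.0, 0.0]), 1: np.array([0.0, 1.0])}

N_IN, N_HID, N_OUT = 2 + 2, 5, 2          # stimulus units + task units
N_W = N_IN * N_HID + N_HID * N_OUT        # genotype length (weights only)

def forward(w, x):
    """Run the network; return the chosen button (argmax over 2 outputs)."""
    w1 = w[: N_IN * N_HID].reshape(N_IN, N_HID)
    w2 = w[N_IN * N_HID:].reshape(N_HID, N_OUT)
    h = np.tanh(x @ w1)
    return np.argmax(h @ w2)

def fitness(w):
    """Fraction of (stimulus, task) pairs answered with the correct button."""
    correct = 0
    for task, targets in TASKS.items():
        for s, target in zip(STIMULI, targets):
            x = np.concatenate([s, TASK_CODE[task]])
            correct += forward(w, x) == target
    return correct / (len(TASKS) * len(STIMULI))

# Simple generational GA: keep the top half, add Gaussian-mutated offspring.
POP, GENS, SIGMA = 50, 200, 0.2
pop = rng.normal(0, 1, size=(POP, N_W))
for gen in range(GENS):
    scores = np.array([fitness(ind) for ind in pop])
    if scores.max() == 1.0:
        print(f"solved at generation {gen}")
        break
    parents = pop[np.argsort(scores)[-POP // 2:]]
    children = parents + rng.normal(0, SIGMA, size=parents.shape)
    pop = np.vstack([parents, children])
```

Under this kind of sketch, the congruence effect reported above would show up as the GA typically reaching perfect fitness in fewer generations when only the congruent task is trained than when only the incongruent task is trained.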