Abstract: The neural network approach to computation is founded on the application of simple mechanisms operating uniformly in massively parallel structures. In addition, the notion of `learning' or `training' plays a strong role; the hope is that, given sample solutions to a difficult problem, some general mechanism can construct a general rule which produces solutions. This paper describes an application of one variant of this paradigm, the theory of generalized radial basis functions (GRBF), also called HyperBF, to the problem of object recognition, which has often been regarded as a `symbolic' domain, ill suited to neural networks. The authors begin with a brief review of the view of learning that leads to the HyperBF paradigm, then describe how the problem of recognizing views of a particular object may be cast into the neural network paradigm, and show the application of HyperBF to the problem, with examples. Problems for further work are discussed. The HyperBF scheme for learning models emerges from function approximation theory (specifically, Tikhonov's regularization theory). In its full generality, it includes a number of other function approximation schemes; it also has a simple representation as a form of computational network (albeit a slightly unusual one). A feature of particular interest is that the network is trained only on 2-D views of the objects to be recognized; no explicit 3-D models are ever constructed, nor are they ever required. A discussion of the plain radial-basis-functions method of approximation is followed by discussion of various generalizations.
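To make the plain radial-basis-functions method mentioned above concrete, the following is a minimal sketch of RBF interpolation with a Gaussian basis: weights are obtained by solving a linear system over the Gram matrix of the basis functions. The function names, the choice of `sigma`, and the sine test function are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def rbf_fit(centers, values, sigma=1.0):
    """Solve for weights c so that f(x) = sum_j c_j * exp(-||x - t_j||^2 / (2 sigma^2))
    interpolates `values` at the given centers t_j."""
    # Pairwise squared distances between centers.
    d2 = np.sum((centers[:, None, :] - centers[None, :, :]) ** 2, axis=-1)
    G = np.exp(-d2 / (2 * sigma ** 2))  # Gram matrix of the Gaussian basis
    return np.linalg.solve(G, values)

def rbf_eval(x, centers, weights, sigma=1.0):
    """Evaluate the RBF expansion at query points x."""
    d2 = np.sum((x[:, None, :] - centers[None, :, :]) ** 2, axis=-1)
    return np.exp(-d2 / (2 * sigma ** 2)) @ weights

# Illustrative example: approximate sin(x) from a few sample points.
centers = np.linspace(0, np.pi, 8)[:, None]   # 8 one-dimensional centers
values = np.sin(centers[:, 0])
w = rbf_fit(centers, values, sigma=0.7)

# By construction, the expansion reproduces the data exactly at the centers.
approx_at_centers = rbf_eval(centers, centers, w, sigma=0.7)
```

In the paper's setting, the centers correspond to stored 2-D views of an object and the interpolated function measures view membership; the generalizations (GRBF/HyperBF) additionally learn the centers and a weighted norm.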