All symbol processing tasks can be viewed as instances of symbol-to-symbol transduction (SST). SST generalizes many familiar symbolic problem classes, including language identification and sequence generation. One method of performing SST is via dynamical recurrent networks employed as symbol-to-symbol transducers. We construct these transducers by adding symbol-to-vector preprocessing and vector-to-symbol postprocessing to the vector-to-vector mapping provided by neural networks. This chapter surveys the capabilities and limitations of these mechanisms, considering both top-down (task-dependent) and bottom-up (implementation-dependent) forces.
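The transducer construction described above can be sketched as follows. This is a minimal illustration, not the chapter's specific architecture: it assumes one-hot encoding for the symbol-to-vector preprocessing, a small Elman-style recurrent network (with untrained random weights, purely for demonstration) for the vector-to-vector mapping, and argmax decoding for the vector-to-symbol postprocessing. The alphabet and all names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical three-symbol alphabet.
ALPHABET = ["a", "b", "c"]
IDX = {s: i for i, s in enumerate(ALPHABET)}

def encode(symbol):
    """Symbol-to-vector preprocessing: map a symbol to a one-hot vector."""
    v = np.zeros(len(ALPHABET))
    v[IDX[symbol]] = 1.0
    return v

def decode(vector):
    """Vector-to-symbol postprocessing: the largest output component
    selects the emitted symbol."""
    return ALPHABET[int(np.argmax(vector))]

class ElmanTransducer:
    """A minimal Elman-style recurrent net used as the vector-to-vector
    core of the transducer. Weights are random here; in practice they
    would be trained for the target transduction task."""
    def __init__(self, n_sym, n_hidden=8):
        self.W_in = rng.normal(scale=0.5, size=(n_hidden, n_sym))
        self.W_rec = rng.normal(scale=0.5, size=(n_hidden, n_hidden))
        self.W_out = rng.normal(scale=0.5, size=(n_sym, n_hidden))
        self.h = np.zeros(n_hidden)

    def step(self, x):
        # Recurrent state update followed by a linear readout.
        self.h = np.tanh(self.W_in @ x + self.W_rec @ self.h)
        return self.W_out @ self.h

def transduce(net, symbols):
    """Drive the network symbol by symbol: encode, map, decode."""
    return [decode(net.step(encode(s))) for s in symbols]

net = ElmanTransducer(len(ALPHABET))
out = transduce(net, ["a", "b", "c", "a"])
print(out)  # an output sequence of the same length, over the same alphabet
```

Because the recurrent state persists across steps, the output at each step can depend on the entire input history, which is what lets such networks realize transductions beyond memoryless symbol substitution.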