Previous work has shown that RECONTRA, a simple recurrent neural model, can successfully handle simple text-to-text Machine Translation tasks in limited semantic domains. To scale this translator to tasks with medium-sized or large vocabularies, distributed representations of the lexicons are required. This paper presents a method for automatically extracting these distributed representations from perceptrons with output context.
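As a rough illustration of the kind of extraction described above, the sketch below trains a minimal one-hidden-layer perceptron to predict an output context (here, just the next word) from the current word, then reads the learned input-to-hidden weight rows off as distributed word representations. All names, sizes, and the toy corpus are assumptions for illustration; they do not come from the paper, and the paper's actual perceptron architecture and output-context definition may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy corpus and vocabulary (hypothetical; not from the paper).
corpus = "the cat sat on the mat the dog sat on the rug".split()
vocab = sorted(set(corpus))
V = len(vocab)
idx = {w: i for i, w in enumerate(vocab)}
H = 8      # dimensionality of the distributed representation (assumed)
lr = 0.1   # learning rate (assumed)

W_in = rng.normal(scale=0.1, size=(V, H))   # input word -> hidden layer
W_out = rng.normal(scale=0.1, size=(H, V))  # hidden layer -> output context

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Train the perceptron to predict the next word (an output context of
# size 1) from a one-hot encoding of the current word.
for epoch in range(200):
    for t in range(len(corpus) - 1):
        i, j = idx[corpus[t]], idx[corpus[t + 1]]
        h = W_in[i]                     # hidden activation for word i
        p = softmax(h @ W_out)          # predicted context distribution
        err = p.copy()
        err[j] -= 1.0                   # cross-entropy gradient
        W_out -= lr * np.outer(h, err)
        W_in[i] -= lr * (W_out @ err)

# After training, each row of W_in is the extracted distributed
# representation of the corresponding vocabulary word.
vectors = {w: W_in[idx[w]] for w in vocab}
```

The key point of the sketch is that the representations are a by-product of a supervised prediction task over an output context, not a hand-designed encoding; words that occur in similar contexts end up with similar weight rows.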