An artificial neural network computation acceleration apparatus for distributed processing includes: an external main memory for storing input data and synapse weights for input neurons; an internal buffer memory for storing the synapse weights and input data required for each cycle of the artificial neural network computation; a DMA module for directly transferring data between the external main memory and the internal buffer memory; and a general-purpose communication block capable of exchanging the input data, the synapse weights for the input neurons, and the results computed by the acceleration apparatus with another physically connected acceleration apparatus, regardless of the type of integrated circuit on which each apparatus is implemented.
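The dataflow described above can be sketched as a minimal simulation: a DMA engine stages the per-cycle slice of weights and inputs from external memory into an internal buffer, the accelerator computes a weighted sum, and a communication link forwards the result to a physically connected peer. All class and method names below are hypothetical illustrations, not the patent's actual implementation.

```python
# Illustrative sketch (assumed names); models the claimed dataflow only.

class ExternalMainMemory:
    """Holds the full set of input data and synapse weights."""
    def __init__(self, inputs, weights):
        self.inputs = inputs      # input-neuron values
        self.weights = weights    # weights[cycle] -> weight row for that cycle

class DMAModule:
    """Directly transfers data between external memory and the buffer."""
    @staticmethod
    def stage_cycle(mem, cycle):
        # Copy only the data required for this computation cycle.
        return {"inputs": list(mem.inputs), "weights": list(mem.weights[cycle])}

class Accelerator:
    def __init__(self, mem):
        self.mem = mem
        self.buffer = None        # internal buffer memory
        self.peer = None          # physically connected peer accelerator
        self.received = []        # results delivered by the peer's comm block

    def connect(self, peer):
        self.peer, peer.peer = peer, self

    def run_cycle(self, cycle):
        self.buffer = DMAModule.stage_cycle(self.mem, cycle)
        # Neuron output for this cycle: weighted sum of the buffered inputs.
        result = sum(w * x for w, x in zip(self.buffer["weights"],
                                           self.buffer["inputs"]))
        if self.peer is not None:
            self.peer.received.append(result)  # general-purpose comm block
        return result

mem = ExternalMainMemory(inputs=[1, 2, 3], weights=[[2, 3, 4]])
a, b = Accelerator(mem), Accelerator(mem)
a.connect(b)
out = a.run_cycle(0)
print(out)         # 1*2 + 2*3 + 3*4 = 20
print(b.received)  # peer received the result: [20]
```

The separation between the large external memory and the small per-cycle buffer mirrors the claim's point: only the working set for the current cycle needs to reside on-chip, while the communication block lets multiple such accelerators share inputs, weights, and partial results.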