This paper presents the architecture of a Field Programmable Neural Array (FPNA) for AI applications at the tactical computing edge. The platform combines domain-specific AI accelerators with a reconfigurable interconnect, allowing any deep neural network (DNN) to be mapped onto the FPNA. The accelerators perform inference tasks with higher computational efficiency than CPUs and GPUs, approaching that of ASICs designed specifically for AI applications, while the reconfigurable interconnect provides the flexibility and connectivity of an FPGA.