Neural network simulations have always been a complex computational challenge because of their demand for large amounts of computational and memory resources. Due to the nature of the problem, a high performance computing approach becomes vital, because the dynamics often involves updating a large network over a large number of time steps. Moreover, the parameter space can be fairly large. An advanced optimization of the single time step is therefore necessary, as well as a strategy to explore the parameter space in an automatic fashion.

This work first examines the original, purely serial code, identifying its bottlenecks and inefficient design choices. Several optimization strategies are then presented and discussed, exploiting vectorization, efficient memory access, and cache usage. The strategies are presented together with an extensive set of benchmarks and a detailed discussion of all the issues encountered.

The final part of the work is the design of a high-throughput approach to the parameter sweep, necessary to explore the behaviour of the network. This is implemented by means of a task manager that automatically runs simulations from a batch of predefined runs and collects their results. A detailed performance analysis of the task manager is reported.

The results of the work show a consistent speed-up for the single-run case, and a massive productivity improvement thanks to the task manager. Moreover, the code base is now reorganized to favor extensibility and code reuse, allowing the application of several of the present strategies to other problems as well.
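The abstract does not spell out what the vectorization strategy looks like in practice. As a minimal illustration only (the thesis itself targets lower-level optimizations, and this toy leaky-integrator update is a hypothetical stand-in for the real network dynamics), the same per-neuron time-step update can be written as an explicit scalar loop or as whole-array arithmetic that maps onto contiguous, SIMD-friendly compiled loops:

```python
import numpy as np

def step_scalar(v, i_ext, dt=0.1, tau=10.0):
    # Naive per-neuron loop: one membrane-potential update per iteration.
    out = np.empty_like(v)
    for n in range(v.size):
        out[n] = v[n] + dt * (-v[n] / tau + i_ext[n])
    return out

def step_vectorized(v, i_ext, dt=0.1, tau=10.0):
    # Identical update expressed as whole-array arithmetic; NumPy
    # dispatches it to compiled loops over contiguous memory.
    return v + dt * (-v / tau + i_ext)
```

Both variants compute the same state update; the vectorized form simply removes the interpreted inner loop, which is the kind of single-time-step optimization the abstract refers to.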
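The task-manager idea described above can be sketched in a few lines. This is not the thesis implementation; it is a minimal hedged example in which `simulate` is a hypothetical stand-in for one network simulation run, and the batch of predefined parameter sets is dispatched to worker processes whose results are collected automatically:

```python
from concurrent.futures import ProcessPoolExecutor

def simulate(params):
    # Hypothetical stand-in for one full network simulation run:
    # returns the steady-state potential of a toy leaky integrator.
    tau, i_ext = params
    return tau * i_ext

def run_batch(batch, workers=4):
    # Launch each predefined parameter set as an independent task and
    # collect the results keyed by the parameters that produced them.
    with ProcessPoolExecutor(max_workers=workers) as pool:
        results = pool.map(simulate, batch)
    return dict(zip(batch, results))
```

Each run is independent, so throughput scales with the number of workers until the machine is saturated, which is the productivity effect reported in the abstract.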