With the speed of individual cores no longer increasing at the rate we came to love over the past decades, programmers have to look for other ways to increase the speed of our ever-more-complicated applications. The functionality provided by the CPU manufacturers is an increased number of execution units, or CPU cores. To use these extra cores, programs must be parallelized. Multiple paths of execution have to work together to complete the tasks the program has to perform, and as much of that work as possible has to happen concurrently. Only then is it possible to speed up the program (that is, reduce the total runtime). Amdahl's Law expresses this as:

  Speedup = 1 / ((1 - P) + P/S)

Here P is the fraction of the program that can be parallelized, and S is the number of execution units.

This is the theory. Making it a reality is another issue. Simply writing an ordinary, sequential program is a problem by itself, as can be seen in the relentless stream of bug fixes available for programs. Trying to split a program into multiple pieces that can be executed in parallel adds a whole dimension of additional problems:

1. Unless the program consists of multiple independent pieces from the outset (in which case it should have been written as separate programs in the first place), the individual pieces have to collaborate. This usually takes the form of sharing data in memory or on secondary storage.