Algorithms are often parallelized based on data dependence analysis, either manually or by means of parallelizing compilers. Some vector/matrix computations, such as matrix-vector products, have simple data dependence structures (data parallelism) and can be easily parallelized. For problems with more complicated data dependence structures, parallelization is less straightforward. The data dependence graph is a powerful means for designing and analyzing parallel algorithms. However, for sparse matrix computations, parallelization based solely on exploiting the parallelism already present in an algorithm does not always give satisfactory results. For example, the conventional Gaussian elimination algorithm for the solution of a tri-diagonal system is inherently sequential, so algorithms specially designed for parallel computation have to be devised. After briefly reviewing different parallelization approaches, a powerful graph formalism for designing parallel algorithms is introduced. This formalism is discussed using a tri-diagonal system as an example. Its application to general matrix computations is also discussed, and its power in designing parallel algorithms beyond the ability of data dependence analysis is shown.
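To make the sequential-dependence claim concrete, the following sketch (not from the abstract; it assumes the standard Thomas-algorithm formulation of Gaussian elimination for tri-diagonal systems) shows why the conventional solver resists data-parallel execution: each step of the forward sweep consumes the result of the previous step, forming a dependence chain of length n.

```python
def thomas_solve(a, b, c, d):
    """Solve a tri-diagonal system via the Thomas algorithm.

    a: sub-diagonal (a[0] unused), b: main diagonal,
    c: super-diagonal (c[-1] unused), d: right-hand side.
    The forward sweep is the inherently sequential recurrence:
    step i cannot start before step i-1 has finished.
    """
    n = len(b)
    cp = [0.0] * n          # modified super-diagonal
    dp = [0.0] * n          # modified right-hand side
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):   # sequential chain: uses cp[i-1], dp[i-1]
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[n - 1] = dp[n - 1]
    for i in range(n - 2, -1, -1):  # back substitution: also sequential
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Example: 1D Laplacian system (diagonal 2, off-diagonals -1)
# chosen so the exact solution is [1, 2, 3, 4]
x = thomas_solve([0, -1, -1, -1], [2, 2, 2, 2], [-1, -1, -1, 0], [0, 0, 0, 5])
```

The dependence graph of the two loops above is a single path, so no reordering of the statements exposes parallelism; parallel tri-diagonal solvers such as cyclic reduction instead restructure the elimination itself.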