The Performance Gain Achieved By Parallelization Is Usually Analyzed By Computing Speedup

Jul 26th, 2015
The performance gain achieved by parallelization is usually analyzed by computing Speedup:
\begin{equation}
S(N,p) = \frac{T_{serial}(N)}{T(N,p)} \label{eqn12}
\end{equation}
where \(T_{serial}(N)\) is the running time of the algorithm in serial mode, \(T(N,p)\) is the runtime of the parallel algorithm using \(p\) equivalent processing elements, and \(N\) is the size of the problem \cite{Grama-2003}.
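As an illustration, Eq. (\ref{eqn12}) can be evaluated directly from measured wall-clock times. The following sketch uses hypothetical timings for a fixed problem size \(N\):

```python
def speedup(t_serial, t_parallel):
    """Speedup S(N, p) = T_serial(N) / T(N, p), following Eq. (eqn12)."""
    return t_serial / t_parallel

# Hypothetical wall-clock times (seconds) for a fixed problem size N:
t_serial = 120.0                        # serial run, T_serial(N)
timings = {2: 63.0, 4: 33.0, 8: 18.0}   # p -> T(N, p)

for p, t in timings.items():
    print(f"p={p}: S = {speedup(t_serial, t):.2f}")
```

Note that each measurement should use the same problem size \(N\); Speedup is defined for a fixed \(N\) as \(p\) varies.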
Typically, \(S(p) \le p\); a speedup with \(S(p) > p\) is called supra-linear speedup.

If only shared-memory parallelization is used to accelerate the processing, each core can be considered a processing element and Speedup is a straightforward parameter for the performance analysis.
Nevertheless, for hybrid parallel processing it is not so easy to identify the processing element because, for most applications, the processing capability of a set of cores spread across different nodes is lower than that of an equal number of cores belonging to a single node.
Moreover, the implementation of distributed parallelization generally introduces overheads that increase the computational cost relative to the serial version.
For this reason, two concepts of processing element were considered here to compute the Speedup and Efficiency: a single core as the processing element, as is usual in shared-memory parallelization studies, and a whole cluster node as the processing element, which highlights the effects of the distributed parallelization.
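A minimal sketch of how the two choices of processing element change the resulting Efficiency, using the standard definition \(E(p) = S(p)/p\). The cluster shape (16 cores per node) and all timings below are hypothetical:

```python
# Speedup and Efficiency under the two notions of processing element:
# (a) a single core, (b) a whole cluster node.
cores_per_node = 16                    # hypothetical cluster shape
t_serial = 900.0                       # T_serial(N), seconds
runs = {1: 70.0, 2: 38.0, 4: 21.0}     # nodes used -> T(N, p), seconds

for nodes, t in runs.items():
    s = t_serial / t                   # Speedup, same under both views
    p_cores = nodes * cores_per_node
    e_core = s / p_cores               # (a) core as processing element
    e_node = s / nodes                 # (b) node as processing element
    print(f"nodes={nodes}: S={s:.1f}, E_core={e_core:.2f}, E_node={e_node:.2f}")
```

With the core-based view, distributed overheads appear as a low per-core Efficiency even on one node; with the node-based view, the drop in Efficiency as nodes are added isolates the cost of the distributed layer.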

The Speedup \(S(p)\) and Efficiency \(E(p)\) were defined by the following equations,…
