In parallel computation we often need an algorithm for dividing one computationally expensive job into a fixed number, say N, of subjobs that can be processed in parallel (with reasonable overhead due to the additional communication). In practice it is often easier to repeatedly bisect jobs, i.e., to split one job into exactly two subjobs, than to generate N subjobs at once. To balance the load among the N machines, we want to minimize the size of the largest subjob (according to some measure, such as CPU time or memory usage).
In this paper we study a recently presented load balancing algorithm, called the Heaviest First Algorithm (Algorithm HF), which is applicable to all classes of problems for which bisections can be computed efficiently. The algorithm implements a very natural strategy: in each of N-1 iterations, bisect the largest subproblem generated so far.
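To make the strategy concrete, the following minimal sketch (not the paper's own formulation) keeps the current subjobs in a max-heap keyed by their weight and bisects the heaviest one for N-1 rounds; the function bisect and the 60/40 example split are illustrative assumptions, not part of the paper.

    import heapq

    def heaviest_first(total_weight, n, bisect):
        # Heaviest-first sketch: a max-heap of subjob weights,
        # realized with Python's min-heap by negating the weights.
        heap = [-total_weight]              # start with the single original job
        for _ in range(n - 1):              # N-1 bisections yield N subjobs
            heaviest = -heapq.heappop(heap)  # take the currently largest subjob
            left, right = bisect(heaviest)   # split it into two subjobs
            heapq.heappush(heap, -left)
            heapq.heappush(heap, -right)
        return [-w for w in heap]           # weights of the N subjobs

    # Hypothetical example: every bisection splits a job 60/40.
    subjobs = heaviest_first(100.0, 4, lambda w: (0.6 * w, 0.4 * w))
    max_load = max(subjobs)

In this illustration the quality of a bisection is simply the weight ratio of the two resulting subjobs, and max_load is the quantity to be compared with the optimum.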
The maximum load produced by this algorithm has previously been shown to differ from the optimum by only a constant factor, even in the worst case. In this paper we consider the average case, assuming a natural and rather pessimistic random distribution for the quality of the bisections. Under this model, the heaviest load generated by Algorithm HF is proved to be only twice as large as the optimum, with high probability. Furthermore, our analysis suggests a simpler version of Algorithm HF that can easily be parallelized.