
Parallel speed-up of Monte Carlo methods for global optimization. (English) Zbl 0798.90124

Summary: We introduce the notion of expected hitting time to a goal as a measure of the convergence rate of a Monte Carlo optimization method. The techniques developed apply to simulated annealing, genetic algorithms, and other stochastic search schemes. The expected hitting time can itself be calculated from the more fundamental complementary hitting time distribution (CHTD), which completely characterizes a Monte Carlo method. The CHTD is asymptotically geometric, so the expected hitting time is given by the sum of a geometric series, \((1/s)/(1- \lambda)\), and is characterized by two parameters, \(s\) and \(\lambda\), related to the search process in a simple way. The main utility of the CHTD is in comparing Monte Carlo algorithms. In particular, we show that independent, identical Monte Carlo algorithms run in parallel (IIP parallelism) can exhibit superlinear speedup. We give conditions under which this occurs and note that equally likely search is sped up linearly. Further, we observe that a serial Monte Carlo search can have an infinite expected hitting time, while the same algorithm, when parallelized, can have a finite expected hitting time. One consequence of the observed superlinear speedup is an improved uniprocessor algorithm obtained by the technique of in-code parallelism.
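
The speedup analysis can be made concrete with a minimal numerical sketch (not taken from the paper), assuming the single-process CHTD is modeled as exactly geometric, \(P(T > n) = (1/s)\lambda^n\); the expected hitting time is then \((1/s)/(1-\lambda)\), and for \(m\) IIP copies the complementary distribution is raised to the \(m\)-th power, giving \((1/s^m)/(1-\lambda^m)\). The parameter values below are hypothetical; in this model, \(s > 1\) with \(\lambda\) near 1 yields superlinear speedup, while \(s = 1\) (equally likely search) gives essentially linear speedup, consistent with the remarks above.

def expected_hitting_time(s, lam, m=1):
    # Expected hitting time of the best of m IIP copies under the assumed
    # geometric CHTD model P(T > n) = (1/s) * lam**n:
    #   one copy:  E[T]   = sum_n P(T > n)    = (1/s)    / (1 - lam)
    #   m copies:  E[T_m] = sum_n P(T > n)**m = (1/s**m) / (1 - lam**m)
    return (1.0 / s**m) / (1.0 - lam**m)

s, lam = 1.2, 0.999                     # hypothetical parameters with s > 1
t1 = expected_hitting_time(s, lam, 1)
for m in (1, 2, 4, 8):
    tm = expected_hitting_time(s, lam, m)
    print(m, round(tm, 1), round(t1 / tm, 2))   # copies, E[T_m], observed speedup

With these hypothetical values the printed speedups exceed \(m\) (for example, about 2.4 on 2 copies and 6.9 on 4), illustrating the superlinear effect described in the summary.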

MSC:

90C30 Nonlinear programming
65Y05 Parallel numerical computation