A parallel implementation of the simplex function minimization routine

Donghoon Lee, Matthew Wiswall

Research output: Contribution to journal › Article › peer-review

42 Scopus citations


This paper generalizes the widely used Nelder and Mead (Comput J 7:308-313, 1965) simplex algorithm to parallel processors. Unlike most previous parallelization methods, which parallelize the tasks required to compute a specific objective function given a vector of parameters, our parallel simplex algorithm parallelizes at the parameter level. It assigns to each processor a separate vector of parameters corresponding to a point on a simplex. The processors then conduct the simplex search steps for an improved point, communicate the results, and a new simplex is formed. The advantage of this method is that the algorithm is generic: it can be applied, without rewriting computer code, to any optimization problem to which the non-parallel Nelder-Mead algorithm is applicable. The method is also easily scalable to any degree of parallelization up to the number of parameters. In a series of Monte Carlo experiments, we show that this parallel simplex method yields computational savings that in some experiments reach up to three times the number of processors.
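The parameter-level parallelization the abstract describes can be sketched roughly as follows. This is a simplified illustration, not the authors' implementation: the function names (`parallel_simplex_step`, `minimize`), the specific reflection/contraction rules, and the use of a thread pool in place of separate processors are all assumptions made for the sketch. The core idea it demonstrates is the one stated in the abstract: each worker is assigned its own vertex of the simplex, the workers search for improved points simultaneously, and a new simplex is formed from the communicated results.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor


def parallel_simplex_step(f, simplex, fvals, n_workers):
    """One parallel iteration: the n_workers worst vertices are updated
    simultaneously, each reflected through the centroid of the better
    vertices (a simplified version of the paper's per-processor search)."""
    order = np.argsort(fvals)
    simplex, fvals = simplex[order], fvals[order]
    n_keep = len(simplex) - n_workers           # vertices held fixed this step
    centroid = simplex[:n_keep].mean(axis=0)

    def update(i):
        # Reflection: mirror the i-th vertex through the centroid.
        xr = centroid + (centroid - simplex[i])
        fr = f(xr)
        if fr < fvals[i]:
            return xr, fr
        # Fallback contraction: move halfway toward the centroid.
        xc = centroid + 0.5 * (simplex[i] - centroid)
        fc = f(xc)
        return (xc, fc) if fc < fvals[i] else (simplex[i], fvals[i])

    # One task per assigned vertex; the paper uses separate processors,
    # a thread pool keeps this sketch self-contained.
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        results = list(pool.map(update, range(n_keep, len(simplex))))
    for k, (x, fx) in zip(range(n_keep, len(simplex)), results):
        simplex[k], fvals[k] = x, fx
    return simplex, fvals


def minimize(f, x0, n_workers=2, iters=200, step=1.0):
    """Run the parallel simplex search from x0 and return the best vertex."""
    n = len(x0)
    simplex = np.vstack([x0] + [x0 + step * np.eye(n)[i] for i in range(n)])
    fvals = np.array([f(x) for x in simplex])
    for _ in range(iters):
        simplex, fvals = parallel_simplex_step(f, simplex, fvals, n_workers)
    return simplex[np.argmin(fvals)]
```

Because the parallelism operates on the simplex itself rather than inside the objective function, any objective callable can be passed in unchanged, which is the genericity property the abstract emphasizes; `n_workers` can be raised up to the number of parameters.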

Original language: English (US)
Pages (from-to): 171-187
Number of pages: 17
Journal: Computational Economics
Issue number: 2
State: Published - Sep 1 2007


Keywords

  • Optimization algorithms
  • Parallel computing

ASJC Scopus subject areas

  • Economics, Econometrics and Finance (miscellaneous)
  • Computer Science Applications


