Response surface design evaluation and comparison

Christine M. Anderson-Cook, Connie M. Borror, Douglas Montgomery

Research output: Contribution to journal › Comment/debate › peer-review

123 Scopus citations


Designing an experiment to fit a response surface model typically involves selecting among several candidate designs. There are often many competing criteria that could be considered in selecting the design, and practitioners are typically forced to make trade-offs between these objectives when choosing the final design. Traditional alphabetic optimality criteria are often used in evaluating and comparing competing designs. These optimality criteria are single-number summaries of design quality properties such as the precision with which the model parameters are estimated or the uncertainty associated with prediction. Other important considerations include the robustness of the design to model misspecification and potential problems arising from spurious or missing data. Several qualitative and quantitative properties of good response surface designs are discussed, and some of their important trade-offs are considered. Graphical methods for evaluating design performance on several important response surface problems are discussed, and we show how these techniques can be used to compare competing designs. These graphical methods are generally superior to the simplistic summaries of alphabetic optimality criteria. Several special cases are considered, including robust parameter designs, split-plot designs, mixture experiment designs, and designs for generalized linear models.
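To make the ideas in the abstract concrete, here is a minimal sketch (not from the paper itself) of the quantities it discusses: a single-number "alphabetic" summary based on the determinant of the information matrix, and the scaled prediction variance that fraction of design space (FDS) plots and variance dispersion graphs summarize graphically. The face-centered central composite design and the full quadratic model below are illustrative assumptions, not examples taken from the article.

```python
import numpy as np

def quad_model_matrix(pts):
    # Full second-order model in two factors:
    # f(x) = (1, x1, x2, x1*x2, x1^2, x2^2)
    x1, x2 = pts[:, 0], pts[:, 1]
    return np.column_stack([np.ones(len(pts)), x1, x2, x1 * x2, x1**2, x2**2])

# A face-centered central composite design (alpha = 1) in two coded factors
ccd = np.array([
    [-1, -1], [1, -1], [-1, 1], [1, 1],   # factorial points
    [-1, 0], [1, 0], [0, -1], [0, 1],     # axial points
    [0, 0], [0, 0], [0, 0],               # center runs
], dtype=float)

X = quad_model_matrix(ccd)
N, p = X.shape
XtX_inv = np.linalg.inv(X.T @ X)

# Determinant-based single-number summary: |X'X|^(1/p) / N
d_value = np.linalg.det(X.T @ X) ** (1 / p) / N

# Scaled prediction variance N * f(x)' (X'X)^-1 f(x), sampled over the
# [-1, 1]^2 design space -- the quantity FDS plots display as a curve
rng = np.random.default_rng(0)
grid = rng.uniform(-1, 1, size=(5000, 2))
F = quad_model_matrix(grid)
spv = N * np.einsum('ij,jk,ik->i', F, XtX_inv, F)

# FDS "curve": sorted SPV against cumulative fraction of the design space
fds = np.sort(spv)
print(f"D value: {d_value:.3f}")
print(f"median SPV: {np.median(spv):.2f}, max SPV: {fds[-1]:.2f}")
```

Comparing the full sorted `fds` curves of two candidate designs conveys far more than the single `d_value` number, which is exactly the contrast the abstract draws between graphical methods and alphabetic optimality criteria.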

Original language: English (US)
Pages (from-to): 629-641
Number of pages: 13
Journal: Journal of Statistical Planning and Inference
Issue number: 2
State: Published - Feb 1 2009


Keywords

  • Design optimality
  • Fraction of design space plots
  • Graphical methods
  • Variance dispersion graphs

ASJC Scopus subject areas

  • Statistics and Probability
  • Statistics, Probability and Uncertainty
  • Applied Mathematics


