Controlling the error probabilities of model selection information criteria using bootstrapping

Michael Cullan, Scott Lidgard, Beckett Sterner

Research output: Contribution to journal › Article › peer-review


Abstract

The Akaike Information Criterion (AIC) and related information criteria are powerful and increasingly popular tools for comparing multiple, non-nested models without the specification of a null model. However, existing procedures for information-theoretic model selection do not provide explicit and uniform control over error rates for the choice between models, a key feature of classical hypothesis testing. We show how to extend notions of Type-I and Type-II error to more than two models without requiring a null. We then present the Error Control for Information Criteria (ECIC) method, a bootstrap approach to controlling Type-I error using Difference of Goodness of Fit (DGOF) distributions. We apply ECIC to empirical and simulated data in time series and regression contexts to illustrate its value for parametric Neyman–Pearson classification. An R package implementing the bootstrap method is publicly available.
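
The paper's publicly available R package is the authoritative implementation. As a rough illustration of the bootstrap logic the abstract describes, the sketch below calibrates a rejection threshold for the fit gap between two regression candidates. Treating the DGOF as a plain AIC difference, and all names in the code (fit_simple, fit_complex, B, alpha), are assumptions made for this example, not the ECIC package's API.

    # Minimal sketch of the bootstrap idea in base R, assuming the DGOF is
    # an AIC difference between two candidates; illustrative only.
    set.seed(1)

    # Hypothetical data with a weak quadratic signal.
    n <- 100
    dat <- data.frame(x = seq(0, 1, length.out = n))
    dat$y <- 1 + 0.5 * dat$x + 0.3 * dat$x^2 + rnorm(n, sd = 0.5)

    fit_simple  <- lm(y ~ x, data = dat)          # plays the "null" role
    fit_complex <- lm(y ~ x + I(x^2), data = dat) # competing candidate

    # Observed difference of goodness of fit: positive favors the complex model.
    dgof_obs <- AIC(fit_simple) - AIC(fit_complex)

    # Bootstrap the DGOF distribution under the simpler model: simulate new
    # responses from its fit, refit both candidates, and record the AIC gap.
    B <- 999
    dgof_boot <- replicate(B, {
      dat_b <- dat
      dat_b$y <- unlist(simulate(fit_simple))
      AIC(lm(y ~ x, data = dat_b)) - AIC(lm(y ~ x + I(x^2), data = dat_b))
    })

    # Critical value fixing the Type-I error (selecting the complex model
    # when the simple one is adequate) at alpha.
    alpha <- 0.05
    crit <- quantile(dgof_boot, 1 - alpha)

    # Select the complex model only if the observed gap clears the threshold.
    dgof_obs > crit

Rejecting the simpler model only when the observed DGOF exceeds the bootstrap (1 - alpha) quantile is what pins the Type-I error rate near alpha; the usual "smallest AIC wins" rule carries no such calibration, which is the gap ECIC is designed to close.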

Original language: English (US)
Pages (from-to): 2565-2581
Number of pages: 17
Journal: Journal of Applied Statistics
Volume: 47
Issue number: 13-15
State: Published - Nov 17, 2020

Keywords

  • Error statistics
  • Neyman–Pearson classification
  • bootstrap
  • hypothesis testing
  • non-nested models

ASJC Scopus subject areas

  • Statistics and Probability
  • Statistics, Probability and Uncertainty
