Abstract
The Akaike Information Criterion (AIC) and related information criteria are powerful and increasingly popular tools for comparing multiple, non-nested models without the specification of a null model. However, existing procedures for information-theoretic model selection do not provide explicit and uniform control over error rates for the choice between models, a key feature of classical hypothesis testing. We show how to extend notions of Type-I and Type-II error to more than two models without requiring a null. We then present the Error Control for Information Criteria (ECIC) method, a bootstrap approach to controlling Type-I error using Difference of Goodness of Fit (DGOF) distributions. We apply ECIC to empirical and simulated data in time series and regression contexts to illustrate its value for parametric Neyman–Pearson classification. An R package implementing the bootstrap method is publicly available.
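The core idea described in the abstract, calibrating a critical value for the Difference of Goodness of Fit (DGOF, e.g. an AIC difference) via a parametric bootstrap so that Type-I error is controlled at a chosen level, can be illustrated with a minimal sketch. This is not the authors' ECIC R package; it is a hypothetical Python toy comparing two Gaussian models (mean fixed at zero vs. mean estimated), with all function names invented for illustration.

```python
# Illustrative sketch of bootstrap Type-I error control for an AIC-based
# model choice. Hypothetical code; not the ECIC R package from the paper.
import numpy as np

rng = np.random.default_rng(0)

def normal_loglik(x, mu, sigma):
    """Log-likelihood of i.i.d. N(mu, sigma^2) data."""
    return np.sum(-0.5 * np.log(2 * np.pi * sigma**2)
                  - (x - mu)**2 / (2 * sigma**2))

def delta_aic(x):
    """DGOF statistic: AIC(Model 0) - AIC(Model 1); positive favors Model 1."""
    # Model 0: mean fixed at 0 (one free parameter: sigma).
    s0 = np.sqrt(np.mean(x**2))
    aic0 = 2 * 1 - 2 * normal_loglik(x, 0.0, s0)
    # Model 1: mean estimated (two free parameters: mu, sigma).
    mu1 = np.mean(x)
    s1 = np.sqrt(np.mean((x - mu1)**2))
    aic1 = 2 * 2 - 2 * normal_loglik(x, mu1, s1)
    return aic0 - aic1

def bootstrap_critical_value(x, alpha=0.05, B=2000):
    """Parametric bootstrap under Model 0: simulate the DGOF null
    distribution and return its (1 - alpha) quantile."""
    s0 = np.sqrt(np.mean(x**2))
    boot = [delta_aic(rng.normal(0.0, s0, size=len(x))) for _ in range(B)]
    return np.quantile(boot, 1 - alpha)

x = rng.normal(0.0, 1.0, size=200)      # data actually generated by Model 0
crit = bootstrap_critical_value(x)
# Prefer Model 1 only when the observed DGOF exceeds the calibrated cutoff,
# so Model 1 is wrongly chosen with probability ~alpha when Model 0 is true.
prefer_model_1 = delta_aic(x) > crit
```

Choosing Model 1 only when the observed AIC difference exceeds the bootstrapped quantile, rather than whenever it is positive, is what gives the procedure an explicit, uniform Type-I error rate in the Neyman–Pearson sense described above.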
| Field | Value |
|---|---|
| Original language | English (US) |
| Pages (from-to) | 2565-2581 |
| Number of pages | 17 |
| Journal | Journal of Applied Statistics |
| Volume | 47 |
| Issue number | 13-15 |
| DOIs | |
| State | Published - Nov 17 2020 |
Keywords
- Error statistics
- Neyman–Pearson classification
- bootstrap
- hypothesis testing
- non-nested models
ASJC Scopus subject areas
- Statistics and Probability
- Statistics, Probability and Uncertainty