Abstract
This paper describes and compares various hierarchical mixture prior formulations for variable selection uncertainty in normal linear regression models. These include the nonconjugate SSVS formulation of George and McCulloch (1993), as well as conjugate formulations that allow analytical simplification. Hyperparameter settings that base selection on practical significance, and the implications of using mixtures with point priors, are discussed. Computational methods for posterior evaluation and exploration are considered. Rapid updating methods are seen to make exhaustive evaluation via Gray Code sequencing feasible in moderately sized problems, and to enable fast Markov chain Monte Carlo exploration in large problems. Estimation of normalization constants is seen to provide improved posterior estimates of individual model probabilities and of the total visited probability. Various procedures are illustrated on simulated sample problems and on a real problem concerning the construction of financial index tracking portfolios.
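The exhaustive Gray Code evaluation mentioned in the abstract can be sketched as follows. This is a minimal illustration only: it substitutes a Zellner g-prior marginal likelihood (with an assumed hyperparameter `g`) for the paper's conjugate SSVS formulation, uses a uniform prior over models, and recomputes each marginal from scratch instead of exploiting the one-variable-per-step structure for the rapid updating the paper describes. The function names (`gray_code_flips`, `log_marginal`, `exhaustive_posterior`) are hypothetical and not from the paper.

```python
import numpy as np

def gray_code_flips(p):
    """Yield the bit index toggled at each step of the binary-reflected Gray
    code on p bits, so that all 2**p models are visited with exactly one
    variable added or dropped per step."""
    prev = 0
    for i in range(1, 2 ** p):
        g = i ^ (i >> 1)                      # Gray code of i
        yield (g ^ prev).bit_length() - 1     # position of the single changed bit
        prev = g

def log_marginal(X, y, gamma, g=100.0):
    """Log marginal likelihood (up to a constant common to all models) of the
    model including the predictors flagged in `gamma`, under a Zellner g-prior.
    This is an illustrative stand-in for the paper's conjugate formulation."""
    n = len(y)
    yc = y - y.mean()
    tss = yc @ yc
    idx = np.flatnonzero(gamma)
    q = idx.size
    if q == 0:
        r2 = 0.0
    else:
        Xc = X[:, idx] - X[:, idx].mean(axis=0)
        beta_hat, *_ = np.linalg.lstsq(Xc, yc, rcond=None)
        resid = yc - Xc @ beta_hat
        r2 = 1.0 - (resid @ resid) / tss
    return (0.5 * (n - 1 - q) * np.log(1.0 + g)
            - 0.5 * (n - 1) * np.log(1.0 + g * (1.0 - r2)))

def exhaustive_posterior(X, y, g=100.0):
    """Visit all 2**p models in Gray Code order and return the indicator
    vectors with their normalized posterior probabilities under a uniform
    model prior.  Each marginal is recomputed from scratch here; the paper
    instead uses the single-variable change per step for rapid updating."""
    p = X.shape[1]
    gamma = np.zeros(p, dtype=bool)
    models = [gamma.copy()]
    logs = [log_marginal(X, y, gamma, g)]
    for flip in gray_code_flips(p):
        gamma[flip] = not gamma[flip]
        models.append(gamma.copy())
        logs.append(log_marginal(X, y, gamma, g))
    logs = np.asarray(logs)
    probs = np.exp(logs - logs.max())
    return models, probs / probs.sum()

# Example on simulated data: only the first two predictors matter.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 6))
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + rng.standard_normal(100)
models, probs = exhaustive_posterior(X, y)
best = int(np.argmax(probs))
print(models[best], probs[best])
```

For problems too large for the 2**p sweep, the same marginal could be embedded in a Markov chain Monte Carlo sampler over the indicator vector, which is the exploration strategy the abstract refers to for large problems.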
Original language | English (US) |
---|---|
Pages (from-to) | 339-373 |
Number of pages | 35 |
Journal | Statistica Sinica |
Volume | 7 |
Issue number | 2 |
State | Published - Apr 1997 |
Externally published | Yes |
Keywords
- Conjugate prior
- Gibbs sampling
- Gray Code
- Hierarchical models
- Markov chain Monte Carlo
- Metropolis-Hastings algorithms
- Normal mixtures
- Normalization constant
- Regression
- Simulation
ASJC Scopus subject areas
- Statistics and Probability
- Statistics, Probability and Uncertainty