Abstract
Background: Reinforcement learning models provide excellent descriptions of learning in multiple species across a variety of tasks. Many researchers are interested in relating parameters of reinforcement learning models to neural measures, psychological variables, or experimental manipulations. We demonstrate that parameter identification is difficult because a range of parameter values provide fits of approximately equal quality to the data. This identification problem has a large impact on power: we show that a researcher who wants to detect a medium-sized correlation (r = .3) between a variable and learning rate with 80% power must collect 60% more subjects than a typical power analysis specifies, in order to account for the noise introduced by model fitting.

New method: We derive a Bayesian optimal model-fitting technique that takes advantage of the information contained in choices and reaction times to constrain parameter estimates.

Results: Using simulation and empirical data, we show that this method substantially improves the ability to recover learning rates.

Comparison with existing methods: We compare this method against the use of Bayesian priors. In simulations, the combined use of Bayesian priors and reaction times confers the highest parameter identifiability. However, in real data, where the priors may have been misspecified, the use of Bayesian priors interferes with the ability of reaction time data to improve parameter identifiability.

Conclusions: We present a simple technique that takes advantage of readily available data to substantially improve the quality of inferences that can be drawn from the parameters of reinforcement learning models.
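The abstract's power claim can be made concrete with a worked check. The sketch below is illustrative only: it uses the standard Fisher z approximation for the sample size needed to detect a correlation at two-tailed alpha = .05, which the abstract does not specify, and the exact numbers depend on that choice; the abstract's point is the 60% inflation factor, not the particular n.

```python
# Illustrative check of the abstract's power claim (assumptions:
# Fisher z approximation, two-tailed alpha = .05).
import numpy as np
from scipy.stats import norm

def n_for_correlation(r, power=0.80, alpha=0.05):
    z_a = norm.ppf(1 - alpha / 2)   # 1.96 for alpha = .05
    z_b = norm.ppf(power)           # 0.84 for 80% power
    return int(np.ceil(((z_a + z_b) / np.arctanh(r)) ** 2 + 3))

n = n_for_correlation(0.3)              # about 85 under this approximation
print(n, int(np.ceil(1.6 * n)))         # and with 60% more: about 136
```

The abstract describes the joint use of choices and reaction times only at a high level, without equations. The following is a minimal sketch of the general idea, not the paper's derivation: it assumes a two-armed bandit, a softmax choice rule over Q-values, a lognormal reaction-time likelihood whose mean shrinks as the value difference grows, and maximum-likelihood fitting. All function and parameter names are hypothetical.

```python
# A minimal sketch (not the paper's exact method) of jointly fitting
# choices and reaction times in a Q-learning model.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import lognorm

def neg_log_lik(params, choices, rewards, rts):
    """Joint negative log-likelihood of choices and reaction times."""
    alpha, beta, rt_base, rt_slope, rt_sigma = params
    q = np.zeros(2)                      # Q-values for the two options
    nll = 0.0
    for c, r, rt in zip(choices, rewards, rts):
        # Choice likelihood: softmax over current Q-values
        p = np.exp(beta * q) / np.exp(beta * q).sum()
        nll -= np.log(p[c] + 1e-12)
        # RT likelihood: lognormal, faster when |Q1 - Q2| is large
        # (easier discriminations produce quicker responses)
        mu = rt_base - rt_slope * abs(q[0] - q[1])
        nll -= lognorm.logpdf(rt, s=rt_sigma, scale=np.exp(mu))
        # Q-learning update on the chosen option
        q[c] += alpha * (r - q[c])
    return nll

# Example usage with simulated placeholder data:
rng = np.random.default_rng(0)
choices = rng.integers(0, 2, 200)
rewards = rng.binomial(1, np.where(choices == 0, 0.7, 0.3))
rts = rng.lognormal(-0.5, 0.4, 200)
fit = minimize(neg_log_lik, x0=[0.3, 3.0, -0.3, 0.5, 0.4],
               args=(choices, rewards, rts),
               bounds=[(0.01, 0.99), (0.1, 20), (-2, 2), (0, 5), (0.05, 2)],
               method="L-BFGS-B")
print(fit.x)  # recovered learning rate, inverse temperature, RT parameters
```

Because reaction times covary with the trial-by-trial value difference, they carry information about the Q-value trajectory, and therefore about the learning rate, that is independent of the choices themselves; this is what lets the joint likelihood constrain parameters more tightly than choices alone.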
Original language | English (US) |
---|---|
Pages (from-to) | 37-44 |
Number of pages | 8 |
Journal | Journal of Neuroscience Methods |
Volume | 317 |
DOIs | |
State | Published - Apr 1 2019 |
Keywords
- Delay discounting
- Intertemporal choice
- Parameter estimation
- Power
- Q-learning
- Reproducibility
- Striatum
ASJC Scopus subject areas
- Neuroscience (all)