Department of Psychological and Brain Sciences
For decades, behavioral scientists have used the matching law to quantify how animals distribute their choices among multiple options in response to the reinforcement they receive. More recently, many reinforcement learning (RL) models have been developed to explain choice by integrating reward feedback over time. Despite the reasonable success of RL models in capturing choice on a trial-by-trial basis, these models cannot capture variability in matching behavior. To address this, we developed metrics based on information theory and applied them to choice data from dynamic learning tasks in mice and monkeys. We found that a single entropy-based metric can explain 50% and 41% of the variance in matching in mice and monkeys, respectively. We then used the limitations of existing RL models in capturing entropy-based metrics to construct more accurate models of choice. Together, our entropy-based metrics provide a model-free tool to predict adaptive choice behavior and reveal underlying neural mechanisms.
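The abstract does not spell out how such entropy-based metrics are computed, but the general idea rests on Shannon entropy applied to local responses to reward. As a hedged illustration only (not necessarily the paper's exact metric), one can measure the conditional entropy of a stay/switch strategy given the previous trial's reward outcome: lower entropy indicates a more systematic, reward-dependent strategy. The function names and simulated data below are hypothetical.

```python
import numpy as np

def conditional_entropy(strategy, condition):
    """H(strategy | condition) in bits, via H(X|Y) = H(X,Y) - H(Y).

    `strategy` and `condition` are equal-length label sequences
    (e.g., stay/switch on each trial, and prior reward yes/no).
    """
    def entropy(labels):
        _, counts = np.unique(np.asarray(labels), return_counts=True)
        p = counts / counts.sum()
        return float(-np.sum(p * np.log2(p)))

    # Encode joint outcomes as combined labels for np.unique.
    joint = np.array([f"{s}|{c}" for s, c in zip(strategy, condition)])
    return entropy(joint) - entropy(condition)

# Hypothetical session: binary choices and reward outcomes.
rng = np.random.default_rng(0)
choices = rng.integers(0, 2, size=500)
rewards = rng.integers(0, 2, size=500)

# Stay (1) vs. switch (0) on each trial after the first,
# conditioned on whether the preceding trial was rewarded.
stay = (choices[1:] == choices[:-1]).astype(int)
h = conditional_entropy(stay, rewards[:-1])
```

For a binary stay/switch strategy, this conditional entropy is bounded between 0 bits (strategy fully determined by prior reward) and 1 bit (strategy independent of prior reward), which makes it a convenient model-free summary to correlate with matching behavior.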
Trepka, E., Spitmaan, M., Bari, B.A. et al. Entropy-based metrics for predicting choice behavior based on local response to reward. Nat Commun 12, 6567 (2021). https://doi.org/10.1038/s41467-021-26784-w
Dartmouth Digital Commons Citation
Trepka, Ethan; Spitmaan, Mehran; Bari, Bilal A.; Costa, Vincent D.; Cohen, Jeremiah Y.; and Soltani, Alireza, "Entropy-based metrics for predicting choice behavior based on local response to reward" (2021). Dartmouth Scholarship. 4116.