The devilish details affecting TDRL models in dopamine research

Published on February 27, 2025

Over recent decades, temporal difference reinforcement learning (TDRL) models have successfully explained much dopamine (DA) activity. This success has invited heightened scrutiny of late, with many studies challenging the validity of TDRL models of DA function. Yet when evaluating these models, the devil is truly in the details. TDRL is a broad class of algorithms sharing core ideas but differing greatly in implementation and predictions. It is therefore important to identify the defining aspects of the TDRL framework being tested, and to use state spaces and model architectures that capture the known complexity of the behavioral representations and neural systems involved. Here, we discuss several examples that illustrate the importance of these considerations.
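To make the core idea concrete, here is a minimal sketch (not taken from the article) of tabular TD(0) learning on a simple cue-then-reward sequence. The state chain, learning rate, and discount factor are illustrative assumptions; the TD prediction error `delta` is the quantity most often compared to phasic dopamine responses.

```python
import numpy as np

def run_td(n_trials=200, n_states=5, alpha=0.1, gamma=0.95):
    """Illustrative tabular TD(0) on a deterministic cue -> reward chain.

    States 0..n_states-2 are cue/delay steps; a reward of 1.0 arrives on
    the transition out of the final state. All parameters are hypothetical.
    """
    V = np.zeros(n_states + 1)            # values, plus a terminal state with V = 0
    deltas = np.zeros((n_trials, n_states))
    for t in range(n_trials):
        for s in range(n_states):
            r = 1.0 if s == n_states - 1 else 0.0
            delta = r + gamma * V[s + 1] - V[s]   # TD prediction error
            V[s] += alpha * delta                 # value update
            deltas[t, s] = delta
    return V, deltas

V, deltas = run_td()
# Across trials, the error appears first at the reward and propagates
# backward toward the cue, then shrinks as the values converge.
```

Even this toy example shows why implementation details matter: changing the state space (e.g., how the delay between cue and reward is discretized) changes the model's trial-by-trial predictions, which is exactly the kind of detail the article argues must be specified before a TDRL account can be tested.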
