Bayesian associative learning
Gershman, S. J. (2015). A unifying probabilistic view of associative learning. PLoS Computational Biology, 11, e1004567.
You should read roughly the first half of the paper, up to the start of the temporal difference section (bottom of page 7). There is a great YouTube talk by Sam Gershman covering this paper that is really helpful for understanding it; the portion covering the first half of the paper ends at around the 35-minute mark. You can watch the talk instead of, or in addition to, doing the reading if you like.
Your goal should be to understand why this model differs from the Rescorla-Wagner model you learned about at the start of the semester, to be able to talk about their relative merits, and to see how the two relate to higher-level ideas about frameworks for thinking about learning.
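If it helps to see the contrast concretely, here is a minimal Python sketch (my own toy code, not taken from the paper) of a Rescorla-Wagner learner next to a Kalman-filter learner of the kind the reading describes. The function names, parameter values, and the backward-blocking simulation at the bottom are illustrative assumptions on my part; the only dependency is numpy. The thing to notice is that Rescorla-Wagner carries a point estimate of the weights with a fixed learning rate, while the Kalman filter carries a full posterior (mean plus covariance), so its effective learning rate depends on uncertainty and the covariance lets learning about one cue retroactively change beliefs about another.

```python
import numpy as np

def rescorla_wagner(stimuli, rewards, alpha=0.1):
    """Rescorla-Wagner: point estimate of weights, fixed learning rate."""
    w = np.zeros(stimuli.shape[1])
    for x, r in zip(stimuli, rewards):
        delta = r - x @ w          # prediction error
        w = w + alpha * delta * x  # same learning rate for every cue, every trial
    return w

def kalman_rw(stimuli, rewards, tau2=0.01, sigma2_r=1.0, sigma2_w=1.0):
    """Kalman-filter learner: posterior mean AND covariance over weights,
    so the effective learning rate (Kalman gain) tracks uncertainty."""
    n_cues = stimuli.shape[1]
    m = np.zeros(n_cues)             # posterior mean of the weights
    S = sigma2_w * np.eye(n_cues)    # posterior covariance of the weights
    for x, r in zip(stimuli, rewards):
        S = S + tau2 * np.eye(n_cues)   # weights are assumed to drift between trials
        delta = r - x @ m               # prediction error (same quantity as in RW)
        lam = x @ S @ x + sigma2_r      # predicted variance of the reward
        k = S @ x / lam                 # Kalman gain: uncertainty-weighted learning rate
        m = m + k * delta               # looks like RW, but with a vector learning rate
        S = S - np.outer(k, x) @ S      # uncertainty shrinks as evidence comes in
    return m

# Backward blocking: phase 1 trains A+B -> reward, phase 2 trains A alone -> reward.
# B is absent in phase 2, so RW never touches its weight; the Kalman filter's
# covariance couples A and B, so learning about A retroactively lowers B.
stimuli = np.vstack([np.tile([1.0, 1.0], (10, 1)),   # phase 1: A+B
                     np.tile([1.0, 0.0], (10, 1))])  # phase 2: A alone
rewards = np.ones(20)
print("RW final weights [A, B]:    ", np.round(rescorla_wagner(stimuli, rewards), 2))
print("Kalman final weights [A, B]:", np.round(kalman_rw(stimuli, rewards), 2))
```

Running it, Rescorla-Wagner leaves B's weight wherever phase 1 put it, while the Kalman filter pulls it back down. That kind of retrospective revaluation (e.g., backward blocking) is exactly the sort of phenomenon the Bayesian treatment handles naturally and the fixed-learning-rate delta rule does not.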
Gershman, S. J., & Niv, Y. (2012). Exploring a latent cause theory of classical conditioning. Learning & Behavior, 40, 255-268.
- You should understand what motivated this model (relative to classical conditioning models), the phenomena described, and how the model accounts for them (a toy sketch of the latent-cause idea follows below).
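To make the latent-cause idea concrete, here is a toy Python sketch (my own simplification, not the inference algorithm in the paper). Each trial is explained either by an existing latent cause or by a brand-new one, with a Chinese-restaurant-process prior trading off against how well a cause's statistics fit the observed cues and US. All names, parameter values, and the acquisition/extinction/renewal simulation at the bottom are assumptions for illustration; the only dependency is numpy.

```python
import numpy as np

def latent_cause_toy(trials, alpha=1.0):
    """Toy local-MAP latent-cause learner. Each trial is (binary cue vector,
    us in {0, 1}). The US is predicted by averaging over which cause generated
    the cues; after the outcome, the trial is assigned to its most probable
    cause and that cause's statistics are updated."""
    causes = []  # per cause: number of trials, cue counts, US count
    assignments, predictions = [], []
    for cues, us in trials:
        cues = np.asarray(cues, dtype=float)
        n_total = sum(c["n"] for c in causes)

        def cue_lik(c):   # Beta(1,1)-smoothed Bernoulli likelihood of the cues
            p = (c["cue_counts"] + 1.0) / (c["n"] + 2.0)
            return float(np.prod(np.where(cues == 1, p, 1 - p)))

        def us_prob(c):   # smoothed P(US | cause)
            return (c["us_count"] + 1.0) / (c["n"] + 2.0)

        # Posterior over causes given the cues; the last entry is a brand-new
        # cause, whose prior probability is set by the CRP concentration alpha.
        post = [c["n"] / (n_total + alpha) * cue_lik(c) for c in causes]
        post.append(alpha / (n_total + alpha) * 0.5 ** len(cues))
        post = np.array(post) / np.sum(post)

        # Predicted US probability, marginalizing over the inferred cause.
        p_us = np.array([us_prob(c) for c in causes] + [0.5])
        predictions.append(float(post @ p_us))

        # After the outcome: MAP assignment using cues AND US, then update counts.
        joint = post * np.where(us == 1, p_us, 1 - p_us)
        z = int(np.argmax(joint))
        if z == len(causes):
            causes.append({"n": 0, "cue_counts": np.zeros_like(cues), "us_count": 0.0})
        causes[z]["n"] += 1
        causes[z]["cue_counts"] += cues
        causes[z]["us_count"] += us
        assignments.append(z)
    return assignments, predictions

# ABA renewal: acquire tone -> US in context A, extinguish the tone in context B,
# then test the tone back in context A. Cues are [tone, contextA, contextB].
acquisition = [([1, 1, 0], 1)] * 10
extinction  = [([1, 0, 1], 0)] * 10
test        = [([1, 1, 0], 0)]
assignments, preds = latent_cause_toy(acquisition + extinction + test)
print("cause assignments:", assignments)
print("P(US) on the test trial:", round(preds[-1], 2))
```

Whether or not this exact paradigm appears in the paper, the sketch shows the general logic: extinction trials that look different get captured by a new cause, so the original cause's tone-US association is preserved rather than unlearned, and responding comes back when the old context reinstates the old cause. That is the kind of move that a purely error-driven weight update cannot make.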
The primary goal this week is to talk about how this framing of the learning problem differs from prior models of classical conditioning, what it buys us, and how compelling you find this account.