Explaining Human Causal Learning using a Dynamic Probabilistic Model
Advisor: Allan L. Yuille
Recent psychological experiments (Beckers, De Houwer, Pineño, & Miller, 2005; Beckers, Miller, De Houwer, & Urushihara, 2006) have revealed that pre- and/or post-training with unrelated cues can significantly modulate the performance of humans in causal learning tasks and of rats in the standard Pavlovian conditioning paradigm. This modulation can be large enough that classical conditioning phenomena such as forward and backward blocking vanish, contrary to the predictions of traditional psychological theories of associative learning. In this work we present a novel Bayesian theory of sequential causal learning that explains these experimental results. In addition, we extend our theory to account for the highlighting effect (Daw, Courville, & Dayan, 2007; Kruschke, 2001, 2006) and then generalize our formalism to model the case of multiple cues and outcomes in the learning framework. Our Bayesian theory assumes that humans and rats have available several alternative generative models (linear-sum, MAX, noisy-MAX, etc.) for causal learning. By exploring the model space, we narrow the plausible models to two possibilities (linear-sum, noisy-MAX) in which the cues and outcomes are both continuous variables. We implement the models using two approaches: (1) discretizing the cue and outcome variables (making sure the discretization is dense enough) and (2) using the particle filter algorithm as an approximation to statistical inference. Our results show that model selection and model averaging are able to capture the effects of pre- and post-training, respectively. We conjecture that the choice between model selection and model averaging is determined by when the information for making this choice becomes available. In the experiments with pretraining, the information is available (through the pretraining itself) before the learning trials, so humans/rats know which model to use.
For posttraining, the information only becomes available after the learning trials, which requires humans/rats to make retrospective evaluations. Lastly, our generalization to multiple cues and outcomes is tested within the highlighting paradigm, and we show that this more robust approach provides an excellent account of the experimental findings.
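To make the main ideas concrete, the sketch below is an illustrative (and deliberately simplified) Python implementation, not the dissertation's actual model: two candidate generative models (linear-sum and noisy-MAX) with continuous causal weights, a particle filter for sequential inference, and an accumulated log marginal likelihood of the kind that model selection or model averaging would operate on. The Gaussian outcome noise, uniform priors, jitter size, and forward-blocking trial schedule (A+ trials followed by AB+ trials) are all assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two candidate generative models over continuous-valued causal weights.
def linear_sum(weights, cues):
    # Linear-sum: the expected outcome is the sum of the active cues' weights.
    return float(weights @ cues)

def noisy_max(weights, cues):
    # Noisy-MAX (noisy-OR form): 1 - prod_i (1 - w_i * c_i).
    return 1.0 - float(np.prod(1.0 - weights * cues))

def particle_filter(trials, model, n_particles=2000, noise=0.1):
    """Sequential Bayesian estimation of causal weights under one model.
    Also accumulates the log marginal likelihood, the quantity a learner
    would need for model selection or model averaging across models."""
    n_cues = len(trials[0][0])
    particles = rng.uniform(0.0, 1.0, size=(n_particles, n_cues))
    log_evidence = 0.0
    for cues, outcome in trials:
        cues = np.asarray(cues, dtype=float)
        preds = np.array([model(w, cues) for w in particles])
        # Gaussian likelihood of the observed outcome under each particle.
        lik = np.exp(-0.5 * ((outcome - preds) / noise) ** 2)
        log_evidence += np.log(lik.mean() + 1e-300)
        # Resample in proportion to likelihood, then jitter to avoid collapse.
        idx = rng.choice(n_particles, size=n_particles, p=lik / lik.sum())
        particles = np.clip(
            particles[idx] + rng.normal(0.0, 0.02, particles.shape), 0.0, 1.0)
    return particles.mean(axis=0), log_evidence

# Forward-blocking schedule: 10 A+ trials, then 10 AB+ trials.
trials = [([1, 0], 1.0)] * 10 + [([1, 1], 1.0)] * 10
results = {name: particle_filter(trials, model)
           for name, model in [("linear-sum", linear_sum),
                               ("noisy-MAX", noisy_max)]}
for name, (mean_w, log_ev) in results.items():
    print(f"{name}: weights={mean_w.round(2)}, log evidence={log_ev:.1f}")
```

Under this sketch the two models diverge in the expected way: linear-sum drives cue B's weight toward zero (blocking), because A alone already accounts for the outcome and the sum constrains B, whereas noisy-MAX leaves B's weight near its prior, since the prediction saturates once A's weight is high.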