This manuscript describes how learning curves can be used to provide a strong test for computational models of cognitive processes. As an example, we show how this method can be used to evaluate the Exemplar-Based Random-Walk model of categorization (EBRW; Nosofsky & Palmeri, 1997a). EBRW is an extension of the Generalized Context Model (GCM; Nosofsky, 1984, 1986) and predicts that mean response times (RTs) follow a power function. It can be shown analytically, however, that the learning rate (i.e., the curvature) predicted by the model can only equal 1, a value rarely observed in empirical data. We also explored an extended version of EBRW that includes background noise elements (Nosofsky & Alfonso-Reese, 1999) and identified conditions under which this model can predict curvatures different from 1. The inability of these models to predict the wide variety of curvatures observed in human data can be resolved by a simple extension to EBRW in which the original exponential distribution of retrieval times is replaced by a Weibull distribution. Additional predictions regarding learning curves are discussed.
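The curvature argument can be illustrated with a minimal simulation sketch. This is not the full random-walk model, only the retrieval-race intuition behind it: on a trial with n stored exemplars, the effective retrieval time is the minimum of n independent draws. With exponential retrieval times, the minimum of n exponentials has mean proportional to n^(-1), fixing the fitted curvature at 1; with Weibull retrieval times of shape gamma, the minimum scales as n^(-1/gamma), so other curvatures become possible. The distribution parameters and trial counts below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_min_rt(n_exemplars, dist, trials=20000):
    """Mean of the fastest retrieval among n_exemplars racing memory traces."""
    samples = dist(size=(trials, n_exemplars))
    return samples.min(axis=1).mean()

ns = np.array([1, 2, 4, 8, 16, 32])  # number of stored exemplars over learning

# Exponential retrieval times: min of n Exp(1) draws is Exp(n),
# so mean RT is proportional to n^(-1) and the curvature is forced to 1.
exp_rts = [mean_min_rt(n, rng.exponential) for n in ns]

# Weibull retrieval times with shape gamma: min of n draws is Weibull
# with scale n^(-1/gamma), so the curvature becomes 1/gamma (here 0.5).
gamma = 2.0
wb_rts = [mean_min_rt(n, lambda size: rng.weibull(gamma, size)) for n in ns]

# Estimate curvature as minus the slope of log mean RT against log n.
c_exp = -np.polyfit(np.log(ns), np.log(exp_rts), 1)[0]
c_wb = -np.polyfit(np.log(ns), np.log(wb_rts), 1)[0]
print(c_exp, c_wb)
```

Under these assumptions the exponential version recovers a curvature near 1 regardless of the rate parameter, while the Weibull version tracks 1/gamma, which is the kind of flexibility the abstract attributes to the extended model.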

Additional Metadata
Keywords Categorization models, Exemplar-based random-walk model, Learning curves, Power curve
Persistent URL dx.doi.org/10.1016/j.jmp.2013.05.003
Journal Journal of Mathematical Psychology
Citation
Cousineau, D., Lacroix, G., Giguère, G., & Hélie, S. (2013). Learning curves as strong evidence for testing models: The case of EBRW. Journal of Mathematical Psychology, 57(1-2), 107–116. doi:10.1016/j.jmp.2013.05.003