Unveiling the Flexibility of Category Learning: A New Model

Published on April 12, 2022

Imagine you’re learning different categories of fruit, but the emphasis on features changes depending on the order in which you study them. That’s where the Sequential Attention Theory Model (SAT-M) comes in! Unlike other models, SAT-M considers not only an item’s category assignment but also how it relates to its temporally neighboring items. By fitting the model to data from experiments with different trial sequences (interleaved vs. blocked), researchers found that SAT-M accurately captures this effect of local context and predicts which training schedule will yield better test performance. In fact, other models such as ALCOVE and SUSTAIN, as well as a version of SAT-M without locally adaptive encoding, fit the results poorly by comparison. And guess what? SAT-M’s best-fit encoding parameters even matched learners’ looking times during training. It’s fascinating to see how our brains adapt to changing contexts when learning new things! Curious to dive deeper? Check out the research article linked below!

Abstract
Although current exemplar models of category learning are flexible and can capture how different features are emphasized for different categories, they still lack the flexibility to adapt to local changes in category learning, such as the effect of different sequences of study. In this paper, we introduce a new model of category learning, the Sequential Attention Theory Model (SAT-M), in which the encoding of each presented item is influenced not only by its category assignment (global context) as in other exemplar models, but also by how its properties relate to the properties of temporally neighboring items (local context). By fitting SAT-M to data from experiments comparing category learning with different sequences of trials (interleaved vs. blocked), we demonstrate that SAT-M captures the effect of local context and predicts when interleaved or blocked training will result in better testing performance across three different studies. Comparatively, ALCOVE, SUSTAIN, and a version of SAT-M without locally adaptive encoding provided poor fits to the results. Moreover, we evaluated the direct prediction of the model that different sequences of training change what learners encode and determined that the best-fit encoding parameter values match learners’ looking times during training.
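The core idea — exemplar similarity computed under attention weights that are nudged by the properties of the immediately preceding item — can be illustrated with a minimal sketch. This is not the paper's actual model or equations; the `local_attention` reweighting rule and all parameter names (`gamma`, `c`) are hypothetical simplifications, shown only to make the global-vs-local contrast concrete:

```python
import numpy as np

def exemplar_similarity(probe, exemplars, attention, c=1.0):
    """ALCOVE-style similarity: exponential decay of the
    attention-weighted city-block distance to each stored exemplar."""
    dists = np.abs(exemplars - probe) @ attention
    return np.exp(-c * dists)

def local_attention(item, prev_item, base_attention, gamma=0.5):
    """Hypothetical local-context rule (NOT SAT-M's actual equations):
    boost attention to features that differ from the previous trial's
    item, then renormalize so the weights sum to 1."""
    diff = np.abs(item - prev_item)
    w = base_attention * (1.0 + gamma * diff)
    return w / w.sum()

# Toy run: two stored exemplars, uniform baseline attention.
exemplars = np.array([[0.0, 1.0],
                      [1.0, 0.0]])
base = np.array([0.5, 0.5])
probe = np.array([0.1, 0.9])   # current item
prev = np.array([0.9, 0.9])    # item from the previous trial

# Feature 0 differs from the previous item, so it draws extra attention.
att = local_attention(probe, prev, base)
sims = exemplar_similarity(probe, exemplars, att)
```

Under a blocked sequence, successive items tend to share category-relevant features, so this kind of rule would shift attention differently than under interleaving — which is the intuition behind the model's sequence-dependent predictions.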

Read Full Article (External Site)
