Abstract
Complex sequences are ubiquitous in human mental life, structuring representations within many different cognitive domains—natural language, music, mathematics, and logic, to name a few. However, the representational and computational machinery used to learn abstract grammars and process complex sequences is unknown. Here, we used an artificial grammar learning task to study how adults abstract center-embedded and cross-serial grammars that generalize beyond the level of embedding of the training sequences. We tested untrained generalizations to longer sequence lengths and used error patterns, item-to-item response times, and a Bayesian mixture model to test two possible memory architectures that might underlie the sequence representations of each grammar: stacks and queues. We find that adults learned both grammars, that the cross-serial grammar was easier to learn and produce than the matched center-embedded grammar, and that item-to-item touch times during sequence generation differed systematically between the two types of sequences. Contrary to widely held assumptions, we find no evidence that a stack architecture is used to generate center-embedded sequences in an indexed AⁿBⁿ artificial grammar. Instead, the data and modeling converged on the conclusion that both center-embedded and cross-serial sequences are generated using a queue memory architecture. In this study, participants stored items in a first-in-first-out memory architecture and then accessed them via an iterative search over the stored list to generate the matched base pairs of center-embedded or cross-serial sequences.
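To make the two candidate retrieval schemes concrete, the sketch below is a minimal, purely illustrative Python example, not the authors' model or stimuli: the function names stack_generate and queue_search_generate, the order parameter, and the toy A/B items are assumptions introduced here. It contrasts the classic stack (last-in-first-out) account of center-embedded order with a first-in-first-out store whose items are retrieved by an iterative scan over the stored list, the kind of queue-plus-search mechanism the abstract describes.

```python
from collections import deque

def stack_generate(a_items, b_items):
    """Classic stack (LIFO) account of a center-embedded sequence:
    A1 A2 A3 B3 B2 B1, where each B pairs with the most recently stored A."""
    store, out = [], []
    for a, b in zip(a_items, b_items):
        out.append(a)
        store.append(b)          # push the partner of each produced A
    while store:
        out.append(store.pop())  # LIFO retrieval yields nested dependencies
    return out

def queue_search_generate(a_items, b_items, order):
    """Queue-plus-search account: partners are held in a first-in-first-out
    list and each required B is found by an iterative scan over that list.
    `order` (a hypothetical parameter) gives the indices in which the B items
    must be produced; the scan count per retrieval is returned as a crude
    proxy for item-to-item production time."""
    store, out, scan_steps = deque(), [], []
    for a, b in zip(a_items, b_items):
        out.append(a)
        store.append(b)                  # enqueue partners in FIFO order
    for idx in order:
        target = b_items[idx]
        steps = 0
        for item in list(store):         # iterate over the stored list
            steps += 1
            if item == target:
                store.remove(item)
                out.append(item)
                break
        scan_steps.append(steps)
    return out, scan_steps

a = ["A1", "A2", "A3"]
b = ["B1", "B2", "B3"]

print(stack_generate(a, b))
# ['A1', 'A2', 'A3', 'B3', 'B2', 'B1']

print(queue_search_generate(a, b, order=[0, 1, 2]))   # cross-serial: B1 B2 B3
# (['A1', 'A2', 'A3', 'B1', 'B2', 'B3'], [1, 1, 1])

print(queue_search_generate(a, b, order=[2, 1, 0]))   # center-embedded: B3 B2 B1
# (['A1', 'A2', 'A3', 'B3', 'B2', 'B1'], [3, 2, 1])
```

Under this toy scheme, cross-serial retrievals always find their target at the front of the store, while center-embedded retrievals require scans of decreasing length, which illustrates one way a single queue architecture could produce systematically different item-to-item production times for the two grammars.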
