Language and complex thinking often rely on our ability to understand nested structures. Whether we’re parsing a complicated sentence or following an intricate musical progression, our brains must somehow track multiple layers of information simultaneously. This cognitive challenge has long fascinated researchers seeking to understand how human minds organize and process abstract sequences.

Recent cognitive science research offers a surprising window into these mental mechanisms. With carefully constructed artificial grammar experiments, scientists can probe the computational strategies our brains use when confronting complex patterns. These investigations suggest that the way we process nested sequences may differ from what researchers have long assumed.

Understanding how we learn and generate complex sequences matters far beyond academic curiosity. These insights could inform approaches to education, communication, and even artificial intelligence design. By mapping the memory architectures that underlie human learning, researchers reveal the flexibility with which our cognitive systems turn abstract rules into structured, meaningful sequences.

Abstract
Complex sequences are ubiquitous in human mental life, structuring representations within many different cognitive domains—natural language, music, mathematics, and logic, to name a few. However, the representational and computational machinery used to learn abstract grammars and process complex sequences is unknown. Here, we used an artificial grammar learning task to study how adults abstract center-embedded and cross-serial grammars that generalize beyond the level of embedding of the training sequences. We tested untrained generalizations to longer sequence lengths and used error patterns, item-to-item response times, and a Bayesian mixture model to test two possible memory architectures that might underlie the sequence representations of each grammar: stacks and queues. We find that adults learned both grammars, that the cross-serial grammar was easier to learn and produce than the matched center-embedded grammar, and that item-to-item touch times during sequence generation differed systematically between the two types of sequences. Contrary to widely held assumptions, we find no evidence that a stack architecture is used to generate center-embedded sequences in an indexed AⁿBⁿ artificial grammar. Instead, the data and modeling converged on the conclusion that both center-embedded and cross-serial sequences are generated using a queue memory architecture. In this study, participants stored items in a first-in-first-out memory architecture and then accessed them via an iterative search over the stored list to generate the matched base pairs of center-embedded or cross-serial sequences.
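
The stack/queue contrast at the heart of the abstract is easy to make concrete. The sketch below is only an illustration under simplifying assumptions (the indexed tokens A1, B1, etc. and the function names are hypothetical, not the study's materials or its Bayesian model): a stack (last-in, first-out) yields center-embedded order by simply popping, a queue (first-in, first-out) yields cross-serial order by dequeuing, and producing center-embedded order from a queue-like store requires an iterative search over the stored list, roughly the kind of access the abstract's queue account describes.

```python
from collections import deque

# Illustrative sketch only: the token labels (A1, B1, ...) and function
# names are hypothetical, not taken from the study's stimuli or code.

def stack_center_embedded(n):
    """Generate A1..An then Bn..B1 by pushing A items and popping for B items (LIFO)."""
    stack, sequence = [], []
    for i in range(1, n + 1):
        stack.append(i)
        sequence.append(f"A{i}")
    while stack:
        sequence.append(f"B{stack.pop()}")      # last stored A is matched first
    return sequence

def queue_cross_serial(n):
    """Generate A1..An then B1..Bn by dequeuing stored A items in order (FIFO)."""
    queue, sequence = deque(), []
    for i in range(1, n + 1):
        queue.append(i)
        sequence.append(f"A{i}")
    while queue:
        sequence.append(f"B{queue.popleft()}")  # first stored A is matched first
    return sequence

def queue_center_embedded(n):
    """Generate a center-embedded sequence from a FIFO-style store by searching
    the stored list for the item that must be produced next (an iterative
    search over the stored list, as in the abstract's queue account)."""
    store = list(range(1, n + 1))
    sequence = [f"A{i}" for i in store]
    while store:
        # scan the stored list to find the most recently stored unmatched item
        target = max(store)
        store.remove(target)
        sequence.append(f"B{target}")
    return sequence

if __name__ == "__main__":
    print(stack_center_embedded(3))   # ['A1', 'A2', 'A3', 'B3', 'B2', 'B1']
    print(queue_cross_serial(3))      # ['A1', 'A2', 'A3', 'B1', 'B2', 'B3']
    print(queue_center_embedded(3))   # ['A1', 'A2', 'A3', 'B3', 'B2', 'B1']
```

The extra scanning in the last generator is one intuition for how center-embedded sequences could still be produced from a queue-like store, just less efficiently, which fits the abstract's report that the center-embedded grammar was harder to produce and showed different item-to-item touch times.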

Read Full Article (External Site)