Towards large language models with human-like episodic memory
Cognitive neuroscience research has made tremendous progress over the past decade in addressing how episodic memory (EM; memory for unique past experiences) supports our ability to understand real-world events. Despite this progress, we still lack a computational modeling framework that can generate precise predictions about how EM is used when processing high-dimensional naturalistic stimuli. Recent work in machine learning that augments large language models (LLMs) with external memory could potentially accomplish this, but current popular approaches are misaligned with human memory in various ways. This review surveys these differences, suggests criteria for benchmark tasks that would promote alignment with human EM, and ends with potential methods for evaluating predictions from memory-augmented models using neuroimaging techniques.