At the same time, there are real risks in treating simulated minds as equivalent to human ones. Models reflect the data, assumptions, and goals of their creators. If researchers lean on AI surrogates without careful calibration, they may mistake model artifacts for universal principles. That outcome would narrow the field, reinforcing familiar patterns and excluding perspectives that matter for a more inclusive account of human potential.

This article pushes beyond the hype to ask how AI surrogates might be used responsibly to expand the reach of cognitive science. It offers a roadmap for using simulations to reveal blind spots rather than entrench them, and it invites readers to rethink how tools shape what we count as knowledge. If you care about whether cognitive science will serve a diverse public and foster fairer, more robust theories of mind, the full piece is worth exploring.

Recent advances in artificial intelligence (AI) have sparked enthusiasm for using AI simulations of human research participants to generate new knowledge about human cognition and behavior. This vision of ‘AI Surrogates’ promises to enhance research in cognitive science by addressing longstanding challenges to the generalizability of human subjects research. AI Surrogates are envisioned as expanding the diversity of populations and contexts that we can feasibly study with the tools of cognitive science. Here, we caution that investing in AI Surrogates risks entrenching research practices that narrow the scope of cognitive science research, perpetuating ‘illusions of generalizability’ in which we believe our findings are more generalizable than they actually are. Taking the vision of AI Surrogates seriously helps illuminate a path toward a more inclusive cognitive science.
