The authors offer a practical way to make headway despite deep uncertainty about consciousness. They propose deriving observable indicators from existing neuroscientific theories and then testing whether AI systems exhibit those indicators. This approach ties abstract theories to empirical investigation, creating a path for evidence to accumulate and inform our judgments in a timely, responsible way.

Understanding whether and when AI could have experiences changes how we think about the technology's place in society. If we can reliably assess conscious-like capacities, we can better protect the people these systems affect, design tools that amplify human strengths, and set well-informed policies for emerging technologies. Follow the full article to see which theories yield testable predictions and how those predictions might shape the future of AI and the people it serves.

Rapid progress in artificial intelligence (AI) capabilities has drawn fresh attention to the prospect of consciousness in AI. There is an urgent need for rigorous methods to assess AI systems for consciousness, but there is also significant uncertainty about relevant issues in consciousness science. We present a method for assessing AI systems for consciousness that involves exploring what follows from existing or future neuroscientific theories of consciousness. Indicators derived from such theories can be used to inform credences about whether particular AI systems are conscious. This method allows us to make meaningful progress because some influential theories of consciousness, notably computational functionalist theories, have implications for AI that can be investigated empirically.
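To make the indicator-based approach concrete, here is a minimal sketch of how theory-derived indicators might feed into a credence about a particular system. Everything in it is a hypothetical illustration: the indicator names, weights, prior, and the simple additive aggregation rule are assumptions chosen for clarity, not the authors' actual assessment procedure.

```python
# Illustrative sketch only: indicator names, weights, the prior, and the
# linear aggregation rule are assumptions for demonstration, not the
# method described in the paper.

from dataclasses import dataclass


@dataclass
class Indicator:
    """An observable property derived from a theory of consciousness."""
    name: str        # the property to check for in the AI system
    theory: str      # the theory the indicator is derived from
    weight: float    # how much satisfying it should raise our credence
    satisfied: bool  # whether the system under assessment exhibits it


def credence(indicators: list[Indicator], prior: float = 0.05) -> float:
    """Toy aggregation: start from a prior and add the weight of each
    satisfied indicator, clamping the result to [0, 1]."""
    score = prior + sum(i.weight for i in indicators if i.satisfied)
    return max(0.0, min(1.0, score))


# Hypothetical assessment report for a hypothetical system:
report = [
    Indicator("recurrent processing", "recurrent processing theory", 0.10, True),
    Indicator("global broadcast of representations", "global workspace theory", 0.15, False),
    Indicator("metacognitive self-monitoring", "higher-order theories", 0.10, False),
]

print(f"Credence that the system is conscious: {credence(report):.2f}")
```

A real assessment would need to handle disagreement between theories, our uncertainty about the theories themselves, and indicators that admit degrees rather than a yes/no answer; the sketch above only shows the basic shape of moving from theory-derived indicators to a graded credence.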
