LLMs don’t know anything: reply to Yildirim and Paul

Published on September 20, 2024

In their recent Opinion in TiCS [1], Yildirim and Paul propose that large language models (LLMs) have ‘instrumental knowledge’ and possibly the kind of ‘worldly’ knowledge that humans do. They suggest that the production of appropriate outputs by LLMs is evidence that LLMs infer ‘task structure’ that may reflect ‘causal abstractions of… entities and processes in the real world’ [1]. While we agree that LLMs are impressive and potentially interesting for cognitive science, we resist this project on two grounds.