In a recent Opinion in TiCS [1], we asked: ‘What do large language models (LLMs) know?’. We answered by granting LLMs instrumental knowledge: that is, knowledge gained through using the instrument of next-word generation. We then explored how this type of instrumental knowledge could be related to the more ordinary kind of worldly knowledge exhibited […]
Published on September 20, 2024
In their recent Opinion in TiCS [1], Yildirim and Paul propose that large language models (LLMs) have ‘instrumental knowledge’ and possibly the kind of ‘worldly’ knowledge that humans do. They suggest that the production of appropriate outputs by LLMs is evidence that LLMs infer ‘task structure’ that may reflect ‘causal abstractions of… entities and processes […]
Published on September 20, 2024
Canonical cases of learning involve novel observations external to the mind, but learning can also occur through mental processes such as explaining to oneself, mental simulation, analogical comparison, and reasoning. Recent advances in artificial intelligence (AI) reveal that such learning is not restricted to human minds: artificial minds can also self-correct and arrive at new […]
Published on September 19, 2024