This divergence signals a turning point for vision science. To understand perception, researchers need models that reflect the constraints and goals of biological systems: limited energy, developmental learning, and the need to act in noisy, changing environments. Building models around these pressures means studying organisms and designing algorithms with biological realism rather than optimizing only for internet-scale accuracy. That shift will demand different data, different experiments, and a willingness to test models against brain activity and behavior at multiple scales.

For anyone curious about human potential and inclusive science, the lesson is practical. If neuroscience borrows wholesale from the latest engineering feats, it may miss mechanisms central to development, learning differences, and rehabilitation after injury. The article linked below digs into evidence and examples that illustrate where artificial and biological vision part ways, and it asks what a biologically grounded roadmap would look like.
Deep neural networks (DNNs) once showed increasing alignment with primate perception as they improved on vision benchmarks, raising hopes that advances in artificial intelligence (AI) would naturally yield better models of biological vision. However, we present accumulating evidence that this alignment is now plateauing, and in some cases worsening, as DNNs scale to human or even superhuman accuracy. This divergence between artificial and biological perception may reflect the acquisition of visual strategies distinct from those of primates, and it challenges the view that progress in AI will automatically translate into progress in neuroscience. We argue that vision science must chart its own course, developing algorithms grounded in biological visual systems rather than optimizing for internet data.
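The abstract does not specify how "alignment" is measured; one common proxy in this literature is representational similarity analysis (RSA), which compares how a model and a brain region organize the same set of stimuli. The sketch below is a minimal, illustrative example of that idea, not the authors' method, and the data in it are random placeholders.

```python
import numpy as np
from scipy.stats import spearmanr

def rdm(responses):
    """Representational dissimilarity matrix: 1 - Pearson correlation
    between response patterns for every pair of stimuli.
    responses: (n_stimuli, n_features) array of model activations
    or neural recordings."""
    return 1.0 - np.corrcoef(responses)

def rsa_alignment(model_features, neural_responses):
    """Spearman correlation between the upper triangles of the model
    and neural RDMs -- one common proxy for model-brain alignment."""
    iu = np.triu_indices(model_features.shape[0], k=1)
    rho, _ = spearmanr(rdm(model_features)[iu], rdm(neural_responses)[iu])
    return rho

# Illustrative random data: 100 stimuli, 512 model units, 80 recorded neurons.
rng = np.random.default_rng(0)
model_features = rng.normal(size=(100, 512))
neural_responses = rng.normal(size=(100, 80))
print(f"RSA alignment: {rsa_alignment(model_features, neural_responses):.3f}")
```

Under a metric like this, "plateauing alignment" means that further gains in benchmark accuracy no longer raise the model-brain correlation.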