Arguments against strong modularity force us to reexamine assumptions about where and how perceptual coherence arises. Laboratory measures, brain stimulation, and neural-network models each offer pieces of the story, but none captures the full everyday task of parsing cluttered, changing scenes. Reconciling these diverse findings requires attention to timescales, connectivity, and the tasks animals face in natural environments. Those factors shape whether neural activity appears modular or broadly distributed, and whether binding seems like a separate operation or an emergent property.

If you care about human potential, growth, and inclusivity, the stakes go beyond academic taxonomy. How we understand feature integration affects rehabilitation after brain injury, the design of assistive technology, and the fairness of vision systems that serve diverse users. Read on to see how modern evidence reshapes old puzzles, and to consider how those shifts might change the tools and therapies that support people with different visual and cognitive profiles.
In their recent article [1], Scholte and de Haan argue, contrary to the classical view, that the visual cortex is not organized into separate modules that process individual features (e.g., color in V4 and motion in V5/MT). In the absence of such a modular organization, they argue, the problem of binding separate features into coherent object representations (the binding problem) does not arise. A recent commentary by Roelfsema and Serre [2] has already countered that the case against modularity is not as strong as Scholte and de Haan claim, and has highlighted empirical evidence for binding mechanisms in both the visual cortex and artificial neural networks.