Thinking about syntax as dependency relations makes several puzzles easier to approach. It offers a straightforward account of why certain word orders recur across unrelated languages and supplies testable predictions about which sentence patterns should be harder or easier for readers and listeners. The claim also connects to modern machine learning: large language models may reach apparent syntactic skill by picking up those same dependency patterns from examples, without needing hand-coded hierarchical templates.
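To make that concrete, here is a minimal sketch of a dependency analysis (the sentence, indices, and relation labels are illustrative choices of mine, not taken from the article): every word is linked to exactly one head word, and the whole syntactic structure is nothing more than that set of word-to-word links.

```python
# A minimal, hand-annotated dependency analysis of one sentence.
# Each word points to its head (None marks the root); the labels
# (det, nsubj, obj, ...) follow common dependency-grammar conventions.

sentence = ["the", "cat", "chased", "a", "mouse"]

# (dependent_index, head_index, relation); indices are 0-based.
dependencies = [
    (0, 1, "det"),     # "the"   -> "cat"
    (1, 2, "nsubj"),   # "cat"   -> "chased"
    (2, None, "root"), # "chased" is the root
    (3, 4, "det"),     # "a"     -> "mouse"
    (4, 2, "obj"),     # "mouse" -> "chased"
]

for dep, head, rel in dependencies:
    head_word = sentence[head] if head is not None else "ROOT"
    print(f"{sentence[dep]:>6} --{rel}--> {head_word}")
```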

If you care about human potential and equitable access to language learning, this perspective shifts the terrain for educators and technologists. It invites a set of practical questions: how can teaching and assessment spotlight dependency patterns, which learners find such patterns intuitive, and how might models trained on such patterns either help or mislead? Read the full article to see how this streamlined theory links data from psycholinguistics, typology, and AI, and to explore what it could mean for making language learning and language technology more inclusive.

The syntax of human languages has long been argued to be complex and even unlearnable from the input alone. However, the success of large language models (LLMs) has challenged this idea. I argue for a simple view of syntax, on which the syntax of a language is just a set of dependency rules, with no phrase structure or transformation rules (constructs central to Chomsky’s transformational grammar). This approach accounts for diverse phenomena in human language processing and explains cross-linguistic word order universals. Moreover, it better explains the human data in cases that differentiate the two accounts, and it eliminates the syntax learnability problem. I speculate that LLMs, like children, learn a dependency grammar from linguistic patterns, which would explain their impressive syntactic competence.
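As a rough sketch of what "syntax as a set of dependency rules" could mean computationally (the rule format, part-of-speech tags, and direction encoding below are my own simplifications, not the article's formalism), a grammar can be written as a set of licensed head-dependent pairings, and a sentence analysis is well-formed when every arc matches some rule:

```python
# Toy sketch: a grammar is just a set of dependency rules, where each
# rule (an assumption of mine, for illustration) licenses a head
# part-of-speech, a dependent part-of-speech, and a direction.

RULES = {
    ("VERB", "NOUN", "left"),   # subject noun precedes its verb head
    ("VERB", "NOUN", "right"),  # object noun follows its verb head
    ("NOUN", "DET", "left"),    # determiner precedes its noun head
}

def licensed(arcs, tags):
    """Check every (head, dependent) arc against the rule set."""
    for head, dep in arcs:
        direction = "left" if dep < head else "right"
        if (tags[head], tags[dep], direction) not in RULES:
            return False
    return True

tags = ["DET", "NOUN", "VERB", "DET", "NOUN"]  # "the cat chased a mouse"
arcs = [(1, 0), (2, 1), (2, 4), (4, 3)]        # (head_index, dependent_index)
print(licensed(arcs, tags))  # True: every arc matches some rule
```

On this picture, the word order universals mentioned above fall out of the direction component of the rules: a language whose rules consistently place heads before their dependents, or consistently after them, reproduces the familiar head-initial or head-final ordering patterns.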