The team recorded many short conversations while manipulating whether partners could see each other’s hands and whether topics were simple or complex. A substantial share of gestures could not be categorized with traditional labels, and many of these uncodable movements served interactive purposes. When visibility was blocked, people produced fewer interactive gestures during easy topics, while talk about complex subjects kept those gestures steady. These patterns point toward an automatic, embodied tendency to support understanding, not only an intentional effort to show or draw attention.

For readers curious about human potential, this research suggests our bodies play an underappreciated role in thinking and learning together. If gestures help stabilize meaning and persist even when unseen, they are part of how groups build shared understanding and include others in conversation. The full article explores what this implies for classrooms, remote meetings, and designs that aim to make communication more inclusive.
Abstract
Gestures are often categorized into types: iconics, metaphorics, and pantomimes (bearing representational relationships to spoken semantics), deictics (i.e., pointing), emblems (carrying their own conventional meanings), and beats (coinciding temporally with spoken content for emphasis). These categories originate from research that often uses unnaturalistic paradigms in which participants’ gestures are recorded during responses (e.g., retelling a narrative). Adopting these types implicitly requires a stance on why we gesture: a conscious aim to communicate, or an unconscious effort to orchestrate speech. Focus on them has left understudied the interactive role gestures can play, in which intersubjective acknowledgment and information transfer are central. This paper has two main aims: to profile interactive gesturing as a proportion of all gesturing, and to investigate its relevance to why humans gesture. We report data from 48 28-min dyadic conversations between a naïve participant and a confederate, varying interlocutors’ gesture visibility and conversation complexity. Our first, preregistered analysis coded for the six traditional gesture categories and left ∼28% of gestures uncodable. Our second analysis asked whether these were interactive; interactive gestures accounted for nearly 90% of the uncoded gestures and a quarter of the entire database. Occluding gesture visibility significantly decreased the number of interactive gestures participants made, driven by a drop during simple conversations; during complex conversations, interactive gesture frequency was stable across the visible and occluded conditions. Our data support both philosophies but advocate for a subconscious account: that we gesture from the intrinsic motivations to express ourselves and to be understood.