The Systematicity of Silent Gesture and Its Relation to Sign Languages

Published on August 28, 2023

Silent gestures have long been viewed as distinct from spoken and sign languages, lacking the compositional structure of linguistic systems. Recent research challenges this view by showing that people use consistent strategies when representing objects and events through silent gesture. In this study, untrained gesturers systematically manipulated the form of their gestures when representing events with and without a theme, that is, transitive versus intransitive events. By annotating handshape features and applying machine learning, the researchers could accurately predict transitivity distinctions in silent gestures. These findings suggest that a compositional structure underlies silent gestures and that handshape features may be drawn from cognitive systems involved in manual action. The authors further propose that nonsigners tend to assign event participants to each hand, a strategy observed across genetically and geographically unrelated sign languages, which points to a shared cognitive foundation rather than a product of any particular language's history. For more detailed information on this fascinating research, check out the full article!

Abstract
Silent gesture is not considered to be linguistic, on par with spoken and sign languages. It is claimed that silent gestures, unlike language, represent events holistically, without compositional structure. However, recent research has demonstrated that gesturers use consistent strategies when representing objects and events, and that there are behavioral and clinically relevant limits on what form a gesture may take to effect a particular meaning. This systematicity challenges a holistic interpretation of silent gesture, which predicts that there should be no stable form-meaning correspondence across event representations. Here, we demonstrate to the contrary that untrained gesturers systematically manipulate the form of their gestures when representing events with and without a theme (e.g., Someone popped the balloon vs. Someone walked), that is, transitive and intransitive events. We elicited silent gestures and annotated them for manual features active in coding transitivity distinctions in sign languages. We trained linear support vector machines to make item-by-item transitivity predictions based on these features. Prediction accuracy was good across the entire dataset, thus demonstrating that systematicity in silent gesture can be explained with recourse to subunits. We argue that handshape features are constructs co-opted from cognitive systems subserving manual action production and comprehension for communicative purposes, which may integrate into the linguistic system of emerging sign languages. We further suggest that nonsigners tend to map event participants to each hand, a strategy found across genetically and geographically distinct sign languages, suggesting the strategy’s cognitive foundation.

Read Full Article (External Site)
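
To make the classification step described in the abstract more concrete, here is a minimal sketch of how annotated handshape features might be fed to a linear support vector machine to make item-by-item transitivity predictions. This is not the authors' actual pipeline: the feature names, the synthetic data, and the use of scikit-learn with cross-validation are illustrative assumptions only.

    # Minimal sketch (hypothetical, not the study's code): predicting transitivity
    # of silent gestures from annotated handshape features with a linear SVM.
    import numpy as np
    from sklearn.svm import LinearSVC
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline
    from sklearn.model_selection import cross_val_predict
    from sklearn.metrics import accuracy_score

    rng = np.random.default_rng(0)
    n_items = 200  # one row per elicited gesture (synthetic placeholder data)

    # Hypothetical per-gesture handshape annotations, e.g. number of selected
    # fingers, degree of finger flexion, aperture change, thumb opposition.
    X = np.column_stack([
        rng.integers(1, 5, n_items),   # selected fingers (count)
        rng.random(n_items),           # flexion (0-1)
        rng.integers(0, 2, n_items),   # aperture change (binary)
        rng.integers(0, 2, n_items),   # thumb opposition (binary)
    ]).astype(float)

    # Labels: 1 = transitive event (has a theme), 0 = intransitive.
    y = rng.integers(0, 2, n_items)

    # Linear SVM with feature scaling; cross-validation yields an
    # item-by-item prediction for every gesture in the dataset.
    clf = make_pipeline(StandardScaler(), LinearSVC())
    preds = cross_val_predict(clf, X, y, cv=5)
    print(f"Cross-validated accuracy: {accuracy_score(y, preds):.2f}")

With real annotations in place of the synthetic arrays, prediction accuracy well above chance would indicate, as the abstract argues, that transitivity in silent gesture is recoverable from handshape subunits rather than from holistic gesture forms.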
