Welcome to Part 3 of my AAAS 2019 conference series! For more background on the conference, see the first post in this series.
This post is structured a little differently because the event was a topical lecture rather than a multi-presenter panel. The research was presented in its entirety by Dr. Susan Goldin-Meadow, Professor of Psychology at the University of Chicago, who also conducted much of it.
Gesture as a Shortcut to Thought
From the AAAS 2019 session The Gestural Origins of Language and Thought
New Jargon I Learned:
- Homesign: The rudiments of a gestural language that deaf children use to communicate before they learn a standardized sign language.
- Deaf children all over the world will invent their own homesigns!
- Gesture-Speech Mismatch: A phenomenon that results when someone’s speech is conveying one message but their gestures are conveying another.
Overall Theme: There is a fundamental difference between gesture and linguistic communication, even when that communication happens through signing. Gesture is more closely tied than speech to how our brains actually process material, and gestures both reflect and change what we know.
3 Interesting Study Topics in This Field:
- Homesign has been studied extensively by linguists and psychologists, as it is a unique window into how language develops. Researchers in this field once hypothesized that deaf children who use homesign were picking up the gestures of their hearing caretakers and incorporating those gestures into their homesign. However, if you map out the grammatical structures of homesign, you find that it has complex sentences and complex noun phrases that aren’t present in the gestures of their caretakers. This indicates that homesign is developed independently of hearing caretakers’ gestures.
- Another piece of the puzzle is the presence of distinct stages in a developing sign language. These stages were observed during the development of Nicaraguan Sign Language (which emerged only a few decades ago). First, every deaf child goes through the process of inventing their own homesign. At some point, that child encounters other deaf people who have homesigns of their own, and a process of collaborative invention begins. However, research has shown that some elements of language are never defined or delineated until the language has matured enough to be taught to a new generation. Transmitting the language to a new generation of signers actually produces linguistic alterations in the standardized form – some parts of the language are not invented until this stage!
- In another study, deaf children and hearing children were asked to solve a math problem, and many in both groups got it wrong. When asked to explain their reasoning to a researcher, most children used gestures along with their words to communicate their thought process. In most children there was a gesture-speech match – their gestures illustrated the same method their words described. But in some children, the words described the (incorrect) method they had actually used, while the gestures showed a different, correct method! When both groups were then taught how to solve the problem correctly, the children with a gesture-speech mismatch solved the next problem correctly at much higher rates than those with a gesture-speech match – and deaf students showed the same results. There are several possible explanations for this behaviour: gesture may be deeper-seated than language, linking the concrete action to its representation better than words can. The mismatch could also be a tell-tale sign of low confidence in the answer, producing students who are more willing to learn.
Fun Fact I Learned: Gesturing is natural even for people who are blind (and have never seen another person gesture in their life) or deaf (in an act entirely separate from signed communication).
Application: These studies (especially the one on gesture-speech mismatches) can help us improve education. Gesture lets us express ideas in an imagistic manner, while words let us express them in a categorical manner, and understanding a topic on both levels leads to deeper learning. From what I gathered here, letting students learn kinesthetically (e.g., practicing explaining topics at a blackboard, a setting that invites natural gesturing) may prove more effective than purely stationary methods.