
Autistic people demonstrate speech rhythm differences that are consistent across languages, study finds




According to research using machine learning, some of the speech variations linked to autism are universal across languages while others are language-specific. The study, published in the journal PLOS One, compared groups of Cantonese and English speakers.

Differences in speech prosody frequently accompany autism spectrum disorder (ASD). Speech prosody refers to linguistic features, such as rhythm and intonation, that allow us to convey meaning and emotion with our words. Unusual speech prosody can hamper a person's communication and social skills, for instance by causing them to be misunderstood or misinterpreted by others. The cause of these speech differences, frequently observed in autistic people, remains unclear.

Study author Joseph C. Y. Lau and his team sought to clarify this issue by examining prosodic traits linked to autism in two typologically distinct languages.

"As a multilingual speech scientist who was raised abroad, I've always been fascinated by how exposure to many cultures and languages shapes people. A cross-cultural and cross-linguistic approach might provide us with a wealth of unique insights when studying autism," said Lau, a research assistant professor at Northwestern University and a member of Molly Losh's Neurodevelopmental Disabilities Laboratory.

The majority of research in this area has focused on English-speaking populations, yet prosody differs between languages, and there is evidence that some prosodic traits linked to autism are language-specific. Lau and his colleagues therefore examined which characteristics of speech prosody consistently correlate with autism across languages and which do not.

Cantonese speakers from Hong Kong and native English speakers from the United States participated in the research. In the English-speaking group, 55 participants were autistic and 39 were neurotypical; the Cantonese group included 28 autistic and 24 neurotypical participants. Each participant was asked to narrate the story of a wordless picture book. Their speech was recorded, transcribed, and then segmented into individual utterances for further analysis.

From these story narrations, the researchers used software to extract measures of speech intonation and rhythm. Intonation refers to variations in voice pitch, while rhythm refers to variations in the timing and volume of speech. The researchers then applied machine learning, training algorithms on these features to try to distinguish participants with autism spectrum disorder from those with typical development.
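The general shape of such an analysis can be sketched in a few lines. The snippet below is a minimal illustration, not the study's actual pipeline: the synthetic feature values, the three-feature representation, and the choice of a linear support vector machine are all assumptions made for the example. It shows the core idea of training a classifier on per-utterance prosodic features and measuring how well the features separate the two groups.

```python
# Illustrative sketch: classify speakers from prosodic summary features.
# The features and classifier here are assumptions, not the paper's method.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic stand-in data: one row per speaker, columns are hypothetical
# rhythm/intonation summary features (e.g., timing variability measures).
n_per_group = 50
features_asd = rng.normal(loc=0.6, scale=0.1, size=(n_per_group, 3))
features_nt = rng.normal(loc=0.5, scale=0.1, size=(n_per_group, 3))
X = np.vstack([features_asd, features_nt])
y = np.array([1] * n_per_group + [0] * n_per_group)  # 1 = ASD, 0 = neurotypical

# Cross-validated accuracy indicates how well the features distinguish
# the groups; chance level here is 0.5.
clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
scores = cross_val_score(clf, X, y, cv=5)
print(f"mean cross-validated accuracy: {scores.mean():.2f}")
```

In the study itself, the question was whether such a classifier performs above chance within each language and, crucially, on the combined Cantonese-English data, which is what would indicate a cross-linguistically consistent feature.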

The results showed that speech rhythm accurately distinguished neurotypical from autistic participants in both the English and Cantonese samples. Speech intonation, however, distinguished autistic from neurotypical participants only in the English sample. Moreover, when the researchers analyzed a combined dataset of Cantonese and English speakers, speech rhythm was the only feature that accurately distinguished autistic from neurotypical participants.

These findings suggest that the machine learning system could distinguish autistic from neurotypical speakers based on characteristics of speech rhythm. This, the authors note, is consistent with other studies reporting that autistic people exhibit distinct stress patterns, speaking rates, and volume levels. Furthermore, the results imply that these differences hold across two very different languages.

"We can see there are features that are strikingly common in autistic individuals from different parts of the world; meanwhile, there are also some other features of autism that are manifested differently, as shaped by their language and culture," Lau told PsyPost. "We use an AI-based analytic method to study features of autism across languages in an objective and holistic way.

"In our study, we discovered that speech rhythm, or the regularity of speech patterns, exhibited such similarities, while intonation, or the fluctuation of pitch when we talk, revealed differences across linguistic boundaries. Finding shared characteristics may open a door to investigating the complex molecular underpinnings of autism, which profoundly impact language and behavior in a consistent manner among autistic individuals worldwide. On the other hand, aspects that differ between cultures or languages may represent characteristics of autism that are more easily modified by experience, which may reflect prospective areas that might benefit from clinical intervention."

Notably, intonation indicated an autism diagnosis only in the English sample, not in the Cantonese group. The study's authors speculate that this is because Cantonese is a tone language, in which pitch can alter the meaning of words. The pervasive use of linguistic pitch in tone languages, they suggest, may have a compensating effect that lessens the intonational differences seen in ASD. This could mean that speech therapies focusing on pitch and intonation would help autistic people who speak non-tonal languages, though more research is needed in this area.

According to Lau, "We feel the cross-linguistic similarity of rhythmic patterns of autistic speech found in our work points to an interesting follow-up line of inquiry: whether rhythm is a potentially universal component of ASD that is less malleable by experience than language. Testing this theory will require many additional languages, including languages from other language families around the world, and a considerably bigger sample size. We look forward to growing our research initiatives and building global partnerships that would eventually enable such an examination."

The researcher continued, "Although the direct benefits of this study to the autism community may appear modest at this time, we do hope that, beyond its theoretical implications, this machine learning study can inspire future scientific and technological advancements that can provide more direct benefits to the autism community, such as in the area of AI-assisted healthcare."


The study, “Cross-linguistic patterns of speech prosodic differences in autism: A machine learning study”, was authored by Joseph C. Y. Lau, Shivani Patel, Xin Kang, Kritika Nayar, Gary E. Martin, Jason Choy, Patrick C. M. Wong, and Molly Losh.