Artificial Intelligence Detects Early Alzheimer’s In Voice

New technologies that can capture subtle changes in a patient’s voice may help physicians diagnose cognitive impairment and Alzheimer’s disease before symptoms begin to show, according to a UT Southwestern Medical Center researcher who led a study published in the Alzheimer’s Association publication Diagnosis, Assessment & Disease Monitoring.

“Prior to the development of machine learning and NLP, the detailed study of speech patterns in patients was extremely labor-intensive and often not successful, because the changes in the early stages [of Alzheimer’s] are frequently undetectable to the human ear,” said lead study author Ihab Hajjar, MD, a professor of neurology at UT Southwestern’s Peter O’Donnell Jr. Brain Institute, in a news release. “This novel method of testing performed well in detecting those with mild cognitive impairment and more specifically in identifying patients with evidence of Alzheimer’s disease — even when it cannot be easily detected using standard cognitive assessments.”

Dr. Hajjar and his collaborators collected data on 206 people aged 50 and older: 114 who met the criteria for mild cognitive impairment and 92 who were cognitively unimpaired. Each person’s cognitive status was determined through standard testing. Study subjects were also recorded as they gave a one- to two-minute description of a colorful circus procession. Using sophisticated computer analysis of these recordings, scientists could identify and evaluate specific speech features, including how fast a person talks, pitch, voicing of vowel and consonant sounds, grammatical complexity, speech motor control, and idea density.
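The study’s actual analysis pipeline is not published as code, but one of the acoustic features it mentions — pitch — is commonly estimated from a recording by autocorrelation. The sketch below is a minimal, hypothetical illustration of that general technique on a synthetic signal; it is not the researchers’ method, and the function name and parameters are invented for the example.

```python
import numpy as np

def estimate_pitch(signal, sr, fmin=75.0, fmax=300.0):
    """Rough fundamental-frequency (pitch) estimate in Hz via autocorrelation.

    Searches for the lag with the strongest self-similarity within the
    typical range of human voice pitch (fmin..fmax Hz).
    """
    sig = signal - signal.mean()                      # remove DC offset
    corr = np.correlate(sig, sig, mode="full")[len(sig) - 1:]  # lags 0..N-1
    lo = int(sr / fmax)                               # shortest plausible period
    hi = int(sr / fmin)                               # longest plausible period
    lag = lo + int(np.argmax(corr[lo:hi]))            # best-matching period
    return sr / lag

# Synthetic one-second "voice": a pure 150 Hz tone at a 16 kHz sample rate.
sr = 16000
t = np.arange(sr) / sr
voice = np.sin(2 * np.pi * 150 * t)
print(estimate_pitch(voice, sr))                      # close to 150 Hz
```

Real speech analysis would add framing, voiced/unvoiced detection, and more robust pitch trackers, but the core idea — measuring periodicity in the waveform — is the same.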

The research team also examined cerebrospinal fluid samples for amyloid beta protein. One form called amyloid beta peptide 42, for example, is especially toxic, according to the National Institute on Aging, and plays a significant role in Alzheimer’s disease. Such samples were available from 40 of the cognitively unimpaired and 63 of the impaired participants.