Scientists can now identify signs of depression just by listening to your voice

Researchers from the University of Southern California (USC) have developed a tool called SimSensei that screens for depression by listening to your voice and analysing your speech patterns. The tool uses a machine learning algorithm to find vowel sounds associated with depression. Related research on language use has found similar clues: general negativity, with a rise in the number of words like ‘hate’, ‘miserable’ and ‘disappointed’, increased use of the word ‘I’, and a jump in the number of expletives can all signal that a new mum will suffer post-natal depression. SimSensei is designed to work alongside doctors as they assess patients.
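To make the language cues above concrete, here is a toy sketch that counts negative words and first-person pronouns in a transcript. The word list and the scoring are invented for illustration; this is not the SimSensei method, just a minimal example of tallying lexical markers.

```python
# Illustrative sketch only: count simple linguistic cues in a transcript.
# The marker word list below is a hypothetical example, not a clinical one.
NEGATIVE_WORDS = {"hate", "miserable", "disappointed"}

def lexical_markers(text: str) -> dict:
    """Tally negative words and first-person pronouns in a transcript."""
    words = [w.strip(".,!?;:'\"").lower() for w in text.split()]
    return {
        "negative": sum(w in NEGATIVE_WORDS for w in words),
        "first_person": sum(w == "i" for w in words),
        "total": len(words),
    }

sample = "I hate this. I feel miserable and disappointed."
print(lexical_markers(sample))
```

A real system would of course use far richer features than raw word counts, but the idea of turning free speech into countable markers is the same.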

SimSensei uses an algorithm developed in 1967 called the k-means algorithm, which partitions large data sets into clusters based on average values; those clusters can then be compared against ‘normal’ speech patterns, according to Michael Byrne.


In a new study, the researchers ran their algorithm on recordings from 253 volunteers, who were also asked to fill out a self-assessment questionnaire. “The experiments show a significantly reduced vowel space in subjects that scored positively on the questionnaires,” the authors report. “These findings could potentially support treatment of affective disorders, like depression and PTSD in the future.”
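The ‘vowel space’ in the finding above is often approximated as the triangle spanned by the corner vowels /a/, /i/ and /u/ in (F1, F2) formant space, whose area can be computed with the shoelace formula. The sketch below illustrates that idea; the formant values are invented for illustration and are not taken from the study.

```python
# Illustrative only: approximate vowel space as the area of the triangle
# spanned by the corner vowels /a/, /i/, /u/ in (F1, F2) formant space.
def triangle_area(a, i, u):
    """Shoelace formula for the triangle with vertices a, i, u."""
    (x1, y1), (x2, y2), (x3, y3) = a, i, u
    return abs(x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2)) / 2

# Hypothetical average formants in Hz (F1, F2); not from the study.
full = triangle_area(a=(850, 1200), i=(300, 2300), u=(350, 800))
reduced = triangle_area(a=(700, 1250), i=(450, 1900), u=(450, 1000))
print(full > reduced)  # → True: a 'reduced' vowel space has smaller area
```

A smaller triangle means the speaker's vowels sound more alike, which matches the flattened, monotone articulation the study associates with depression.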

A study from 2009 revealed that only about 50 percent of people with depression were correctly diagnosed by their doctors, so having a digital assistant like SimSensei on hand could be hugely helpful for doctors and researchers alike.

The SimSensei team says it wants to use the algorithm to see whether disorders such as schizophrenia and Parkinson’s can be diagnosed as well – so it’s possible AI-assisted diagnoses could become an important part of human health services in the future.

Source – IEEE Transactions on Affective Computing
