AI voice analysis gives suicide hotline workers an emotional dashboard

An AI model accurately tracks emotions like fear and worry in the voices of crisis line callers, according to new research. The model’s developer hopes it can provide real-time assistance to phone operators as they work to prevent suicides.

Screening callers to crisis and suicide helplines for their current level of suicide risk is crucial to detecting and preventing suicide.

Speech conveys useful information about a person’s mental and emotional state, providing clues about whether they’re sad, angry or fearful. Research into suicidal speech dates back more than 30 years, with studies identifying objective acoustic markers that can be used to differentiate various mental states and psychiatric disorders, including depression.

However, identifying suicide risk from someone’s speech can be challenging for the human listener, because callers to these hotlines are often in states of acute emotional distress and their speech characteristics can change rapidly.

Perhaps a real-time emotional ‘dashboard’ might help. Alaa Nfissi, a PhD student from Concordia University in Montreal, Canada, has trained an AI model in speech emotion recognition (SER) to aid in suicide prevention. He presented a paper about his work at this year’s IEEE International Conference on Semantic Computing in California, where it won the award for Best Student Paper.

“Traditionally, SER was done manually by trained psychologists who would annotate speech signals, which requires high levels of time and expertise,” said Nfissi. “Our deep learning model automatically extracts speech features that are relevant to emotion recognition.”
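For readers curious what automated feature extraction looks like in practice, here is a minimal, purely illustrative Python sketch: it pulls a few classic acoustic markers (MFCCs, pitch, energy) from short audio clips with the librosa library and fits a generic scikit-learn classifier. The file names, emotion labels and classifier choice are assumptions for demonstration only; Nfissi's deep learning model learns its features end to end rather than relying on hand-picked ones like these.

    # Illustrative sketch only -- not Nfissi's model. File names and labels are hypothetical.
    import numpy as np
    import librosa
    from sklearn.ensemble import RandomForestClassifier

    def acoustic_features(path):
        """Return a fixed-length vector of simple acoustic markers for one clip."""
        y, sr = librosa.load(path, sr=16000)
        mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)   # spectral shape
        f0 = librosa.yin(y, fmin=50, fmax=500, sr=sr)        # pitch contour
        rms = librosa.feature.rms(y=y)                       # loudness / energy
        return np.concatenate([mfcc.mean(axis=1), [np.nanmean(f0)], [rms.mean()]])

    # Hypothetical labelled training clips.
    clips = ["clip_angry.wav", "clip_fearful.wav", "clip_sad.wav", "clip_neutral.wav"]
    labels = ["angry", "fearful", "sad", "neutral"]

    X = np.stack([acoustic_features(p) for p in clips])
    clf = RandomForestClassifier().fit(X, labels)

    # Estimate the emotion of a new speech segment.
    print(clf.predict([acoustic_features("new_caller_segment.wav")]))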

Continue here: New Atlas