AI Models Rival Human Ability to Recognize Emotions in Voice
- Machine learning models can identify emotions from 1.5-second audio clips with accuracy comparable to humans, challenging the belief that emotion recognition is a solely human capability.
- Deep neural networks (DNNs) and a hybrid model (C-DNN) were the most effective at classifying emotions such as joy, anger, sadness, and fear.
- The findings demonstrate the potential for real-time emotion recognition technology in applications such as therapy and communication devices.
- Limitations include the use of actor-voiced sentences, so further research on spontaneously expressed emotions is needed.
- The models show promise for integration into artificial intelligence systems to improve emotional intelligence and human-computer interaction.
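To make the pipeline concrete, here is a minimal toy sketch of emotion classification from short audio clips. This is not the study's model: the actual work used DNNs and a hybrid C-DNN on real acoustic features, whereas this sketch trains a simple softmax classifier on synthetic stand-in feature vectors (hypothetical clusters imitating per-emotion acoustic statistics). All names, feature dimensions, and parameters below are illustrative assumptions.

```python
import math
import random

# The four emotion classes mentioned in the findings.
EMOTIONS = ["joy", "anger", "sadness", "fear"]

# A 1.5-second clip at a common 16 kHz sample rate (assumed rate).
SAMPLE_RATE = 16_000
CLIP_SAMPLES = int(1.5 * SAMPLE_RATE)  # 24000 samples

random.seed(0)

def synth_features(label, n=40, dim=8):
    """Hypothetical per-emotion feature cluster: stand-ins for
    acoustic statistics (e.g. pitch/energy summaries) a real
    front end would extract from each clip."""
    center = [math.sin(label * 1.7 + d) for d in range(dim)]
    return [[c + random.gauss(0, 0.3) for c in center] for _ in range(n)]

# Build a toy dataset: 40 synthetic clips per emotion.
X, y = [], []
for lab in range(len(EMOTIONS)):
    for feats in synth_features(lab):
        X.append(feats)
        y.append(lab)

dim, k = len(X[0]), len(EMOTIONS)
W = [[0.0] * dim for _ in range(k)]  # one weight row per emotion
b = [0.0] * k

def scores(x):
    return [sum(w * xi for w, xi in zip(W[c], x)) + b[c] for c in range(k)]

def softmax(s):
    m = max(s)
    e = [math.exp(v - m) for v in s]
    z = sum(e)
    return [v / z for v in e]

# Train with plain per-sample gradient descent on cross-entropy loss.
lr = 0.1
for _ in range(200):
    for x, t in zip(X, y):
        p = softmax(scores(x))
        for c in range(k):
            g = p[c] - (1.0 if c == t else 0.0)
            for d in range(dim):
                W[c][d] -= lr * g * x[d]
            b[c] -= lr * g

def predict(x):
    s = scores(x)
    return EMOTIONS[s.index(max(s))]

acc = sum(predict(x) == EMOTIONS[t] for x, t in zip(X, y)) / len(X)
```

Because the synthetic clusters are well separated, even this linear classifier reaches high training accuracy; the study's DNN and C-DNN architectures are needed for the far messier features of real speech.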