American company Hume has launched a voice interface capable of empathy, powered by a language model built specifically for the task.
This conversational AI generates voice responses that adapt to the mood of the person speaking to it, delivering a far more natural experience than existing voice assistants.
Hume presents its Empathic Voice Interface (EVI) as a first-of-its-kind conversational AI with emotional intelligence: it analyzes vocal tone to detect when users have finished speaking and optimizes its responses for user satisfaction.
Instead of the mechanical, less natural responses you get from something like ChatGPT's voice feature, here you have a more immersive conversation, one that also takes into account how you say things, not just what you say.
Every time you speak to it, the interface analyzes your voice, detecting whether you sound insistent, interested, focused, bored, calm or satisfied, and responds in a matching tone. The result gives the impression of chatting with a human. The curious can try it out for themselves (in English only) at demo.hume.ai.
While it is already available for testing, the technology will officially launch in April 2024, at which point developers will be able to integrate it into their own projects. Its potential uses are easy to imagine.