OpenAI rolls out advanced Voice Mode and no, it won’t sound like ScarJo



OpenAI has started rolling out its advanced Voice Mode feature. Starting today, a small number of paying ChatGPT users will be able to have a tête-à-tête with the AI chatbot. All ChatGPT Plus members should receive access to the expanded toolset by the fall of this year.

In an announcement on X, the company said this advanced version of its Voice Mode “offers more natural, real-time conversations, allows you to interrupt anytime, and senses and responds to your emotions.”

Support for voice conversations first arrived in ChatGPT last September, and the more advanced version got a public demo in May. GPT-4o handles voice with a single multimodal model rather than the three separate models chained together in the previous audio feature, which cuts down the latency of conversations with the chatbot.
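For a rough sense of why collapsing the pipeline helps, here is a minimal sketch comparing the two setups. The function names and timings are invented stand-ins, not OpenAI's actual components; the point is only that a chained pipeline pays the sum of every stage's delay, while a single end-to-end model pays one.

```python
import time

# Hypothetical stand-ins for the three stages of the older voice pipeline
# (speech-to-text, text reasoning, text-to-speech). All delays are made up
# purely to illustrate how per-stage latency adds up end to end.
def transcribe(audio: str) -> str:
    time.sleep(0.3)          # simulated speech-to-text delay
    return f"text({audio})"

def generate_reply(text: str) -> str:
    time.sleep(0.5)          # simulated language-model delay
    return f"reply({text})"

def synthesize(text: str) -> str:
    time.sleep(0.3)          # simulated text-to-speech delay
    return f"audio({text})"

def pipeline_voice(audio: str) -> str:
    """Three chained models: total latency is the sum of all stages."""
    return synthesize(generate_reply(transcribe(audio)))

def multimodal_voice(audio: str) -> str:
    """One model takes audio in and produces audio out directly."""
    time.sleep(0.6)          # simulated single-model delay
    return f"audio(reply({audio}))"

if __name__ == "__main__":
    for fn in (pipeline_voice, multimodal_voice):
        start = time.perf_counter()
        fn("user_question.wav")
        print(f"{fn.__name__}: {time.perf_counter() - start:.2f}s")
```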

OpenAI drew a lot of criticism at the May demo for debuting a voice option that sounded uncannily like Scarlett Johansson, who voiced the AI character Samantha in Spike Jonze’s film Her. The release of advanced Voice Mode was delayed shortly after the backlash, and although the company insisted the voice actor was not imitating Johansson’s performance, the similar-sounding voice has since been removed.
