After David DuBois, a US Army veteran and retired director of physical security for the US Capitol Police, was diagnosed with ALS in 2022, his family went more than two years without hearing him speak.
Voice AI company WellSaid used less than 40 minutes of old voicemails and videos to create a custom voice that matched DuBois’ authentic one, complete with emotion and humor, including his favorite phrase, “you’re killin’ me, Smalls.”
Ultimately, WellSaid says, it wants artificial intelligence to augment humanity rather than replace humans.
While working at Paul Allen’s Ai2, a nonprofit research institute that explores the possibilities of artificial intelligence, Michael Petrochuk developed an algorithm for a realistic AI voice just three months after graduating from the University of Washington. There, Petrochuk met Matt Hocking, WellSaid’s future co-founder.
Petrochuk, who identifies as autistic, was inspired to turn his challenges into opportunities. WellSaid was one of the ways he sought to make a positive impact on the world.
“I was born with more wires in my brain than other people. This means my brain works overtime, thinking, processing and feeling,” Petrochuk said. “I often bring ideas that many people miss. I notice patterns throughout my work that I draw on to make critical insights.”
WellSaid is pushing hard for AI responsibility initiatives in the text-to-speech space. By the time the risks of AI became widely reported, it had already established and run programs around revenue sharing, content moderation and voice actor anonymity.
When WellSaid launched in 2018, it was originally built to help educators create informative content. Today, WellSaid is used to inform and support millions of people, including voice actors, elderly individuals, disabled customers and related organizations such as Audible Sight, which uses a realistic, human-like voice to make closed captioning accessible to the blind.
WellSaid’s competitors and differentiation
WellSaid’s competitors include ElevenLabs and Murf AI, but WellSaid sets itself apart with a tightly controlled training model that doesn’t use public, open-source data.
Companies like ElevenLabs were founded on the desire to translate text and languages into seamless, realistic speech. ElevenLabs, like WellSaid, works with patients who have degenerative diseases such as ALS. And Murf AI has a Voice Data Sourcing option that pays you for submitting your voice recording.
But with open-source data, whether you pay $5 to try out an audio doppelganger or get paid for submitting your voice recording, autonomy over your likeness isn’t necessarily an option.
After all, it’s only fair to worry about the misuse of AI-generated voices. Remember the OpenAI-Scarlett Johansson case? Or the AI-faked robocall impersonations of President Joe Biden that the FCC declared illegal?
With the influx of media coverage and lawsuits over AI tools that break trust and safety policies, WellSaid’s private sourcing of data isn’t just a smarter decision; it’s a necessary one.
WellSaid says it doesn’t glean millions of voices from the internet, preferring instead to focus on its mission of AI for good: in this case, high-quality output and permission to use the voices on its platform.
“All of our voices are sourced actors,” said Cook, WellSaid’s CEO. “We’ve recorded them in professional environments. We’ve vetted them, we have their approval. We pay them, we pay them a royalty. We pay them for their time and training. We pay them a royalty ongoing.”
How ‘ethical’ AI can support the human experience
When it comes to using AI, there remains the question of whether people want artificial intelligence tools in their daily lives. According to a CNET survey published in September based on data collected by YouGov, 25% of respondents said they don’t find AI tools helpful and don’t want them integrated in their phones.
And 34% said they’re concerned about privacy when using AI on devices, while 45% said they wouldn’t pay for a subscription to AI tools.
Cook says that comfort and trust play a huge role in how humans decide to interact with AI. He believes AI will end up being woven into daily life and will allow people to interact with the technology and make decisions about it from personal experience.
So is there a world where ethical AI exists — comfortably — in the homes of everyday humans?
“Thinking about it as a tool to help us do things that we can’t do today, I feel pretty good about the role of AI in preventing illness and spreading disease or high-quality health care to those who are disadvantaged or removed,” Cook said.
“I think we’ll look back in 30, 40, 50 years and say, ‘This was groundbreaking.’ This is a really big deal to make life better for a lot of people.”