Apple’s big AI rollout at WWDC will reportedly focus on making Siri suck less



Apple will reportedly focus its first round of generative AI enhancements on beefing up Siri’s conversational chops. Sources speaking with The New York Times say company executives realized early last year that ChatGPT made Siri look antiquated. The company allegedly decided that the large language model (LLM) principles behind OpenAI’s chatbot could give the iPhone’s virtual assistant a much-needed shot in the arm. So Apple will reportedly roll out a new version of Siri powered by generative AI at its WWDC keynote on June 10.

Apple Senior Vice Presidents Craig Federighi and John Giannandrea reportedly tested ChatGPT for weeks before the company realized that Siri looked outdated. (I would argue that the epiphany came about a decade late.) What followed was what The NYT describes as Apple’s “most significant reorganization in more than a decade.”

The company sees generative AI as a once-in-a-decade tentpole area worth shifting heaps of resources to address. You may recall the company canceled its $10 billion “Apple Car” project earlier this year. Apple reportedly reassigned many of those engineers to work on generative AI.

Apple executives allegedly fear AI models could eventually replace established software like iOS, turning the iPhone into “a dumb brick” by comparison. The clunky, awkward and overall unconvincing first wave of dedicated AI gadgets we’ve reviewed, like the Humane AI Pin and Rabbit R1, isn’t good enough to pose a threat. But that could change as software evolves, other smartphone makers incorporate more AI into their operating systems and other hardware makers have a chance to innovate.

So, at least for now, it appears Apple isn’t launching direct competitors to generative AI stalwarts like ChatGPT (words), Midjourney (images) or ElevenLabs (voices). Instead, it will start with a new Siri and updated iPhone models with expanded memory to better handle local processing. In addition, the company will reportedly add a text-summarizing feature to the Messages app.


Apple’s M4 chip (shown next to VP John Ternus) could help process local Siri requests. (Apple)

Apple’s first foray into generative AI, if The NYT’s sources are correct, sounds like less of an immediate threat to creators than some had imagined. The company ran a video plugging the new iPad Pro at its May iPad event, and the clip inadvertently embodied the (legitimate) fears of artists, musicians and other creators whose work AI models have trained on, and who stand to be replaced by those same tools as they become more normalized for content creation.

This week, Apple apologized for the ad and said it canceled plans to run it on TV.

Samsung and Google have already loaded their flagship phones with various generative AI features that go far beyond improving their virtual assistants. These include tools for editing photos, generating text and enhancing transcription (among other things). Those features typically rely on cloud-based servers for processing, whereas Apple will allegedly prioritize privacy by handling requests on-device. So the company will apparently start with a more streamlined approach that sticks to improving what’s already there, with most or all processing done locally.

The New York Times’ sources add that Apple’s culture of internal secrecy and privacy-focused marketing have stunted its AI progress. Former Siri engineer John Burkey told the paper that the company’s tendency to silo information between divisions has been another primary culprit in Siri’s failure to evolve far past where it stood at launch, a day before Steve Jobs died in 2011.


