AI overshadowed Pixel at the Pixel event

Google’s Tuesday event was ostensibly about Pixel hardware. Really, it was about AI.

Google’s Rick Osterloh made that clear from the moment he walked onstage, where his initial remarks focused a lot more on Google’s artificial intelligence efforts than devices:

A few months ago at Google I/O, we shared a broad range of breakthroughs to make AI more helpful for everyone. We’re obsessed with the idea that AI can make life easier and more productive for people. It can help us learn. It can help us express ourselves. And it can help us be more creative. The most important place to get this right is in the devices we carry with us every day. So we’re going to share Google’s progress in bringing cutting-edge AI to mobile in a way that benefits the entire Android ecosystem.

For the first 25 minutes of the show, Osterloh and his colleagues didn’t make any announcements about the Pixel 9 lineup, the Pixel Watch 3, or the Pixel Buds Pro 2. Instead, they highlighted things like Google’s investments in its tech stack and Tensor chips, how all six of its products with more than 2 billion monthly users (Search, Gmail, Android, Chrome, YouTube, and Google Play) harness the company’s Gemini AI models in some way, and how Gemini and Google’s AI tools are integrated with other Android phones that you can already buy. Even before showing demos on its phones, Google was showing its AI tools onstage on phones from Samsung and Motorola.

Google also used this pre-hardware section to show off what was arguably the most interesting segment of the event: a spotlight on Gemini Live, a tool that lets you have more natural back-and-forth conversations with Gemini for things like brainstorming or practicing for an interview. (To me, it felt like Google’s response to OpenAI’s impressive GPT-4o demo way back before I/O.) And Gemini Live isn’t even a Pixel-exclusive feature; it’s rolling out as of Tuesday for people subscribed to Gemini Advanced and using Android.

When Google finally got around to talking about its new hardware, AI was everywhere there, too. Gemini can respond to what’s on your phone screen. “Add Me” can insert the person taking a group photo into the picture. The Pixel Watch now uses AI to help detect a loss of pulse. Google even envisions that you’ll talk with Gemini Live while wearing the new Pixel Buds Pro 2.

And just when it felt like Osterloh was about to wrap up, he shared a few more AI announcements about things that are coming further down the line. Google plans to let you share your camera during a Gemini Live conversation so that Gemini can respond to what you’re looking at. You’ll be able to connect apps to Gemini Live, too. And Gemini will be able to make research reports by searching things on the web for you — a feature that Osterloh says is coming to Gemini Advanced users in the “coming months.”

If you’ve been following Google lately, this focus on AI isn’t a huge surprise. But with Tuesday’s event, it’s clear that Google views AI as the key competitive differentiator for its hardware and the best way to take on giants like Apple and Samsung. Based on what we saw, it sure seems like Google’s phones could have some more impressive AI tricks than what Apple is working on — and I’m sure Google has been happy to hear that Apple’s most advanced Apple Intelligence features aren’t expected to arrive until next year.

I’m still skeptical of Google’s AI flashiness. Sure, some of the photo features seem cool, but I don’t trust Gemini enough to have a full conversation with it. Will other people care? And will they care enough to buy Pixels or sign up for Gemini Advanced? I’m not so sure. But with Google’s new hardware launching over the course of this month and next, we won’t have to wait long to see if the company’s attention to AI will win over new buyers.
