On a recent weekday, I was stumped for ideas about what to make for dinner. So I opened my fridge, held the Camera Control button on my iPhone 16, snapped a photo and uploaded the image to ChatGPT and Google.
Both virtual assistants surprised me with their recommendations: Google suggested I make a salad with grapes and veggies, while ChatGPT noticed the pickles sitting on the top shelf and advised I add them to a wrap or sandwich.
Using your phone’s camera to find recipes may just be the start. If tech giants are to be believed, the cameras on our phones are going to play a much bigger role in everyday tasks. And soon.
Rather than just telling your phone what you want, you’ll be able to show your phone the world around you. Companies such as Apple, Google, OpenAI, Qualcomm and Meta seem to be moving in this direction, considering all of them introduced new camera-based AI features or concepts for smartphones and smart glasses in 2024.
Instead of just enabling you to snap photos and preserve memories, tech companies are exploring the idea of turning the camera into a sort of visual search engine. Point your device’s camera at a restaurant, for example, and it’ll pull up key details like its operating hours and photos of the food. In one of the biggest testaments to a camera-first future, Google, Samsung and Qualcomm unveiled Android XR in December. This new version of Android is designed to run on headsets and smart glasses and uses the camera and Google’s Gemini assistant to answer questions about your surroundings in real time.
It’s certainly a shift away from the touch-centric methods we use to operate our phones today. And while it may take some time to get used to — if consumers embrace it at all — analysts and tech firms believe it could represent the future of how we use our mobile devices.
“Camera and visual feedback, whether it’s uploading a picture of something or giving it your camera feed, is going to be really important going forward,” Google’s Seang Chau, vice president and general manager of the Android platform, said in a previous interview.
Your phone’s camera is evolving with AI
Generative AI chatbots had a breakthrough in 2023 thanks to OpenAI’s ChatGPT. In 2024, tech companies set the stage for the next phase of AI helpers: multimodal AI agents. In plain English, that refers to AI-powered virtual assistants that can understand multiple types of input, such as text, speech and images, and handle tasks on your behalf.
Imagine scanning a restaurant bill with your phone’s camera and asking a virtual helper to split the cost between the party and add a tip. That technology isn’t quite there yet, but mobile chipmaker Qualcomm pointed to it as an example of a scenario that could be possible in the near future as AI agents advance.
In 2024, the building blocks of these more futuristic assistants started falling into place. Both OpenAI and Google improved how their models and systems process multiple types of input. In December, OpenAI updated ChatGPT’s Advanced Voice Mode with the ability to share your phone’s video feed or screen with the digital assistant, so you can ask questions without having to upload photos. Android XR and Google’s Project Astra take that a step further by putting cameras closer to your line of sight, in headsets and glasses, so that Google’s assistant can do things like tell you about the landmark you’re viewing in Google Maps or provide a synopsis of the book you’re browsing in a bookshop.
In yet another sign that phone-makers are increasingly thinking of the camera as a discovery tool, Apple launched a new feature called Visual Intelligence in December. Only available on the iPhone 16 series, Visual Intelligence makes it possible to learn about the world around you by pressing the new Camera Control button.
Tap the button to launch the camera, and aim it at a point of interest to get more information about it. You can also snap a photo in this mode and send it to ChatGPT or Google for things like solving math problems or searching for products.
“Just imagine how many steps it saves us,” Nabila Popal, a senior director with the International Data Corporation’s data and analytics team, said in reference to camera-driven AI features. “Being able to research something, or find information on something, or add an event [to] a calendar without having to take those additional steps.”
But the question is whether people will actually want to use these features. A CNET survey conducted in collaboration with YouGov found that 25% of smartphone owners don’t find AI features useful. And while tech companies are leaning on AI to generate interest in new phones, it doesn’t seem to be enticing users to upgrade. The global smartphone market is expected to have grown by 6.2% year over year in 2024, according to the IDC, but growth is projected to slow in 2025 and beyond. And the IDC’s report doesn’t consider AI a driving force behind 2024’s surge in shipments.
That’s partially because consumers aren’t familiar with the technology. It’s also because many of these AI features are still new and don’t feel essential to our phones yet. The iPhone 16 lineup, which Apple positioned as the first phones “built for Apple Intelligence,” launched without those marquee features in September. Some of Apple Intelligence’s most significant additions, like ChatGPT integration, didn’t arrive until December.
It’s also uncertain how strong demand for the iPhone 16 has been since its launch. TF International Securities analyst Ming-Chi Kuo, who is known for making predictions about Apple products, reported in October that Apple had cut iPhone 16 orders by around 10 million units. A Morgan Stanley survey also indicates that lead times for the iPhone 16 were shorter than those of any iPhone released in the past five years, according to Apple Insider.
Those two data points could be taken as a sign that iPhone 16 demand was lower than Apple had expected, but they could also mean Apple’s production was simply in line with consumer demand. In its fiscal fourth-quarter earnings, Apple reported that overall iPhone revenue grew by 6%, although it’s unclear how much of that came from the iPhone 16, since Apple doesn’t break out sales data for specific models.
How the iPhone 16 has been received by consumers is a crucial question because it could determine the success of Apple Intelligence, given that iPhone 16 devices are among the only phones that support the technology.
Gerrit Schneemann, a senior analyst covering smartphones for Counterpoint Research, points to Samsung’s Galaxy S24 lineup as another example that AI isn’t a selling point for phones yet.
“A lot of upgraders, for example, for the S24 Ultra [were] coming from older Ultra devices,” he said. “So for us, that meant they’re upgrading because it’s time to upgrade, not necessarily because it [has] Galaxy AI.”
But perhaps the camera could play a role in changing that, taking AI from gimmicky to practical.
Smart glasses are coming next
While it makes sense for the cameras on our phones to be the first step toward an all-seeing AI assistant, these updates are also paving the way for smart glasses.
Smart glasses famously failed to gain traction with consumer audiences roughly a decade ago in the era of Google Glass. Back then, the search giant’s high-tech eyewear flopped for a few important reasons: It sparked privacy concerns, lacked use cases compelling enough to justify the high price and suffered from technical limitations such as short battery life and a narrow field of view.
But generative AI has put camera-equipped spectacles back in the spotlight. Sameer Samat, president of the Android ecosystem at Google, says advancements in AI have made this the right time to reexamine the feasibility of smart glasses. The company will soon be releasing prototype smart glasses with its Project Astra technology to testers to gather feedback, a sign that smart glasses may make a comeback in 2025.
“We were playing around with what these models can do using the phone, and the cameras on the phone as a way of interacting with the world, and it was truly blowing us away, what was possible,” Samat said.
Meta’s latest Ray-Ban glasses can already use AI to analyze what you’re seeing and provide answers in real time, and the tech giant just started rolling out always-on AI assistance to those who are part of Meta’s early access program. That means Meta’s AI helper will be able to listen continuously, so you don’t have to keep prompting it each time you want to ask a question. Google’s prototype smart glasses work similarly: they’ll passively listen for input once you activate Gemini, until you pause it.
When trying the Google prototype in December, CNET’s Scott Stein walked around a demo room in Google’s offices asking about various elements — from books on a shelf to a Nespresso coffee machine — without having to constantly invoke the assistant.
In my own experience trying the glasses, I asked questions such as whether the plant I was looking at was ideal for indoor environments and received an answer in my ears almost instantly. Today, most people would probably take the extra steps of snapping a photo of the plant, uploading it to Gemini or ChatGPT, and then asking the question.
“It isn’t always the most natural [thing] to be holding your phone up to everything,” Samat said when talking about the inspiration behind Android XR. “Wouldn’t this be perfect for a pair of glasses? That leads to us thinking about glasses.”
Meanwhile, OpenAI CEO Sam Altman and former Apple design chief Jony Ive are collaborating on an AI-powered computing device that’s “less socially disruptive than the iPhone,” according to The New York Times. While little is known about the product, the project is another sign that a wave of new AI devices is likely arriving soon.
That comes after gadgets like the Humane AI Pin and Rabbit R1, both of which run on AI-powered software and leverage the camera to answer questions about one’s surroundings, got off to a rocky start in 2024. Those devices were widely panned for not living up to expectations, malfunctioning at launch and generally failing to be as intuitive or helpful as a smartphone, although both have been updated significantly in recent months.
It’s unclear exactly what devices of the future will look like, and what’s even less certain is whether any consumer tech product will be as impactful and helpful as the devices we already carry in our pockets. But if you want a peek at where things are going, there’s a good chance it all starts with the camera on the phone you own today.
Eventually, generative AI features — whether they leverage the camera or not — will feel so essential to mobile devices that phones without the technology may feel archaic or irrelevant, according to Popal. She likens it to the arrival of the internet and app stores on our phones.
“The old smartphone,” she says, “will just seem so not smart.”