Yet Google and its hardware partners argue privacy and security are a major focus of the Android AI approach. VP Justin Choi, head of the security team, mobile eXperience business at Samsung Electronics, says its hybrid AI offers users "control over their data and uncompromising privacy."
Choi describes how features processed in the cloud are protected by servers governed by strict policies. "Our on-device AI features provide another element of security by performing tasks locally on the device with no reliance on cloud servers, neither storing data on the device nor uploading it to the cloud," Choi says.
Google says its data centers are designed with robust security measures, including physical security, access controls, and data encryption. When processing AI requests in the cloud, the company says, data stays within secure Google data center architecture and the firm is not sending your information to third parties.
Meanwhile, Galaxy's AI engines are not trained with user data from on-device features, says Choi. Samsung "clearly indicates" which AI functions run on the device with its Galaxy AI symbol, and the smartphone maker adds a watermark to show when content has used generative AI.
The firm has also introduced a new security and privacy option called Advanced Intelligence settings to give users the choice to disable cloud-based AI capabilities.
Google says it "has a long history of protecting user data privacy," adding that this applies to its AI features powered on-device and in the cloud. "We utilize on-device models, where data never leaves the phone, for sensitive cases such as screening phone calls," Suzanne Frey, vice president of product trust at Google, tells WIRED.
Frey describes how Google products rely on its cloud-based models, which she says ensures "consumer's information, like sensitive information that you want to summarize, is never sent to a third party for processing."
"We've remained committed to building AI-powered features that people can trust because they are secure by default and private by design, and most importantly, follow Google's responsible AI principles that were first to be championed in the industry," Frey says.
Apple Changes the Conversation
Rather than simply matching the "hybrid" approach to data processing, experts say Apple's AI strategy has changed the nature of the conversation. "Everyone expected this on-device, privacy-first push, but what Apple actually did was say, it doesn't matter what you do in AI, or where, it's how you do it," Doffman says. He thinks this "will likely define best practice across the smartphone AI space."
Even so, Apple hasn't won the AI privacy battle just yet: The deal with OpenAI, which sees Apple uncharacteristically opening up its iOS ecosystem to an outside vendor, could put a dent in its privacy claims.
Apple disputes Musk's claims that the OpenAI partnership compromises iPhone security, pointing to "privacy protections built in for users who access ChatGPT." The company says you will be asked for permission before your query is shared with ChatGPT, that IP addresses are obscured, and that OpenAI will not store requests, though ChatGPT's data use policies still apply.
Partnering with another company is a "strange move" for Apple, but the decision "would not have been taken lightly," says Jake Moore, global cybersecurity adviser at security firm ESET. While the exact privacy implications are not yet clear, he concedes that "some personal data may be collected on both sides and potentially analyzed by OpenAI."