During last week’s introduction of Apple Intelligence — Apple’s artificial intelligence initiative — software engineering head Craig Federighi announced that the company will run some generative AI models in a secure cloud computing environment when the models require extra horsepower.
Called Private Cloud Compute (PCC), the service will be subject to scrutiny by outside security experts. Said Federighi: “Just like your iPhone, independent experts can inspect the code that runs on these servers to verify this privacy promise.” The point is to let outsiders verify Apple’s privacy promises, including that user data will never be stored on PCC servers and will be expunged from memory once a request is fulfilled.
Also: Here’s how Apple’s keeping your cloud-processed AI data safe (and why it matters)
Federighi did not go into detail about how the PCC servers will be inspected or audited by security researchers, but a subsequent blog post by Apple technical teams states the PCC servers will run a distinct version of the company’s operating system software that researchers will be allowed to inspect.
“When we launch Private Cloud Compute, we’ll take the extraordinary step of making software images of every production build of PCC publicly available for security research,” states a post by Apple’s Security Engineering and Architecture team and collaborating teams.
The post goes on to say that Apple will “periodically also publish a subset of the security-critical PCC source code, [and] in a first for any Apple platform, PCC images will include the sepOS firmware and the iBoot bootloader in plaintext, making it easier than ever for researchers to study these critical components.”
Apple emphasizes that its devices “will be willing to send data only to PCC nodes that can cryptographically attest to running publicly listed software” as a means to ensure its privacy and security guarantees.
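To make the attestation idea concrete, here is a minimal, hypothetical sketch in Swift (using CryptoKit) of a client that refuses to send a request unless the node’s attested software measurement matches a publicly listed image digest. The types, names, and digest values are invented for illustration and are not Apple’s actual PCC protocol or API.

```swift
import CryptoKit
import Foundation

// Hypothetical sketch: a client only sends data to a node whose attested
// software measurement appears on a public list of released PCC images.
// These names and values are invented for illustration, not Apple's API.

struct AttestedNode {
    let endpoint: URL
    // Hex digest of the software image the node cryptographically attests
    // to be running (verification of the attestation itself is elided here).
    let attestedImageDigest: String
}

// Digests of publicly listed software images (placeholder value).
let publiclyListedDigests: Set<String> = [
    "placeholder-digest-for-illustration-only"
]

// Compute a measurement of a software image: conceptually, a hash of its bytes.
func measurement(of imageData: Data) -> String {
    SHA256.hash(data: imageData)
        .map { String(format: "%02x", $0) }
        .joined()
}

// The client's gate: send the request only if the attested image is publicly listed.
func shouldSendRequest(to node: AttestedNode) -> Bool {
    publiclyListedDigests.contains(node.attestedImageDigest)
}
```

The quoted claim implies the measurement arrives as part of a cryptographic attestation rather than a bare string the node could simply fabricate; the sketch above elides that verification step.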
Little detail was provided about the nature of the server software, other than that it is derived from the iOS and macOS operating systems.
The servers will run on Apple’s own chips, as the iPhone, iPad, and Mac do. According to the post, “We paired this hardware with a new operating system: a hardened subset of the foundations of iOS and macOS tailored to support large language model (LLM) inference workloads while presenting an extremely narrow attack surface. This allows us to take advantage of iOS security technologies such as Code Signing and sandboxing.”
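For readers unfamiliar with the term, code signing boils down to refusing to run code whose contents do not verify against a trusted signature. The following is a generic, hedged sketch of that check in Swift with CryptoKit; it is not Apple’s Code Signing implementation, and the function and parameter names are invented for illustration.

```swift
import CryptoKit
import Foundation

// Generic sketch of the idea behind code signing: run a binary only if a
// signature over its contents verifies against a trusted public key.
// This is not Apple's Code Signing implementation; names are illustrative.

func isTrusted(binary: Data,
               derSignature: Data,
               trustedKey: P256.Signing.PublicKey) -> Bool {
    guard let signature = try? P256.Signing.ECDSASignature(derRepresentation: derSignature) else {
        return false
    }
    // The signature covers a SHA-256 digest of the binary's bytes.
    return trustedKey.isValidSignature(signature, for: SHA256.hash(data: binary))
}
```

Sandboxing, the other technology named in the quote, is enforced by the operating system rather than by application code, so it is not shown here.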
Also: Apple’s AI extravaganza left out 2 key advances – maybe next time?
Apple’s iOS and macOS are based on a combination of open-source technologies, such as the Darwin operating system developed at Apple in the late 1990s and FreeBSD, and closed-source system software developed in-house.
It’s uncertain when developers will get a look at the new software.
In the blog post, Apple researchers say they will give security researchers a “first look” at the software “soon.” A note on Apple’s developer site says Apple Intelligence will be available “in an upcoming beta” but says nothing specific about PCC timing.
ZDNET’s Maria Diaz speculates that iOS 18 betas will become available in July, although Apple’s website states in a footnote that “Apple Intelligence will be available in beta on iPhone 15 Pro, iPhone 15 Pro Max, and iPad and Mac with M1 and later, with Siri and device language set to US English, as part of iOS 18, iPadOS 18, and macOS Sequoia this fall.”