Apple’s latest developer betas launched last week with a handful of the generative AI features that were announced at WWDC and are headed to your iPhones, iPads, and Macs over the next several months. On Apple’s computers, however, you can actually read the instructions programmed into the model supporting some of those Apple Intelligence features.
They show up as prompts that precede anything you say to a chatbot by default, and we’ve seen them uncovered for AI tools like Microsoft Bing and DALL-E before. Now a member of the macOS 15.1 beta subreddit posted that they’d discovered the files containing those backend prompts. You can’t alter any of the files, but they do give an early hint at how the sausage is made.
In the example above, a bot playing the role of a “helpful mail assistant” is told how to ask a series of questions based on the content of an email. It could be part of Apple’s Smart Reply feature, which suggests possible replies for you.
Screenshot: Wes Davis / The Verge
This sounds like Apple’s “Rewrite” feature, one of the Writing Tools that you can access by highlighting text and right-clicking (or, in iOS, long-pressing) on it. Its instructions include passages saying “Please limit the answer within 50 words. Do not hallucinate. Do not make up factual information.”
Screenshot: Wes Davis / The Verge
This brief prompt summarizes emails, with a careful instruction not to answer any questions.
Screenshot: Wes Davis / The Verge
I’m pretty certain that this is ajax’s instruction set for generating a “Memories” video with Apple Photos. The passage that says “do not write a story that is religious, political, harmful, violent, sexual, filthy or in any way negative, sad or provocative” might just explain why the feature rejected my prompt asking for “images of sadness”:
A shame. It’s not hard to get around, though. I got it to generate a video in response to the prompt, “Provide me with a video of people mourning.” I won’t share the resulting video because there are pictures of people who aren’t me in it, but I will show you the best picture it included in the slideshow:
There are far more prompts contained in the files, all laying out the hidden instructions given to Apple’s AI tools before your prompt is ever submitted. But here’s one last note before you go:
Files I browsed through refer to the model as “ajax,” which some Verge readers might recall as the rumored internal name for Apple’s LLM last year.
The person who found the prompts also posted steps for locating the files within the macOS Sequoia 15.1 developer beta.
Expand the “purpose_auto” folder and you should see a list of other folders with long, alphanumeric names. Inside most of those, you’ll find an AssetData folder containing “metadata.json” files. Opening them should show you some code and — occasionally, at the bottom of some of them — the instructions passed to your machine’s local incarnation of Apple’s LLM. But you should remember these live in a part of macOS that contains the most sensitive files on your system. Tread with caution!
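If you’d rather not click through every folder by hand, here’s a minimal sketch of that same search in Python. The purpose_auto path is a placeholder (fill in wherever that folder actually lives on your beta install), and the phrases it looks for are just ones quoted earlier in this piece, so treat it as a rough filter rather than a definitive tool.

```python
# Minimal sketch: walk the purpose_auto folder described above and print the tail
# of any metadata.json file that contains prompt-like phrasing. Assumptions:
# PURPOSE_AUTO is a placeholder path, and the search strings are simply phrases
# quoted in this article, not a documented schema.
import json
from pathlib import Path

PURPOSE_AUTO = Path("/path/to/purpose_auto")  # placeholder, not a confirmed location

for metadata in sorted(PURPOSE_AUTO.glob("*/AssetData/metadata.json")):
    try:
        raw = metadata.read_text()
        json.loads(raw)  # skip files that aren't valid JSON
    except (OSError, ValueError):
        continue
    # The prompts reportedly sit near the bottom of some files, so print the tail
    # of anything that looks like it carries assistant instructions.
    if "Do not hallucinate" in raw or "helpful mail assistant" in raw:
        print(f"--- {metadata} ---")
        print(raw[-1000:])
```

Run it with python3 from a terminal; it only reads the files, which lines up with the point above that you can’t alter them anyway.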