OpenAI recently unveiled a new feature for ChatGPT called “memory”, which stores things you explicitly ask the program to retain for later use. The feature can be a way to make anything you build with ChatGPT, be it essays, resumes, or code, more attuned to your preferences.
The memory function is being slowly rolled out to the ChatGPT user base. OpenAI set me up with memory in my account, and I had the chance to try it out for myself.
What I found is an intriguing but also cumbersome way to fine-tune ChatGPT’s responses. Management of memory entries is primitive and needs more development, and stored memories can often be overridden by ChatGPT’s training data, which takes precedence.
Instructions for memory can be found in an FAQ, while a broader discussion introducing the feature appears in an OpenAI blog post.
The memory capability is available to all paying users of the $20-per-month Plus version of ChatGPT. The Plus version adds access to the latest model, GPT-4 rather than GPT-3.5, and the quality of output can be noticeably better. Plus also allows use of DALL-E, the image-generation program.
The company notes that memories stored in Plus accounts might be used to train ChatGPT, but memories in Enterprise accounts will not.
The simplest approach to using memory is to keep working with ChatGPT as you normally would and hope it retains what you’ve typed as memories. But you can be more explicit, such as by starting a prompt with “remember that…” and then adding the thing you want the program to store. ChatGPT will usually respond with something like “Got it!” and repeat the fact.
Memories, in this respect, aren’t like memories one usually talks about. They are not rich images of a time in the past. They are more like isolated fragments of data you want the program to have access to.
Memory is separate from what are known as custom instructions, which were introduced previously. Custom instructions allow users to shape the tone and quality of ChatGPT responses.
Think of the distinction like this: custom instructions are about what qualities ChatGPT should have; memory is about what associations any given prompt should have to other phenomena, including aspects of yourself, such as habits (“remember I have class every Monday morning”).
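ChatGPT’s memory isn’t exposed in OpenAI’s public API, and the company hasn’t published the mechanics, but the distinction is easy to picture if you emulate both features yourself: custom instructions and memories can both be treated as text prepended to the model’s context, one describing how to answer, the other supplying facts to draw on. Here is a minimal sketch; the prompt format and the helper function are my assumptions for illustration, not OpenAI’s implementation:

```python
# Minimal sketch: custom instructions vs. memories, emulated with the
# OpenAI chat completions API. How ChatGPT actually injects memories is
# not public; the prompt format below is an assumption for illustration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Custom instructions: what qualities the responses should have.
custom_instructions = "Keep responses brief and avoid jargon."

# Memories: isolated facts for the model to draw on.
memories = [
    "I have class every Monday morning.",
    "I can't stand disco.",
]

def ask(prompt: str) -> str:
    system = (
        custom_instructions
        + "\n\nThings to remember about the user:\n"
        + "\n".join(f"- {m}" for m in memories)
    )
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": system},
            {"role": "user", "content": prompt},
        ],
    )
    return response.choices[0].message.content

print(ask("Plan my study schedule for next week."))
```

In this framing, a memory is inert until a prompt gives it something to attach to, which fits the behavior described below.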
The least effective way to use memory is to insert entries that are opinions you have about popular topics. Such entries in the memory database will generally be overridden by the collective information of the internet and other training-data sources.
First, I tried to compose short pieces of tech writing by noting things that are important about chip maker Nvidia. I found it’s difficult to establish facts that ChatGPT will stick to when the mass of pre-training data on a popular subject overwhelms my suggestions.
For example, I asked ChatGPT to remember a particular idea about Nvidia, such as, “Remember that Nvidia has dominance in AI because its technology is good enough that competing alternative technologies have a hard time convincing buyers to break with what they’re used to.”
Later, I asked ChatGPT, “Why is Nvidia so dominant in training large neural nets such as GPT-4?”
The program responded with a perfectly valid summation of Nvidia’s strengths in the market for AI chips and software. However, its response did not include my point about “good enough” capabilities. I then asked ChatGPT, “Anything from memory?”, and the program recalled and summarized the point about “good enough” technology.
That exchange suggests a lot more work would need to be done to use ChatGPT as a tool for reporting on popular subjects because, by default, a reporter’s acquired knowledge will be subsumed by the wisdom of the pre-training data.
Structuring imaginary narratives can be easier in some ways than non-fiction because the pre-training of GPT, while it may impose style or genre elements, can more easily yield to fictional elements that you express as memories.
I tried writing a spy novel from scratch about a heroine named Eloise. I was able to inject some texture by instructing ChatGPT to remember Eloise’s partner, Tony Diamond, the fact that she didn’t like him, and the fact that he had a bossy, controlling personality.
Interestingly, ChatGPT combined the three facts into a single stored memory: “Eloise’s partner is Tony Diamond, but she doesn’t really like him. Tony Diamond is a controlling type, always wanting to run the show.”
You can check what memories have been recorded at any time by going into the Settings section of ChatGPT, under “Personalization”, where the memories are stored in descending chronological order. There, you can delete individual memory entries. You can also select and copy memories as normal text, which is useful if you’d like to revise a memory by pasting it into the prompt.
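To make the current feature set concrete, the Personalization panel effectively gives you just three operations on memories: list, delete, and copy. A hypothetical sketch of that minimal interface (none of this is OpenAI code; the class and method names are mine):

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class MemoryStore:
    """Hypothetical stand-in for the Personalization panel: entries can
    only be listed (newest first), deleted one at a time, or copied out
    as plain text for revision by pasting into a prompt."""
    entries: list[tuple[datetime, str]] = field(default_factory=list)

    def add(self, text: str) -> None:
        self.entries.append((datetime.now(), text))

    def list_newest_first(self) -> list[str]:
        # The panel shows memories in descending chronological order.
        return [text for _, text in sorted(self.entries, reverse=True)]

    def delete(self, index: int) -> None:
        # Deletion is per entry; there is no bulk editing.
        del self.entries[index]

    def copy_as_text(self, index: int) -> str:
        # Revising a memory means copying it, editing it, and re-prompting.
        return self.entries[index][1]
```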
The potential of those stored Eloise elements became clear when I branched out by starting a new chat. I asked ChatGPT, “Who shows up as Tony Diamond’s partner in ‘Casino Sabotage’?” Obviously, this new story would be a kind of tie-in to the main Eloise franchise. ChatGPT admirably brought in the most salient elements about Eloise from memory.
Getting more personal is a better approach to memories than either non-fiction or creative fiction. I told ChatGPT that I “can’t stand disco” and then asked the program to recommend disco songs of the 1970s. It noted my lack of affection for the genre and then went ahead and suggested some tunes.
What’s interesting is that when I started a brand-new chat and asked for disco recommendations, I got the same response: the distaste for disco stored in memory carried over.
In all of these instances (fiction, non-fiction, and personal preferences), it’s unclear how much ChatGPT can reliably extrapolate from stored memories. But when I asked ChatGPT if I would like the Bee Gees, the program correctly brought up the stored distaste for disco.
More experimentation is necessary to tell just how much ChatGPT can extract from, or associate with, such memories.
There are also some fun logic experiments of a sort. I tried subtly reprogramming basic elements with directives, such as “remember blue is red.” When I subsequently asked for a picture of blue sheep, ChatGPT correctly painted their wool red.
That’s a good example of how memories are not really memories; they are ways to condition, or fine-tune, ChatGPT’s outputs to be more particular.
There are also some easy failure cases. OpenAI suggests a memory could be a preference, such as, “I like verbose responses.” I told ChatGPT that I “always like responses in French.” The program replied in French, “D’accord!”, indicating its compliance. But when I opened a new chat, none of the responses were in French.
If you use a lot of memory, it’s clear that ChatGPT will require a different way of managing stored memories in the future. The current method of going into Settings is fine when you have a short list, but it wouldn’t be a great way to manage dozens or possibly hundreds of entries.
A better solution would be for OpenAI to merge the memory function with the file-upload function that lets a user submit whole documents. For some preferences, it would be easier to supply a lengthy document than to type and edit individual memory entries, as the sketch below suggests.
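As a rough illustration of that suggestion, and not anything OpenAI currently offers, a plain-text preferences document could be split into individual memory entries; the file name and one-entry-per-line format here are assumptions:

```python
from pathlib import Path

def load_memories_from_document(path: str) -> list[str]:
    """Hypothetical: treat each non-empty line of a plain-text
    preferences document as one memory entry."""
    lines = Path(path).read_text(encoding="utf-8").splitlines()
    return [line.strip() for line in lines if line.strip()]

# A file such as preferences.txt might contain lines like:
#   I can't stand disco.
#   I have class every Monday morning.
memories = load_memories_from_document("preferences.txt")
```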
The hardest thing for users at first will be figuring out what memories to input. Starting from a blank page, you may not know what you want the program to retain. Using ChatGPT on a regular basis, and seeing where you run into obstacles, is probably the best route to surfacing the preferences and conditions you’d like to store as memories.