This week, OpenAI announced on X (formerly known as Twitter) that ChatGPT can now reference past chats to provide more personalized responses. (Image courtesy of ChatGPTapp)
The feature draws on your preferences and interests to make ChatGPT more useful for writing, seeking advice, learning, and more.
Sam Altman, who co-founded OpenAI and has served as its CEO since 2019, told his millions of followers on X that the company has significantly improved ChatGPT's memory capabilities, allowing it to reference all of your previous conversations.
He emphasized that this feature is exciting because it points to the possibility of AI systems that develop a deep understanding of you over time, making them increasingly personalized and helpful.
The memory feature rolled out this week to Pro users and will soon be available to Plus users. Users can also opt out of memory entirely or use Temporary Chat, which neither reads from nor writes to memory.
ChatGPT users on social media have been exploring what many consider one of the most significant updates since Google's Gemini introduced its own cross-chat memory feature.
In contrast, Grok's comparable update shipped with bugs and failed to deliver its promised functionality.
When asked a general, non-specific question, ChatGPT appears to run a top-N search over past conversations and summarize the relevant results.
For specific topics, however, it uses a retrieval-augmented generation (RAG) process. Unfortunately, this process can yield inaccuracies: the model sometimes confidently provides incorrect information, or surfaces details from older, unrelated topics.
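To make the distinction concrete, here is a minimal sketch of the retrieval step that both approaches share: score past conversation snippets against the query and keep the top N. This is purely illustrative and not OpenAI's implementation; it uses a toy bag-of-words similarity where a production RAG system would use learned dense embeddings and a vector index.

```python
import math
from collections import Counter

def embed(text):
    # Toy "embedding": word-count vector. Real systems use dense
    # learned embeddings (e.g., from an embedding model).
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def top_n(query, history, n=3):
    # Rank past-conversation snippets by similarity to the query
    # and return the N best matches for the model to condition on.
    q = embed(query)
    return sorted(history, key=lambda s: cosine(q, embed(s)), reverse=True)[:n]

history = [
    "We discussed training a PyTorch model on MNIST",
    "You asked for pasta recipes with mushrooms",
    "Notes on retrieval augmented generation pipelines",
]
print(top_n("how does retrieval augmented generation work", history, n=1))
```

In a RAG setup, the retrieved snippets would then be inserted into the model's context before it answers; the failure mode described above arises when retrieval surfaces stale or loosely related snippets that the model nonetheless treats as authoritative.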
Although ChatGPT claims it can search by date range, asking about a topic that appears only in your past conversations may cause it to search the web instead and return nothing, even when prior discussions on that topic exist.
This raises questions about the effectiveness of ChatGPT's memory system, especially since many users regularly delete their entire chat history.
Does this issue affect every model? Are different models aware of conversations held with different users?
Most users approach ChatGPT as a single-use search engine, preferring a clean slate with each query.
Relying on past conversations can sometimes introduce bias into responses, as the model might depend too heavily on its memory, even when the context is no longer relevant.