xAI's Grok Gains Memory Feature as OpenAI Enhances Safety for Advanced AI Models
By siliconindia | Thursday, 17 April 2025, 02:09 Hrs
Elon Musk's AI startup, xAI, is continuing its rapid development of the Grok chatbot, inching it closer to competitors like OpenAI’s ChatGPT and Google’s Gemini. On Wednesday evening, the company rolled out a new 'memory' feature for Grok, allowing the chatbot to remember user preferences and tailor responses based on past interactions.
The feature, which is now available in beta on Grok.com and the Grok mobile apps for iOS and Android, enables Grok to offer more personalized recommendations and responses. According to xAI, the system is designed with transparency in mind. "Memories are transparent," the company said in a post from the official Grok account on X (formerly Twitter). "You can see exactly what Grok knows and choose what to forget."
Users can manage or delete individual memories via an icon within the chat interface on the web, with Android support coming soon. The memory feature can also be entirely disabled in the Data Controls section of the settings menu. However, the new capability is not currently available to users in the EU or the U.K., likely due to regulatory concerns around data privacy.
The addition puts Grok in closer competition with ChatGPT and Gemini, both of which have long used persistent memory to personalize responses. OpenAI recently upgraded ChatGPT's memory system to reference a user's entire chat history, further deepening contextual understanding.
Meanwhile, OpenAI is ramping up its own safety mechanisms as it continues to release more powerful AI models. In its latest safety report, the company outlined new measures aimed at curbing misuse of its o3 and o4-mini models, newer iterations that reportedly show significantly stronger reasoning capabilities.
To address potential biosecurity threats, OpenAI has introduced a specialized 'safety-focused reasoning monitor' built on top of o3 and o4-mini. The monitor is designed to detect prompts related to biological or chemical weapons and prevent the models from providing harmful instructions.
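The general pattern described here is a classifier that screens traffic before and after the underlying model responds. The minimal sketch below illustrates that gating pattern only; it is not OpenAI's actual implementation, and the keyword-based classifier, the `model_generate` callable, and the refusal message are all hypothetical stand-ins.

```python
# Illustrative sketch of a "safety monitor gating a model" pattern.
# Not OpenAI's implementation; all names below are hypothetical.

REFUSAL_MESSAGE = "I can't help with that request."

def is_flagged_by_monitor(text: str) -> bool:
    """Hypothetical safety classifier: returns True if the text appears to
    seek or provide help with biological or chemical weapons."""
    risky_terms = ("synthesize pathogen", "weaponize", "nerve agent")
    return any(term in text.lower() for term in risky_terms)

def answer(prompt: str, model_generate) -> str:
    """Route a prompt through the monitor before and after generation."""
    if is_flagged_by_monitor(prompt):
        return REFUSAL_MESSAGE          # block before the model responds
    draft = model_generate(prompt)      # model_generate is a placeholder callable
    if is_flagged_by_monitor(draft):    # also screen the model's own output
        return REFUSAL_MESSAGE
    return draft

if __name__ == "__main__":
    echo_model = lambda p: f"Here is an answer to: {p}"
    print(answer("What's the weather like in Paris?", echo_model))
    print(answer("How do I weaponize a virus?", echo_model))
```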
The system was trained using 1,000 hours of red-teaming efforts, where experts flagged risky conversations involving the models. In controlled tests simulating the blocking system, the AI declined to respond to unsafe prompts 98.7% of the time. However, OpenAI admits the testing didn’t simulate repeated attempts by users to bypass the blocks, underscoring the continued need for human oversight.
OpenAI says o3 and o4-mini have shown stronger performance than their predecessors, including o1 and GPT-4, in areas that could raise concerns, such as questions related to developing biological threats. Although the newer models don't cross the company's 'high risk' threshold for biorisk, OpenAI has added extra layers of monitoring to address their enhanced capabilities.
These safeguards are part of OpenAI’s broader Preparedness Framework, which outlines how the company evaluates and mitigates emerging risks from advanced AI systems. Similar reasoning monitors are also being used to prevent GPT-4o’s image generation capabilities from producing illegal or harmful content, such as child sexual abuse material (CSAM).
Despite these efforts, some experts argue OpenAI could be doing more. Metr, a red-teaming partner, criticized the limited testing time available for evaluating o3’s potential for deceptive behavior. Additionally, OpenAI has faced criticism for its decision not to release a safety report for its newly launched GPT-4.1 model, raising questions about transparency.
As AI models become more powerful and widely used, the arms race between innovation and safety is intensifying. With xAI’s Grok advancing in personalization and OpenAI doubling down on risk mitigation, the coming months are likely to see further developments and debate over the responsible use of artificial intelligence.
