Long-Term Spyware Threat Discovered in ChatGPT’s macOS Memory Feature
OpenAI ChatGPT Vulnerability Could Enable Long-Term Spyware Infections
A now-patched security vulnerability in OpenAI’s ChatGPT app for macOS could have allowed attackers to implant long-term spyware, dubbed SpAIware, into the tool’s memory, facilitating continuous data exfiltration from users’ interactions. Security researcher Johann Rehberger identified the issue, which stemmed from a feature called memory, introduced by OpenAI in February.
ChatGPT’s Memory Feature Exploited
The memory feature allows ChatGPT to retain certain information across conversations, improving user convenience by not requiring repeated input. However, this same feature could be abused by attackers to plant malicious instructions that persist across sessions, even surviving chat deletions.
Persistence Across Sessions
Rehberger demonstrated that by manipulating ChatGPT’s memory through indirect prompt injections, attackers could introduce malicious instructions that persist between conversations. This persistence enables ongoing data theft from future chats, making the attack especially dangerous.
See Also: So, you want to be a hacker?
Offensive Security, Bug Bounty Courses
Hypothetical Attack Scenario
In a potential attack, a user could be tricked into analyzing a malicious document or visiting a harmful website using ChatGPT. This interaction could trigger the memory update with hidden instructions, resulting in future conversations being sent to an attacker’s server.
OpenAI’s Fix and Recommendations
OpenAI has addressed the vulnerability in ChatGPT version 1.2024.247, eliminating the exfiltration vector. Users are encouraged to regularly review and clean their stored memories for suspicious entries to avoid malicious tampering.
The Dangers of Long-Term Memory in AI Systems
This attack highlights the risks posed by long-term memory in AI systems, both in terms of misinformation and potential continuous communication with attacker-controlled servers. Rehberger emphasized the importance of user vigilance when dealing with stored AI memories.
Trending: Offensive Security Tool: DDoSlayer
MathPrompt Jailbreak Technique Revealed
In other AI security news, a new AI jailbreaking technique called MathPrompt was recently disclosed by academics. This method uses symbolic mathematics to bypass safety mechanisms in large language models (LLMs), achieving harmful outputs 73.6% of the time compared to just 1% with unmodified prompts.
Microsoft’s New Correction Capability for AI
On the positive side, Microsoft introduced a Correction feature to improve AI output accuracy by identifying and fixing hallucinations in real-time. This advancement, built on the Groundedness Detection feature, aims to provide more reliable generative AI outputs.
Are u a security researcher? Or a company that writes articles about Cyber Security, Offensive Security (related to information security in general) that match with our specific audience and is worth sharing? If you want to express your idea in an article contact us here for a quote: [email protected]
Source: thehackernews.com