Sleepy Pickle: New Attack Targets Machine Learning Models

Jun 14, 2024 | News





The security risks associated with the Pickle format have resurfaced with the discovery of a new hybrid machine learning (ML) model exploitation technique named Sleepy Pickle. According to Trail of Bits, this sophisticated attack method weaponizes the commonly used format for packaging and distributing ML models, posing a significant supply chain risk to an organization’s downstream customers.

“Sleepy Pickle is a stealthy and novel attack technique that targets the ML model itself rather than the underlying system,” explained Boyan Milanov, a security researcher at Trail of Bits.

Pickle, a serialization format widely used by ML libraries such as PyTorch, has inherent vulnerabilities that allow arbitrary code execution during deserialization, triggered simply by loading a pickle file.
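A minimal sketch of why loading alone is dangerous: pickle's `__reduce__` protocol lets an object specify a callable to invoke at deserialization time. The `Malicious` class and `payload` function below are illustrative names, with a benign side effect standing in for attacker code.

```python
import pickle

EXECUTED = []

def payload():
    # Stand-in for attacker code: a real payload could call any
    # importable function (os.system, exec, ...) instead.
    EXECUTED.append(True)

class Malicious:
    def __reduce__(self):
        # pickle records this (callable, args) pair; unpickling then
        # calls payload() -- merely loading the file runs the code.
        return (payload, ())

blob = pickle.dumps(Malicious())
pickle.loads(blob)                     # no attribute access needed
print("payload ran:", bool(EXECUTED))  # payload ran: True
```

Note that the victim never has to use the loaded object; the call fires inside `pickle.loads` itself.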

In its documentation, Hugging Face advises users to load models only from trusted sources, rely on signed commits, and consider using TensorFlow or JAX formats with the from_tf=True auto-conversion mechanism to mitigate risks.


Sleepy Pickle technique

The Sleepy Pickle technique involves embedding a malicious payload into a pickle file using open-source tools like Fickling. The payload can then be delivered to a target through various methods, including adversary-in-the-middle (AitM) attacks, phishing, supply chain compromises, or exploiting system weaknesses.
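Fickling ships its own injection and analysis tooling; as a stdlib-only illustration of the detection side (my own heuristic, not Fickling's API), the sketch below walks a pickle stream with `pickletools` and flags the globals the file would import on load, before anything is deserialized:

```python
import pickle
import pickletools

# Globals a benign ML model pickle has no business importing.
DANGEROUS = {("builtins", "eval"), ("builtins", "exec"),
             ("os", "system"), ("subprocess", "Popen")}

def imported_globals(blob: bytes):
    """List the (module, name) pairs a pickle stream would import.

    Handles protocol-0 GLOBAL opcodes and the STACK_GLOBAL opcode of
    newer protocols, which consumes the two preceding string pushes.
    """
    found, strings = [], []
    for opcode, arg, _pos in pickletools.genops(blob):
        if opcode.name == "GLOBAL":
            module, name = arg.split(" ", 1)
            found.append((module, name))
        elif opcode.name == "STACK_GLOBAL" and len(strings) >= 2:
            found.append((strings[-2], strings[-1]))
        if isinstance(arg, str):
            strings.append(arg)
    return found

class Evil:
    def __reduce__(self):
        return (eval, ("1 + 1",))  # placeholder payload

blob = pickle.dumps(Evil())
hits = [g for g in imported_globals(blob) if g in DANGEROUS]
print(hits)  # [('builtins', 'eval')] -- flagged without loading
```

This is only a heuristic: payloads can obfuscate their imports, which is why static scanning complements, rather than replaces, loading models only from trusted sources.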


“When the file is deserialized on the victim’s system, the payload executes and modifies the model in-place, potentially inserting backdoors, controlling outputs, or tampering with processed data before returning it to the user,” Milanov stated.

In practical terms, the injected payload can alter the model’s behavior by tampering with model weights or modifying the input and output data. This could lead to generating harmful outputs or misinformation, such as false medical advice, stealing user data under specific conditions, or indirectly attacking users by manipulating summaries of news articles with links to phishing pages.
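As a toy illustration of the weight-tampering idea, a payload can mutate model parameters in place as the file is deserialized. A plain dict stands in for a real model here, and `SleepyModel` and `tamper` are invented names, not part of the published technique's tooling:

```python
import pickle

# Toy stand-in for a serialized model: in Sleepy Pickle, the payload
# rides inside the same pickle file as the model itself.
model = {"weights": [0.5, -1.2, 3.3], "bias": 0.1}

def tamper(m):
    """Illustrative payload: flip the sign of every weight so the
    'model' misbehaves while still looking structurally intact."""
    m["weights"] = [-w for w in m["weights"]]
    return m

class SleepyModel:
    """Wraps the model so deserialization returns a tampered copy."""
    def __init__(self, m):
        self.m = m
    def __reduce__(self):
        # On load, pickle calls tamper(model) and hands the victim
        # its return value: a silently backdoored model object.
        return (tamper, (self.m,))

blob = pickle.dumps(SleepyModel(model))
loaded = pickle.loads(blob)
print(loaded["weights"])  # [-0.5, 1.2, -3.3]
```

Because the victim receives a normal-looking object, no separate malicious artifact ever lands on disk, which is what makes the compromise hard to spot.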

Trail of Bits highlighted that Sleepy Pickle enables threat actors to maintain stealthy access to ML systems, making detection challenging since the model is compromised upon loading the pickle file within the Python process.




This method proves more effective than directly uploading a malicious model to platforms like Hugging Face, as it dynamically modifies model behavior or output without requiring targets to download and run a malicious model themselves.

“With Sleepy Pickle, attackers can create pickle files that are not ML models but can still corrupt local models if loaded together,” Milanov added. “The attack surface is thus much broader, as control over any pickle file in the target organization’s supply chain is sufficient to attack their model.”


Source: thehackernews.com
