Critical Zero-Days in PyTorch Scanner 'PickleScan' Create AI Supply Chain Risk

JFrog Discloses Critical Zero-Day Vulnerabilities in PyTorch Security Tool PickleScan, Enabling Arbitrary Code Execution

CRITICAL
December 4, 2025
5m read
Vulnerability · Supply Chain Attack · Cloud Security


Executive Summary

On December 3, 2025, the JFrog security research team disclosed three critical zero-day vulnerabilities in PickleScan, a widely adopted open-source tool for detecting malicious Python pickle files. The vulnerabilities carry a CVSS score of 9.3 (Critical) and pose a severe software supply chain risk to the Artificial Intelligence and Machine Learning (AI/ML) ecosystem. An attacker can exploit the flaws to craft a malicious AI model, typically a PyTorch checkpoint, that PickleScan incorrectly reports as safe. When an unsuspecting developer or organization loads the trojanized model, arbitrary code executes on their system. This attack vector enables the covert distribution of malware through public model repositories, bypassing a key security control in the MLOps pipeline.


Vulnerability Details

The vulnerabilities lie in the logic of PickleScan itself. The tool is designed to statically analyze a pickle file—a common format for serializing Python objects, heavily used for saving and loading AI models—to identify dangerous opcodes that could lead to code execution. The flaws discovered by JFrog represent bypass techniques, where a specially crafted pickle file can be constructed to appear benign to PickleScan's scanner while still containing a malicious payload that is executed upon deserialization by a standard Python pickle loader.
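
The report referenced here does not reproduce the bypass payloads, but the primitive such payloads abuse is well documented: pickle's __reduce__ protocol lets a serialized object nominate an arbitrary callable for the loader to invoke. The minimal sketch below, with a harmless echo standing in for a real payload, shows why deserializing an untrusted stream amounts to running attacker-chosen code:

```python
import os
import pickle

class Payload:
    # pickle calls __reduce__ to learn how to rebuild this object;
    # returning (callable, args) makes any loader invoke the callable.
    def __reduce__(self):
        return (os.system, ("echo code ran during unpickling",))

blob = pickle.dumps(Payload())

# A standard loader executes os.system as a side effect of loading.
pickle.loads(blob)
```

A scanner must recognize every encoding of this pattern in the opcode stream; the JFrog findings show that PickleScan's recognition logic can be evaded.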

This creates a dangerous gap in the security supply chain: an organization may believe it is safely handling untrusted models by scanning them with PickleScan, but in reality, it remains vulnerable to exploitation. The end result is arbitrary code execution on the machine that loads the model, which could be a developer's workstation, a training server, or a production inference server.

Affected Systems

  • Product: PickleScan (all versions available at the time of disclosure; no patched release existed when the flaws were announced)
  • Ecosystem: Any individual or organization that uses PickleScan to vet untrusted pickle files or PyTorch model checkpoints (.pt files).

This is not a vulnerability in PyTorch itself, but in a security tool designed to protect its users. However, the vast popularity of PyTorch makes the impact of a faulty scanner particularly widespread.

Exploitation Status

These are zero-day vulnerabilities, meaning they were not publicly known before JFrog's disclosure and no patches were available at the time of announcement. While there is no public evidence of active exploitation in the wild, the disclosure of the technical details means that threat actors could quickly weaponize these bypass techniques. The risk is especially high for organizations that automatically pull and deploy models from public repositories like Hugging Face.

Impact Assessment

This vulnerability represents a critical threat to the security of the AI/ML software supply chain. A successful exploit could lead to:

  • Compromise of Development Environments: Attackers could gain control of researcher or developer machines, stealing proprietary code, data, or credentials.
  • Production Server Takeover: If a malicious model is deployed to production, the attacker could compromise the inference servers, potentially stealing sensitive input data, manipulating model outputs, or using the servers as a pivot point into the broader corporate network.
  • AI Model Poisoning or Backdooring: An attacker could use the code execution vulnerability to subtly alter the behavior of the model itself, creating a backdoor that is triggered by specific inputs.

This undermines the trust in shared AI models and highlights the immaturity of security tooling in the rapidly evolving MLOps space.

Detection Methods

  • Static Analysis Limitations: The core issue is that static analysis tools like PickleScan can be bypassed. Relying solely on them for security is insufficient.
  • Dynamic Analysis (Sandboxing): The most effective way to detect a malicious model is to load it in a heavily sandboxed and monitored environment. Observe the model's behavior during loading and inference for suspicious activities such as network connections, file system access, or process creation. This is an application of D3FEND's D3-DA: Dynamic Analysis. (A minimal in-process sketch follows this list.)
  • Model Provenance: Whenever possible, only use models from trusted, verified sources. Check for digital signatures or other attestations of a model's origin.
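
As a lightweight complement to full sandboxing, Python's runtime audit hooks (PEP 578, Python 3.8+) can surface what a pickle stream actually does while it loads. The sketch below assumes a hypothetical untrusted_model.pkl; it logs every global the stream resolves plus a few high-risk events, and it observes only a single process, so it is no substitute for an isolated sandbox:

```python
import pickle
import sys

SUSPICIOUS = {"os.system", "subprocess.Popen", "socket.connect"}
events = []

def audit(event, args):
    # 'pickle.find_class' fires for every global the stream imports,
    # revealing exactly which callables the payload resolves.
    if event == "pickle.find_class" or event in SUSPICIOUS:
        events.append((event, args))

sys.addaudithook(audit)  # note: audit hooks cannot be removed once added

with open("untrusted_model.pkl", "rb") as f:  # hypothetical sample
    try:
        pickle.load(f)
    except Exception:
        pass  # a failed load may already have triggered side effects

for event, args in events:
    print("observed during load:", event, args)
```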

Remediation Steps

  1. Assume Untrusted Models are Malicious: Until a patched version of PickleScan or a more robust alternative is available, organizations should treat all AI models from untrusted sources as potentially malicious.
  2. Use Sandboxing: Do not load or deserialize untrusted pickle files on production systems or sensitive developer workstations. Use isolated, ephemeral environments (e.g., containers with no network access and read-only file systems) for initial model inspection.
  3. Seek Alternatives: Explore alternative model formats that have a safer deserialization process, such as safetensors (see the conversion sketch after this list).
  4. Monitor for Updates: Keep a close watch on the PickleScan project repository for any patches or mitigation guidance from the maintainers.
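
For weight-only checkpoints, migrating from pickle-based .pt files to safetensors is usually straightforward. Below is a minimal conversion sketch using the safetensors library, assuming a hypothetical legacy_model.pt that holds a plain state dict with no shared tensors; note that torch.load must still deserialize the legacy file, so run the conversion itself only in an isolated environment:

```python
# pip install torch safetensors
import torch
from safetensors.torch import load_file, save_file

# Deserializing the legacy checkpoint is the dangerous step; recent
# PyTorch versions also accept torch.load(..., weights_only=True)
# to restrict what the unpickler may resolve.
state_dict = torch.load("legacy_model.pt", map_location="cpu")

# safetensors stores raw tensor data only: no objects, no code paths.
tensors = {k: v.contiguous() for k, v in state_dict.items()
           if isinstance(v, torch.Tensor)}
save_file(tensors, "model.safetensors")

# Loading back performs no arbitrary deserialization.
restored = load_file("model.safetensors")
```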

Timeline of Events

  1. December 3, 2025: JFrog publicly discloses the three zero-day vulnerabilities in PickleScan.
  2. December 4, 2025: This article is published.

MITRE ATT&CK Mitigations

  • Loading untrusted AI models in a sandboxed, isolated environment is the most effective way to contain potential code execution.
  • Be highly selective about the sources of AI models, preferring official, signed models from trusted repositories over unverified ones.

D3FEND Defensive Countermeasures

Since static analysis with PickleScan has been shown to be bypassable, organizations must shift to dynamic analysis for untrusted models. Before a model is used, it should be loaded in a secure, isolated sandbox (e.g., a minimal Docker container with gVisor or a dedicated VM). This environment should have networking disabled and strict file system permissions. Monitor the deserialization process for any suspicious system calls, file I/O, or process creation attempts. If the model loading process triggers any behavior beyond expected memory allocation and computation, it should be flagged as malicious and rejected. This approach moves from trusting a scanner's verdict to a 'distrust and verify' model for AI supply chain security.
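
To make the isolation described above concrete, the sketch below drives Docker from Python. The image name, quarantine path, and limits are assumptions for illustration; where available, gVisor can be layered in with Docker's --runtime=runsc flag:

```python
import subprocess

IMAGE = "model-scanner:latest"  # hypothetical image with Python + torch
MODEL_DIR = "/srv/quarantine"   # hypothetical quarantine directory

try:
    result = subprocess.run(
        [
            "docker", "run", "--rm",
            "--network=none",   # no outbound connections
            "--read-only",      # immutable root filesystem
            "--cap-drop=ALL",   # drop all Linux capabilities
            "--pids-limit=64",  # cap process creation
            "-v", f"{MODEL_DIR}:/models:ro",
            IMAGE, "python", "-c",
            "import torch; torch.load('/models/suspect.pt', map_location='cpu')",
        ],
        capture_output=True, text=True, timeout=120,
    )
except subprocess.TimeoutExpired:
    result = None  # a hang during loading is itself suspicious

# Any timeout, non-zero exit, or unexpected stderr is grounds to reject.
if result is None or result.returncode != 0:
    print("model rejected:", result.stderr[:500] if result else "timeout")
```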

A key strategic mitigation is to reduce reliance on the inherently unsafe pickle format. Development and MLOps teams should prioritize migrating to safer model serialization formats such as safetensors, which is designed specifically to prevent arbitrary code execution during loading. Mandate the use of safetensors in your organization's MLOps policies for all new models, and create a conversion plan for existing ones. While this is a longer-term effort, it addresses the root cause of the problem rather than just trying to detect malicious pickles, hardening the application stack against an entire class of vulnerabilities.

Sources & References

"3 Zero Day Vulnerabilities Found in PickleScan," Australian Cyber Security Magazine (mysecuritymedia.com), December 3, 2025.

Article Author

Jason Gomes

• Cybersecurity Practitioner

Cybersecurity professional with over 10 years of specialized experience in security operations, threat intelligence, incident response, and security automation. Expertise spans SOAR/XSOAR orchestration, threat intelligence platforms, SIEM/UEBA analytics, and building cyber fusion centers. Background includes technical enablement, solution architecture for enterprise and government clients, and implementing security automation workflows across IR, TIP, and SOC use cases.

Threat Intelligence & Analysis · Security Orchestration (SOAR/XSOAR) · Incident Response & Digital Forensics · Security Operations Center (SOC) · SIEM & Security Analytics · Cyber Fusion & Threat Sharing · Security Automation & Integration · Managed Detection & Response (MDR)

Tags

Zero-Day · AI Security · MLOps · PyTorch · PickleScan · JFrog · Supply Chain Attack · RCE
