Critical Remote Code Execution Vulnerabilities in AI/ML Libraries: NeMo, Uni2TS, FlexTok (2026)

AI Libraries Under Fire: Uncovering Remote Code Execution Vulnerabilities

Imagine a world where loading a seemingly harmless AI model could grant attackers full control over your system. Sounds like a sci-fi nightmare, right? Here's the unsettling part: it isn't fiction. Researchers have discovered critical vulnerabilities in popular AI/ML libraries from tech giants like Apple, Salesforce, and NVIDIA that allow remote code execution (RCE) through malicious model metadata.

The Culprits: Popular Libraries with a Dark Secret

These vulnerabilities lurk within three widely-used open-source Python libraries:

  • NeMo (NVIDIA): A powerful framework for building diverse AI models, boasting over 700 models on HuggingFace, including the popular Parakeet.
  • Uni2TS (Salesforce): A library powering Salesforce's Moirai, a time series forecasting model with hundreds of thousands of downloads.
  • FlexTok (Apple & EPFL VILAB): A framework enabling image processing in AI models, primarily used by EPFL VILAB's models.

The Vulnerability: Metadata Becomes a Weapon

The issue stems from how these libraries handle model metadata. Each uses the third-party configuration tool Hydra to instantiate Python classes named in that metadata. Vulnerable versions pass attacker-controlled metadata straight to this instantiation mechanism, which will import and call whatever target the metadata names. An attacker can therefore embed a malicious target in a model's metadata; when the compromised model is loaded, that code executes with the victim's privileges, granting the attacker control.
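To see why this pattern is dangerous, here is a minimal, self-contained sketch of Hydra-style instantiation using only the standard library. This is not Hydra's actual code or any of the affected libraries' code; the function name and config keys are illustrative, but they mirror the general idea of resolving a dotted path from untrusted metadata and calling it:

```python
import importlib

def naive_instantiate(config: dict):
    """Illustrative sketch of metadata-driven instantiation:
    import the dotted path under '_target_' and call it with the
    remaining keys as keyword arguments. NOT Hydra's real code."""
    module_path, _, attr = config["_target_"].rpartition(".")
    target = getattr(importlib.import_module(module_path), attr)
    kwargs = {k: v for k, v in config.items() if k != "_target_"}
    return target(**kwargs)

# Benign metadata instantiates a harmless class...
benign = {"_target_": "collections.Counter"}
counter = naive_instantiate(benign)

# ...but nothing stops metadata from naming a dangerous callable:
malicious = {"_target_": "subprocess.run", "args": ["echo", "pwned"]}
# naive_instantiate(malicious) would execute an attacker-chosen command.
```

Because the dotted path comes from the model file itself, whoever authored that file decides which Python callable runs on your machine.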

And this is the part most people miss: Even though newer model formats like safetensors aim to be secure, these libraries introduce vulnerabilities through their handling of metadata and configuration data.

The Fix: Patches and Awareness

The good news? All affected vendors have been notified and have released patches:

  • NVIDIA: Released a fix in NeMo 2.3.2 (CVE-2025-23304).
  • Salesforce: Deployed a fix on July 31, 2025 (CVE-2026-22584).
  • Apple & EPFL VILAB: Updated ml-flextok with YAML parsing and an allowlist for safer instantiation.
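The allowlist approach mentioned for ml-flextok can be sketched roughly as follows. This is a hypothetical illustration, not the library's actual implementation, and the `ALLOWED_TARGETS` entries are placeholders; the core idea is simply to refuse any target not on an explicit list:

```python
import importlib

# Hypothetical allowlist -- a real library would enumerate
# exactly the classes its configs are expected to name.
ALLOWED_TARGETS = {
    "collections.Counter",
    "collections.OrderedDict",
}

def safe_instantiate(config: dict):
    """Instantiate a target from metadata only if it is allowlisted."""
    target_path = config["_target_"]
    if target_path not in ALLOWED_TARGETS:
        raise ValueError(f"refusing untrusted target: {target_path}")
    module_path, _, attr = target_path.rpartition(".")
    target = getattr(importlib.import_module(module_path), attr)
    kwargs = {k: v for k, v in config.items() if k != "_target_"}
    return target(**kwargs)
```

With this guard in place, metadata naming `subprocess.run` or `os.system` is rejected before any import happens, closing the arbitrary-execution path.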

The Bigger Picture: A Call for Vigilance

While no malicious exploits have been detected yet, the potential for harm is real. Attackers could easily modify popular models, adding malicious metadata and distributing them as seemingly legitimate updates. This highlights the need for:

  • Strict model vetting: Only load models from trusted sources.
  • Robust security practices: Implement code reviews and vulnerability scanning for AI/ML pipelines.
  • Continued research: The AI security landscape is constantly evolving, requiring ongoing vigilance.
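One concrete form of the "strict model vetting" above is to pin a cryptographic digest for each model artifact you trust and verify it before loading. A minimal sketch using only the standard library (the function name and pinned value are illustrative):

```python
import hashlib

def verify_model(path: str, expected_sha256: str) -> bool:
    """Return True only if the file's SHA-256 digest matches the
    pinned value, so a tampered model is caught before loading."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in 1 MiB chunks so large model files don't fill memory.
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest() == expected_sha256
```

A digest check does not make unsafe deserialization safe, but it ensures the bytes you load are the bytes you audited, which blocks the "malicious update to a popular model" scenario described above.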

A Thought-Provoking Question: As AI becomes increasingly integrated into our lives, how can we ensure the security and trustworthiness of these powerful tools? Should there be stricter regulations or industry standards for AI model development and deployment? Let's spark a conversation in the comments below!

