CVE-2026-22807
Chainguard vulnerability analysis and mitigation

vLLM is an inference and serving engine for large language models (LLMs). Starting in version 0.10.1 and prior to version 0.14.0, vLLM loads Hugging Face auto_map dynamic modules during model resolution without gating on trust_remote_code, allowing attacker-controlled Python code in a model repo/path to execute at server startup. An attacker who can influence the model repo/path (local directory or remote Hugging Face repo) can achieve arbitrary code execution on the vLLM host during model load. This happens before any request handling and does not require API access. Version 0.14.0 fixes the issue.
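To make the failure mode concrete, the sketch below simulates the pattern described above; it is not vLLM's or Transformers' actual code path. The repo layout, the evil_module.py/EvilConfig names, and the load_dynamic_module helper are all hypothetical, chosen for illustration. The point is that a repo's auto_map entry names a Python module shipped in the same repo, and importing that module executes its top-level statements, which is exactly what the trust_remote_code gate is meant to prevent.

```python
import importlib.util
import json
import pathlib
import tempfile

# --- attacker-controlled model repo (hypothetical contents) ---
repo = pathlib.Path(tempfile.mkdtemp())
(repo / "config.json").write_text(json.dumps({
    "model_type": "custom",
    # auto_map points at a class in a module shipped inside the repo.
    "auto_map": {"AutoConfig": "evil_module.EvilConfig"},
}))
(repo / "evil_module.py").write_text(
    # Top-level statements run at import time, before any class is used.
    'print("payload executed at import time")\n'
    "class EvilConfig:\n"
    "    pass\n"
)

def load_dynamic_module(repo: pathlib.Path, trust_remote_code: bool):
    """Hypothetical helper simulating the dynamic-module import that
    model resolution performs; not vLLM's real code."""
    if not trust_remote_code:
        raise ValueError("repo requires remote code; refusing to import")
    auto_map = json.loads((repo / "config.json").read_text())["auto_map"]
    module_name, _, class_name = auto_map["AutoConfig"].partition(".")
    spec = importlib.util.spec_from_file_location(
        module_name, repo / f"{module_name}.py"
    )
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)  # attacker-controlled code runs here
    return getattr(module, class_name)

# Patched behavior (0.14.0 and later): the gate blocks the import.
try:
    load_dynamic_module(repo, trust_remote_code=False)
except ValueError as exc:
    print(f"blocked: {exc}")

# Vulnerable behavior (0.10.1 through 0.13.x): the gate was not applied
# during model resolution, so the payload ran at server startup.
load_dynamic_module(repo, trust_remote_code=True)
```

Until an upgrade to 0.14.0 or later is possible, the practical mitigation is to load models only from repos and local paths you control or trust.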


Source: NVD

Related Chainguard vulnerabilities:

CVE ID         | Severity | Score | Technologies | Component name        | CISA KEV exploit | Has fix | Published date
CVE-2026-41242 | CRITICAL | 9.4   | JavaScript   | librechat             | No               | Yes     | Apr 18, 2026
CVE-2026-40260 | MEDIUM   | 6.9   | Python       | litellm               | No               | Yes     | Apr 17, 2026
CVE-2026-40293 | MEDIUM   | 6.5   | Wolfi        | openfga               | No               | Yes     | Apr 17, 2026
CVE-2026-40347 | MEDIUM   | 5.3   | Python       | vllm-openai-cuda-12.9 | No               | Yes     | Apr 18, 2026
CVE-2026-6491  | MEDIUM   | 4.8   | Wolfi        | libvips               | No               | No      | Apr 17, 2026
