CVE-2025-62372
Chainguard vulnerability analysis and mitigation

vLLM is an inference and serving engine for large language models (LLMs). In versions from 0.5.5 up to, but not including, 0.11.1, users could crash the vLLM engine serving multimodal models by passing multimodal embedding inputs with the correct number of dimensions (ndim) but an incorrect shape (e.g., a wrong hidden dimension), regardless of whether the model is intended to support such inputs (as defined in the Supported Models page). This issue has been patched in version 0.11.1.
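The failure mode can be illustrated with a minimal sketch (this is not vLLM's actual code; `HIDDEN_SIZE` and `validate_embedding` are hypothetical names): an ndim-only check accepts a 2-D embedding whose hidden dimension does not match the model, and the mismatch only surfaces later as an unrecoverable crash deep inside the model. The patched behavior, conceptually, is to reject the bad shape at the API boundary with a recoverable error.

```python
import numpy as np

# Hypothetical illustration, not vLLM's implementation: a multimodal
# embedding input is expected to be a 2-D array of shape
# (num_tokens, hidden_size).
HIDDEN_SIZE = 4096  # assumed model hidden dimension

def validate_embedding(emb: np.ndarray) -> np.ndarray:
    """Sketch of boundary validation: check shape, not just ndim."""
    if emb.ndim != 2:
        raise ValueError(f"expected a 2-D embedding, got ndim={emb.ndim}")
    if emb.shape[-1] != HIDDEN_SIZE:
        # A check like this turns the crash into a recoverable request error.
        raise ValueError(
            f"expected hidden size {HIDDEN_SIZE}, got {emb.shape[-1]}"
        )
    return emb

good = np.zeros((16, HIDDEN_SIZE))
bad = np.zeros((16, 1024))  # correct ndim (2), wrong hidden dimension

validate_embedding(good)  # accepted
try:
    validate_embedding(bad)
except ValueError as exc:
    print("rejected:", exc)
```

An ndim-only check would let `bad` through; validating the full shape up front is what distinguishes a rejected request from an engine crash.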


Source: NVD

Related Chainguard vulnerabilities:

| CVE ID         | Severity | Score | Technologies | Component name                | CISA KEV exploit | Has fix | Published date |
|----------------|----------|-------|--------------|-------------------------------|------------------|---------|----------------|
| CVE-2025-66471 | HIGH     | 8.9   | Python       | py3-urllib3                   | No               | Yes     | Dec 05, 2025   |
| CVE-2025-66418 | HIGH     | 8.9   | Python       | python-urllib3                | No               | Yes     | Dec 05, 2025   |
| CVE-2025-66506 | HIGH     | 7.5   | Podman       | golang-github-sigstore-fulcio | No               | Yes     | Dec 04, 2025   |
| CVE-2025-66490 | MEDIUM   | 6.9   | Wolfi        | traefik                       | No               | Yes     | Dec 09, 2025   |
| CVE-2025-66491 | MEDIUM   | 5.9   | Wolfi        | github.com/traefik/traefik/v3 | No               | Yes     | Dec 09, 2025   |
