What is AI threat intelligence?
AI threat intelligence is the practice of understanding, tracking, and operationalizing threats that target AI systems – along with using advanced analytics to scale how that intelligence is produced and applied. At its core, it focuses on how attackers abuse, compromise, or exploit AI models, data pipelines, and the cloud infrastructure that supports them.
This distinguishes AI threat intelligence from adjacent disciplines like threat detection or SOC automation. While detection focuses on identifying suspicious activity as it occurs, threat intelligence is concerned with patterns, techniques, and trends – how threats evolve over time, which systems they target, and what conditions make those attacks viable in real environments.
AI systems require this distinction because they introduce assets and trust assumptions that don’t exist in traditional applications. Models, training data, inference endpoints, and GPU-backed workloads become attack surfaces of their own. These components are often highly automated, rely on non-human identities, and interact with sensitive data in ways that make traditional threat intelligence feeds insufficient on their own.
At the same time, AI also plays a role in how threat intelligence is produced. Machine learning and automation help collect, normalize, and analyze large volumes of telemetry across cloud environments. In this article, however, the focus is first and foremost on threat intelligence for AI systems – how to understand the threats targeting AI infrastructure – before exploring how AI-powered analytics help scale that work.
AI Security Readiness: Insights from 100 Cloud Architects, Engineers, and Security Leaders
Wiz and Gatepoint Research surveyed 100 cloud and security professionals across several roles on the state of AI security in their organizations.

Why AI systems require dedicated threat intelligence
AI systems change how risk shows up in cloud environments. The issue is not so much that AI introduces entirely new classes of attackers as that it reshapes the assets, trust boundaries, and assumptions security teams rely on when assessing threats.
In AI environments, models, training pipelines, inference endpoints, and supporting infrastructure become long-lived, high-value targets. These components often operate continuously, rely on non-human identities, and interact directly with sensitive data. As a result, threats that might have been low-impact or short-lived in traditional applications can persist and scale when applied to AI systems.
Threat intelligence becomes especially important because many AI failures are not the result of a single exploit. Instead, they emerge from combinations of conditions: exposed services, overly permissive identities, unvetted dependencies, or insecure data access. Without visibility into how these pieces connect, security teams may understand individual risks but miss how they come together in practice.
Most traditional threat intelligence feeds and frameworks focus on malware families, phishing campaigns, or endpoint compromise, offering limited insight into how attackers target AI infrastructure, model artifacts, or training data. As organizations deploy AI more broadly, this gap becomes harder to ignore.
Dedicated AI threat intelligence fills that gap by focusing on where AI systems are exposed, how they can be abused, and which attack techniques are relevant in cloud-based AI environments. It helps teams move beyond generic indicators and toward a clearer understanding of how AI systems are actually attacked in the wild.
Real-world AI threat intelligence from Wiz Research
AI threat intelligence is most effective when it reflects how attackers actually operate in production environments. Wiz Research focuses on uncovering these patterns by analyzing cloud-native AI infrastructure, identity usage, software dependencies, and misconfigurations observed across real customer environments.
Rather than treating AI threats as novel or speculative, recent research shows that many AI-related risks stem from familiar cloud security failure modes – applied to systems that are highly automated, data-rich, and often broadly permissioned.
Exposure of AI infrastructure and data
One of the most consistent findings across Wiz Research is the prevalence of exposed AI infrastructure. Training datasets, model artifacts, inference logs, and supporting databases are frequently deployed in cloud environments with overly permissive network access or missing authentication controls.
Several investigations uncovered publicly accessible AI-related data stores containing sensitive information, including proprietary datasets and credentials. Examples include:
Wiz Research uncovers exposed DeepSeek database leak
https://www.wiz.io/blog/wiz-research-uncovers-exposed-deepseek-database-leak
Forbes AI 50 companies leaking secrets in public code
https://www.wiz.io/blog/forbes-ai-50-leaking-secrets
For example, when Wiz Research scanned the public repositories of the Forbes AI 50 companies (excluding a few without a GitHub presence), the result was striking: about two-thirds of the AI companies analyzed had a verified secrets leak.
These incidents demonstrate that attackers often do not need to exploit models directly; instead, they capitalize on exposed storage, misconfigured services, or forgotten development environments surrounding AI systems.
From a threat intelligence perspective, this reinforces a critical insight: AI systems expand the attack surface primarily through their infrastructure dependencies, not through the models themselves.
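To make this failure mode concrete, the sketch below shows the kind of check a team might run to flag storage buckets holding training data or model artifacts that are readable by the public. It is a minimal illustration using boto3 against hypothetical bucket names, not a reproduction of the research above.

```python
# Minimal sketch: flag S3 buckets (e.g., ones holding training data or model
# artifacts) whose policy status or public-access-block settings allow public
# reads. Bucket names are hypothetical; adapt the check to your own inventory.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")
ai_buckets = ["example-training-data", "example-model-artifacts"]  # hypothetical

for bucket in ai_buckets:
    try:
        status = s3.get_bucket_policy_status(Bucket=bucket)
        is_public = status["PolicyStatus"]["IsPublic"]
    except ClientError:
        is_public = False  # no bucket policy attached

    try:
        block = s3.get_public_access_block(Bucket=bucket)
        fully_blocked = all(block["PublicAccessBlockConfiguration"].values())
    except ClientError:
        fully_blocked = False  # no public access block configured

    if is_public or not fully_blocked:
        print(f"[!] {bucket}: review exposure (public policy or missing access block)")
```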
Vulnerabilities in AI runtimes and inference infrastructure
AI workloads commonly rely on containerized runtimes, GPU-backed services, and inference servers that operate at elevated privilege levels. Wiz Research has identified vulnerabilities in these components that mirror traditional cloud security risks – but with amplified impact due to shared infrastructure and automation.
A notable example is CVE-2025-23266 (NVIDIAScape), a critical container escape vulnerability discovered by Wiz in the NVIDIA Container Toolkit, which underpins many AI services offered by cloud and SaaS providers. The vulnerability allows a malicious container to bypass isolation controls and gain root access to the host system, significantly expanding the blast radius in environments running AI workloads.
Related research includes:
NVIDIA Triton inference server vulnerability chain (CVE-2025-23319)
https://www.wiz.io/blog/nvidia-triton-cve-2025-23319-vuln-chain-to-ai-server
NVIDIA AI vulnerability deep dive (CVE-2024-0132)
https://www.wiz.io/blog/nvidia-ai-vulnerability-deep-dive-cve-2024-0132
These findings highlight why AI threat intelligence must track vulnerabilities in AI infrastructure components alongside model-level concerns.
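Vulnerabilities like these matter most where container isolation is the only boundary between workloads. The sketch below, using the official Kubernetes Python client, illustrates one way a team might surface GPU workloads running with weak isolation; the GPU-resource heuristic is an assumption for illustration, not part of the Wiz research above.

```python
# Minimal sketch: list pods that request GPUs and run with weak isolation
# (privileged containers or hostPID), where a container escape would have a
# larger blast radius. The "nvidia.com/gpu" resource name follows the common
# device-plugin convention; treat the heuristic as illustrative.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config()
v1 = client.CoreV1Api()

for pod in v1.list_pod_for_all_namespaces().items:
    spec = pod.spec
    uses_gpu = any(
        c.resources.limits.get("nvidia.com/gpu")
        for c in spec.containers
        if c.resources and c.resources.limits
    )
    privileged = any(
        c.security_context and c.security_context.privileged
        for c in spec.containers
    )
    if uses_gpu and (privileged or spec.host_pid):
        print(f"[!] {pod.metadata.namespace}/{pod.metadata.name}: "
              "GPU workload with privileged container or hostPID")
```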
Leaked secrets and non-human identity abuse
AI pipelines depend heavily on automation and non-human identities, including service accounts, API tokens, and OAuth integrations. Wiz Research has repeatedly observed exposed credentials embedded in public repositories or misconfigured cloud environments – many of them tied directly to AI services and model access.
Examples include:
Leaking AI secrets in public code
https://www.wiz.io/blog/leaking-ai-secrets-in-public-code
In these cases, attackers do not need to compromise AI systems directly. Abusing leaked secrets or over-privileged service accounts grants them the same level of access that trusted AI workflows already have, enabling data access, model manipulation, or infrastructure changes.
This aligns closely with established cloud identity attack patterns, but the impact is magnified in AI environments due to continuous execution and broad access requirements.
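As a simple illustration of how such leaks are typically found, the sketch below scans a local repository checkout for a few well-known credential patterns. The patterns are deliberately coarse and purely illustrative; production secret scanners use far larger rule sets and verify candidate secrets before reporting them.

```python
# Minimal sketch: walk a repository checkout and flag lines matching a few
# coarse credential patterns. Real secret scanners use much larger rule sets
# and verify matches to cut false positives.
import re
from pathlib import Path

PATTERNS = {
    "AWS access key ID": re.compile(r"AKIA[0-9A-Z]{16}"),
    "Generic API key assignment": re.compile(
        r"(?i)(api[_-]?key|secret|token)\s*[:=]\s*['\"][A-Za-z0-9_\-]{20,}['\"]"
    ),
}

def scan_repo(root: str) -> None:
    for path in Path(root).rglob("*"):
        if not path.is_file() or path.stat().st_size > 1_000_000:
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for lineno, line in enumerate(text.splitlines(), start=1):
            for name, pattern in PATTERNS.items():
                if pattern.search(line):
                    print(f"[!] {path}:{lineno}: possible {name}")

scan_repo(".")
```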
AI supply chain abuse and delegated trust
AI systems increasingly rely on external dependencies, including open-source packages, pretrained models, developer tools, and third-party APIs. When these components are invoked automatically at runtime, trust decisions that were once reviewed by humans become embedded into execution paths.
The s1ngularity supply chain attack illustrates this shift. Attackers compromised an npm publishing token for widely used Nx packages and distributed malicious versions that leveraged AI command-line tools such as Claude, Q, and Gemini. These tools were used to search for and extract sensitive credentials using LLM-assisted prompts, accelerating reconnaissance once trust boundaries were breached.
Coverage: The s1ngularity supply chain attack
https://www.bleepingcomputer.com/news/security/s1ngularity-npm-supply-chain-attack/
This incident underscores a growing AI supply chain risk: automation and AI-enabled tooling can dramatically increase the speed and scale of compromise once dependencies are poisoned.
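One practical response is to treat dependency publishes themselves as signals. The sketch below queries the public npm registry for a package's version history and flags versions published very recently, a coarse heuristic that would not have prevented the s1ngularity compromise by itself but shows how supply chain telemetry can feed threat intelligence. The watchlist and freshness threshold are assumptions for illustration.

```python
# Minimal sketch: flag npm package versions published within the last few days.
# Very recent publishes to critical dependencies deserve a closer look before
# automated pipelines pick them up. Threshold and package list are illustrative.
from datetime import datetime, timedelta, timezone
import requests

WATCHLIST = ["nx"]           # packages your pipelines depend on (example)
MAX_AGE = timedelta(days=3)  # illustrative freshness threshold

for pkg in WATCHLIST:
    meta = requests.get(f"https://registry.npmjs.org/{pkg}", timeout=30).json()
    now = datetime.now(timezone.utc)
    for version, published in meta.get("time", {}).items():
        if version in ("created", "modified"):
            continue
        ts = datetime.fromisoformat(published.replace("Z", "+00:00"))
        if now - ts < MAX_AGE:
            print(f"[!] {pkg}@{version} published {now - ts} ago - review before use")
```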
Emerging risk patterns in AI-assisted development
Wiz Research has examined security risks emerging from AI-assisted development practices, often referred to as “vibe coding,” where developers rely heavily on AI tools to generate application logic with minimal manual review. In an analysis of real-world applications built using these workflows, Wiz found that approximately 20% of vibe-coded apps contained serious security issues, most commonly related to authentication and authorization logic.
Rather than introducing novel exploit techniques, these applications tended to repeat the same failure modes at scale – such as missing access controls, client-side–only authentication checks, or inconsistent identity enforcement. Because AI-generated code is often reused across projects, these weaknesses can propagate quickly across multiple applications.
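The pattern is easy to picture. The hedged sketch below contrasts a route that trusts a client-supplied role flag, the kind of check AI-generated scaffolding often produces, with one that derives authorization from server-side session state. It is a generic Flask illustration, not code from the applications Wiz analyzed.

```python
# Generic illustration of the failure mode, not code from any analyzed app.
from flask import Flask, request, session, abort, jsonify

app = Flask(__name__)
app.secret_key = "change-me"  # placeholder for a real secret

# Anti-pattern: authorization driven by a client-controlled field.
@app.route("/admin/insecure-report")
def insecure_report():
    if request.args.get("is_admin") == "true":  # attacker simply sets ?is_admin=true
        return jsonify({"report": "sensitive data"})
    abort(403)

# Safer: authorization derived from server-side session state set at login.
@app.route("/admin/report")
def report():
    if session.get("role") != "admin":
        abort(403)
    return jsonify({"report": "sensitive data"})
```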
What these findings mean for AI threat intelligence
Taken together, these findings ground AI threat intelligence in observed behavior rather than theoretical misuse. They show that AI-related risk is rarely isolated to models alone. Instead, it emerges at the intersection of infrastructure exposure, identity misuse, software supply chain trust, and automation.
Effective AI threat intelligence therefore depends on understanding where AI systems run, how they are connected, and which failures attackers are most likely to exploit—not on predicting model behavior or adversarial prompts in isolation.
Threat categories targeting AI systems
Threats targeting AI systems tend to cluster around a small number of recurring patterns. While the techniques themselves are often familiar from broader cloud security incidents, they manifest differently when applied to models, training pipelines, and AI infrastructure.
Understanding these categories helps threat intelligence teams focus on how AI systems are actually attacked in practice, rather than treating AI as a purely theoretical risk.
Attacks against AI infrastructure
Many AI environments rely on complex cloud infrastructure, including managed AI services, GPU-backed compute, inference servers, and orchestration layers. When these components are exposed or misconfigured, they become attractive entry points for attackers.
Threat intelligence in this category focuses on vulnerabilities in AI runtimes, inference servers, and supporting services, as well as cloud misconfigurations that expose AI workloads to untrusted networks. Research has repeatedly shown that attackers target AI infrastructure not because it is unique, but because it is often powerful, expensive to run, and insufficiently hardened.
Model and data compromise
AI models and their training data represent high-value assets. Threats in this category include unauthorized access to model artifacts, exposure of sensitive training datasets, and opportunities to influence or tamper with data used during training or retraining.
Rather than assuming widespread, automated model poisoning, effective threat intelligence looks for the conditions that make compromise possible – such as overly permissive access to data stores, insecure model registries, or exposed training environments. These failures mirror traditional data security issues, but the downstream impact on AI systems can be harder to detect.
Identity and access abuse in AI environments
AI systems depend heavily on non-human identities. Service accounts, roles, and tokens are commonly used to automate training, deployment, and inference workflows. When these identities are over-privileged or poorly managed, they become a primary attack vector.
Threat intelligence in this area tracks how identity abuse techniques – such as token leakage, OAuth misconfigurations, or credential reuse – apply to AI workloads. The risk is not new, but the impact is amplified by the continuous and autonomous nature of AI systems.
AI supply chain threats
AI development frequently involves third-party components, including pretrained models, open-source frameworks, and external APIs. While these dependencies accelerate development, they also expand the attack surface.
Supply chain–focused threat intelligence examines how compromised models, malicious libraries, or insecure integrations can propagate through AI pipelines. In AI environments, these risks are often harder to spot because dependencies are consumed programmatically and deployed quickly, leaving little opportunity for manual review.
Detect active cloud threats
Learn how Wiz Defend detects active threats using runtime signals and cloud context—so you can respond faster and with precision.

From research to reality: how AI threat intelligence becomes actionable
Threat intelligence only delivers value when it can be applied to real environments. Research into AI-related attacks may identify techniques, vulnerabilities, or emerging patterns, but without context, that information is difficult for security teams to act on.
The challenge is that AI threats rarely map cleanly to a single indicator or control failure. A vulnerability in an inference server, an exposed training dataset, or an over-permissioned service account may each appear manageable in isolation. It’s only when these conditions are connected that real risk emerges.
Making AI threat intelligence actionable requires translating research findings into questions security teams can answer in their own environments.
Which AI services are exposed?
Which identities can access sensitive training data?
Where do model artifacts intersect with external dependencies or untrusted networks?
These are infrastructure and access questions, not abstract AI concerns. This is where context becomes critical. AI threat intelligence links observed attack techniques to the cloud resources, identities, and data paths that make exploitation possible. Instead of treating intelligence as a static feed of indicators, it becomes a way to prioritize remediation based on how threats would realistically unfold in a given environment.
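These questions translate directly into queries against cloud configuration. As one hedged example, the sketch below enumerates AWS security groups and flags rules that expose commonly used inference ports to the internet; the port list is an assumption for illustration and should be adapted to the services actually in use.

```python
# Minimal sketch: flag security groups that allow 0.0.0.0/0 ingress to ports
# commonly used by inference servers. The port list is illustrative only.
import boto3

INFERENCE_PORTS = {8000, 8001, 8002, 11434}  # e.g., Triton HTTP/gRPC/metrics, Ollama

ec2 = boto3.client("ec2")
for sg in ec2.describe_security_groups()["SecurityGroups"]:
    for rule in sg.get("IpPermissions", []):
        from_port, to_port = rule.get("FromPort"), rule.get("ToPort")
        if from_port is None:
            continue
        open_to_world = any(
            r.get("CidrIp") == "0.0.0.0/0" for r in rule.get("IpRanges", [])
        )
        exposed = {p for p in INFERENCE_PORTS if from_port <= p <= to_port}
        if open_to_world and exposed:
            print(f"[!] {sg['GroupId']} exposes ports {sorted(exposed)} to the internet")
```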
By grounding research in operational context, security teams can move from awareness to action – focusing their efforts on the AI systems and attack paths that matter most, rather than chasing theoretical risks or generic alerts.
The role of AI-powered analytics in threat intelligence operations
While AI threat intelligence focuses on understanding threats that target AI systems, advanced analytics play an important supporting role in how that intelligence is produced and applied at cloud scale.
Modern cloud and AI environments generate enormous volumes of telemetry. Logs, configuration data, access events, and network signals change continuously as models are trained, deployed, and updated. AI-powered analytics help threat intelligence teams process this data efficiently – collecting signals from disparate sources, normalizing them, and identifying patterns that would be difficult to spot manually.
Used correctly, these techniques accelerate threat intelligence workflows rather than replace them. Machine learning can help surface correlations, reduce noise, and highlight anomalies, but interpretation still depends on human judgment and domain knowledge. This is especially important for AI systems, where distinguishing between expected automation and genuine abuse requires contextual understanding.
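As a narrow example of where machine learning fits, the sketch below applies an isolation forest to simple features derived from access events to surface unusual activity for analyst review. The features, sample data, and contamination rate are assumptions for illustration, and flagged events still require human triage.

```python
# Minimal sketch: score access events with an isolation forest and surface the
# most unusual ones for analyst review. Flagged events are leads, not verdicts.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row: [hour_of_day, bytes_read_MB, distinct_resources_touched]
events = np.array([
    [10, 5, 2], [11, 7, 3], [14, 6, 2], [15, 4, 1], [9, 8, 3],
    [3, 900, 40],   # off-hours bulk read across many resources
])

model = IsolationForest(contamination=0.1, random_state=0).fit(events)
scores = model.decision_function(events)  # lower = more anomalous
for event, score, label in zip(events, scores, model.predict(events)):
    if label == -1:
        print(f"[!] anomalous access event {event.tolist()} (score {score:.3f})")
```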
AI-powered analytics are most effective when applied to scale and prioritization problems. They help teams track emerging attack techniques, identify recurring misconfigurations across environments, and focus investigative effort where it is most likely to matter. They do not eliminate the need for research-driven intelligence or cloud context; they make those inputs usable across complex, fast-moving environments.
Framed this way, AI becomes an enabler of threat intelligence – not its definition. The goal remains the same: understand how attackers operate, which AI systems are exposed, and where defenses should be strengthened before incidents occur.
How to Prepare for a Cloud Cyberattack: An Actionable Incident Response Plan Template
A quickstart guide to creating a robust incident response plan - designed specifically for companies with cloud-based deployments.

How Wiz operationalizes AI threat intelligence
Wiz operationalizes AI threat intelligence by grounding research-driven insights in real cloud environments. Rather than treating AI threats as abstract concepts or relying solely on indicators, Wiz focuses on the concrete conditions that determine whether an AI-related threat can actually be exploited.
At the foundation of this approach is the Wiz Security Graph, which continuously maps cloud resources and their relationships – including identities, permissions, network exposure, and data access. AI systems are treated as first-class cloud assets within this model, covering managed AI services, notebooks, training pipelines, model storage, inference endpoints, and the infrastructure they depend on.
Wiz Research plays a critical role by identifying real-world attacker behavior affecting AI environments. This includes exposed AI data stores, leaked model secrets, misused non-human identities, and vulnerabilities in AI infrastructure. These findings inform detection logic and risk modeling, ensuring that AI threat intelligence reflects observed cloud failure modes rather than theoretical abuse scenarios.
Wiz AI Security Posture Management (AI-SPM) connects this intelligence to operational risk by correlating AI-specific issues with cloud context. Instead of flagging threats in isolation, Wiz helps teams understand why a particular AI threat matters in their environment – for example, when an exposed AI service runs under an over-privileged identity with access to sensitive data.
By mapping AI threats to actual cloud assets, identities, and data paths, Wiz enables security teams to prioritize remediation based on realistic attack paths and business impact. This approach shifts AI threat intelligence from passive awareness to actionable insight, without requiring teams to interpret model internals or predict attacker intent.