AI Cyberattacks: How attackers target AI, and use AI against you

Wiz Experts Team
Main takeaways about AI cyberattacks:
  • AI is expanding the attack surface across models, data, pipelines, SaaS agents, and cloud infrastructure. Attackers exploit these new edges and use AI to enhance traditional techniques.

  • Real-world threats now include AI-invoking malware, misconfigured AI services, leaked AI secrets, compromised model APIs, poisoned datasets, and insecure AI-generated code.

  • Cloud environments face the highest AI risk due to self-hosted models, distributed training data, shadow AI tooling, and complex identity paths across managed AI services.

  • Defending against AI cyberattacks requires mapping your AI attack surface, securing AI pipelines end-to-end, monitoring model behavior and data access, and applying cloud context to AI risks.

  • Wiz provides unified AI Security Posture Management (AI-SPM) through Wiz Code and the Wiz Security Graph – helping organizations discover AI assets, detect misconfigurations, secure agents and APIs, and stop AI-driven attacks across cloud environments.

What are AI cyberattacks?

AI cyberattacks are threats that either target AI systems – models, pipelines, agents, APIs, and the sensitive data behind them – or use AI to enhance or automate traditional attack techniques.

These attacks differ from traditional cyber threats in scale and autonomy. Attackers can now automate reconnaissance, generate exploits, bypass safety guardrails, manipulate AI agents, or poison training data across distributed cloud environments. Wiz Research has highlighted this shift across multiple investigations, including its AI attack surface mapping and its analysis of insecure vibe-generated app code.

Wiz has also demonstrated how the AI ecosystem introduces new patterns of exposure, including:

These risks stack on top of existing cloud vulnerabilities, creating a more interconnected and harder-to-map attack surface.

AI cyberattacks therefore fall into three broad buckets:

  1. Attacks that target AI systems directly (e.g., model tampering, agent manipulation, data poisoning, GPU runtime exploits).

  2. Attacks that use AI to supercharge traditional techniques (e.g., polymorphic malware, exploit generation, spear phishing).

  3. Risks created accidentally through AI adoption – misconfigurations, insecure AI-generated code, unmanaged agents, and shadow AI services.

Put simply: AI hasn’t replaced existing cyberattacks. It has multiplied them – and accelerated them.

How AI-powered attacks work

Attackers are no longer just using AI – they’re weaving it into every stage of the kill chain. AI gives adversaries speed, scale, stealth, and automation that weren’t possible with manual tooling.

1.AI accelerates reconnaissance and vulnerability discovery

Modern attackers use LLMs and code-generation models to accelerate reconnaissance, analyze cloud architectures, and discover misconfigurations or exploitable paths at scale. Wiz has observed malware families that offload system analysis and command generation to AI models in real time – like the LameHug malware, which used prompts to inventory systems and exfiltrate data via Hugging Face.

2. AI generates better phishing, social engineering, and deception

LLMs can rapidly produce native-language phishing emails without grammatical tells, craft personalized spear-phishing based on open-source intelligence, and generate deepfake audio or video for convincing business email compromise. Attackers can iterate thousands of variants to evade filters – something impossible before generative models.

3. AI manipulates the systems meant to protect you

Adversarial prompts can coerce models or agents into revealing sensitive information, ignoring guardrails, or taking harmful actions. Wiz’s research into agent frameworks such as MCP shows how over-privileged AI agents can be tricked into executing unintended tool actions, especially when exposed to untrusted content or weak server configurations.

4. AI automates exploit development and lateral movement

Attackers can use AI to chain cloud misconfigurations, identity weaknesses, and vulnerable components into complete attack paths. With models that can write code or reverse-engineer logic, exploit generation becomes faster and more accessible – even for entry-level attackers.

5. AI helps attackers evade traditional detection

Adversarial inputs, polymorphic malware, and AI-altered payloads can bypass traditional ML-based security products by triggering misclassifications. Wiz documented malware that dynamically generated commands through AI to vary its behavior, making static signatures ineffective.

6. AI weaponizes the software supply chain

The AI ecosystem relies heavily on open-source packages, prebuilt containers, and third-party frameworks. Research into incidents like the Base44 vulnerability and s1ngularity NPM compromise shows how attackers target AI dev ecosystems to poison downstream applications.

Bottom line: AI dramatically increases attacker capability. It allows adversaries to scale operations, write better exploits, evade defenses, and compromise AI systems that organizations don’t yet know how to secure.

Real-world examples of AI-powered cyberattacks

AI-driven attacks are no longer theoretical. Wiz Research and the broader security community have observed attackers using AI to enhance operations – and targeting AI systems themselves in the wild. Here are some real-world examples that illustrate how these threats manifest.

Malware using AI as its command engine

Wiz uncovered the LameHug malware family, which didn’t carry static payloads. Instead, it issued prompts to Hugging Face models at runtime to generate reconnaissance commands and exfiltrate sensitive documents.

This marked one of the first known cases of malware outsourcing its logic to an LLM, making traditional detection significantly harder.

AI agent compromise in a public VS Code extension

Wiz’s investigation into the Amazon Q Developer extension compromise showed how an attacker embedded malicious instructions into an AI agent’s toolchain. The AI agent then used its granted permissions to wipe files and cloud resources when triggered.

This demonstrated how small prompt manipulations – hidden inside development tools – can escalate into destructive real-world actions.

Widespread leakage of AI secrets across GitHub

In analysis of leading AI organizations, Wiz found that 65% of Forbes AI 50 companies had leaked AI-related secrets – including model keys, access tokens, service account credentials, and training infrastructure keys.

These credentials could be used to:

  • extract proprietary models

  • access inference APIs

  • tamper with training pipelines

  • access sensitive underlying data

It’s one of the clearest real-world examples of AI supply chain exposure at scale.

Vulnerabilities in AI infrastructure powering name-brand platforms

Multiple Wiz Research investigations revealed severe vulnerabilities in widely deployed AI runtime environments:

These incidents show how the AI compute layer introduces new high-impact attack paths.

Insecure AI-generated code leading to exploitable applications

Wiz analyzed real applications built with “vibe coding” tools and found 20% contained material security issues, including broken access controls and unprotected data endpoints.

This is the real-world outcome of what many organizations fear: AI accelerates coding – but also accelerates vulnerabilities.

Get the 2025 Report on AI in the Cloud

AI adoption is exploding, but so are the risks. See what Wiz Research uncovered about DeepSeek, self-hosted models, and emerging threats in this must-read security report.

Why AI cyberattacks pose unique risks to cloud environments

AI systems don’t live in isolation – they run on top of cloud infrastructure, identity systems, data lakes, container platforms, GPU runtimes, and CI/CD pipelines. That creates an attack surface far larger and more complex than traditional on-premises ML stacks. Wiz Research has repeatedly shown that AI adoption amplifies existing cloud risks and introduces entirely new ones.

Below are the core reasons AI attacks are uniquely dangerous in cloud environments.

1. AI dramatically expands the cloud attack surface

Modern AI applications run across:

  • managed services like SageMaker, Bedrock, Vertex AI, and Azure ML

  • vector databases, feature stores, and model registries

  • ephemeral GPU-backed workloads

  • autonomous agent frameworks

  • SaaS tools that generate or process code

Wiz’s research on the AI attack surface highlights how this creates multiple new entry points—model endpoints, agent toolchains, model-serving containers, and data pipelines—all of which attackers can target.

2. Training and inference data is highly distributed (and often exposed)

Most AI workloads rely on large datasets spread across:

  • S3 buckets

  • Blob stores

  • unmanaged snapshots

  • data lakes in shadow environments

  • dev/test pipelines

  • self-hosted model artifacts

The State of Cloud AI Report found that training data sprawl is accelerating, with self-hosted model deployments jumping from 42% to 75%. This decentralized data footprint increases the likelihood of poisoning, leakage, and unauthorized access.

3. AI workloads rely on complex, privilege-heavy IAM paths

AI jobs often require:

  • broad permissions to read training datasets

  • write access to model stores

  • permissions to invoke GPU workloads

  • cross-account or cross-service access

  • agent tool permissions to run code, modify files, or call APIs

Wiz’s research into MCP agent security and AI agents with over-privileged tools shows how these identity chains become high-impact lateral movement paths if compromised.

A single leaked AI agent key or overly permissive service role can expose entire pipelines.

4. Rapid AI deployment cycles amplify misconfigurations

AI models are updated frequently – sometimes daily – as teams:

  • retrain on new data

  • deploy new versions of agents

  • swap model backends

  • integrate new LLM providers

  • experiment with RAG pipelines

This “ship fast, iterate faster” culture creates drift across:

  • model endpoints

  • inference servers

  • agent tools

  • vector databases

  • cloud IAM policies

When AI infrastructure moves quickly, attackers exploit the mistakes left behind.

5. Shadow AI and unsanctioned tools introduce silent risk

Developers increasingly adopt:

  • unmanaged AI coding assistants

  • unofficial RAG pipelines

  • local LLMs

  • quick-start inference servers

  • AI agents with broad local privileges

Many of these tools never reach security’s radar.

Cloud AI makes organizations faster, but also more exposed. The combination of distributed data, fast-moving pipelines, privileged AI agents, and vulnerable GPU runtimes creates a threat landscape where AI cyberattacks can cause outsized blast radius with minimal attacker effort.

Defending against AI-enhanced threats

AI risks span data, models, pipelines, identities, agents, and cloud infrastructure. Defending against them requires more than traditional AppSec, DLP, or SOC tooling. Organizations need full visibility into their AI attack surface, continuous posture management, and guardrails that cover the entire model lifecycle – from training data to runtime inference.

Below are the essential defenses, aligned to real attack patterns surfaced by Wiz Research.

1. Inventory every AI asset across cloud environments

You cannot secure what you don’t know exists. Create a real-time inventory of:

  • managed AI services (SageMaker, Bedrock, Vertex AI, Azure ML)

  • self-hosted and containerized models

  • vector databases and feature stores

  • model endpoints and APIs

  • AI agents, MCP servers, and toolchains

  • training datasets and snapshots

AI-BOM (AI Bill of Materials) automates this discovery—mapping every model, dataset, endpoint, agent, and dependency across multi-cloud environments so teams can finally see their full AI footprint.

2. Enforce least privilege for AI workloads and agents

AI systems frequently require broad permissions, creating overpowered identities and dangerous lateral movement paths.

Best practices include:

  • tightly scoped IAM roles for training and inference

  • per-model or per-agent access boundaries

  • network segmentation between model endpoints and data stores

  • tool-level permissions for AI agents (MCP, LangChain, custom frameworks)

3. Secure the AI data lifecycle

Data feeds the entire AI pipeline – and is often the easiest attack vector.

Implement:

  • data classification (sensitive, regulated, proprietary)

  • access monitoring for unusual reads or spikes

  • drift detection to identify unexpected data sources

  • lineage tracking to understand how data flows into and through models

4. Protect MLOps pipelines and build systems

MLOps systems are becoming the new software supply chain.

Secure them by:

  • scanning repositories for secrets, model keys, and credentials

  • validating models and artifacts before deployment

  • isolating training and build environments

  • using code-scanning to detect insecure AI-generated code patterns

5. Monitor model behavior and AI service activity

Traditional monitoring doesn't work for AI systems. Instead, organizations should:

  • track model output patterns for drift or manipulation

  • monitor inference API usage (spikes, abnormal tokens, suspicious inputs)

  • detect high-risk prompts targeting jailbreaks or injection

  • alert on anomalous data access by agents or pipelines

Model and agent misuse frequently happens before an attacker steals data-making behavioral visibility critical.

6. Validate inputs and outputs with guardrails

Guardrails are essential to prevent:

  • prompt injection

  • cross-domain agent misuse

  • model hallucinations with security impact

  • harmful tool invocation

Implement:

  • input filtering

  • content sanitization

  • output validation

  • policy-as-code for AI agent tools

  • human approval for high-risk actions

7. Test AI systems with red teaming & adversarial evaluation

AI requires new forms of testing:

  • prompt injection testing

  • jailbreak and safety bypass scenarios

  • evaluation against adversarial examples

  • model extraction defenses

  • data poisoning simulations

Routine red teaming reveals the gaps that defender teams often miss.

8. Integrate AI risk into your cloud security program

AI security doesn’t live in its own silo. It must plug into:

  • CI/CD pipelines

  • identity and access management

  • cloud posture management

  • runtime threat detection

  • vulnerability and misconfig remediation

The most critical AI risks are cloud risks with AI characteristics – not standalone problems.

How Wiz helps organizations defend against AI cyberattacks

AI risks don’t exist in a vacuum – they’re cloud risks with new edges. Wiz provides a unified, cloud-native platform for discovering your AI footprint, securing AI systems from code to cloud, and detecting AI-driven threats before attackers can exploit them.

Wiz’s approach combines AI-BOM, Wiz Code (ASPM), the Wiz Security Graph, and Wiz Defend + SecOps AI Agent into an integrated defense layer built for modern AI workloads.

1. Map your entire AI attack surface with AI-BOM

The first step in defending AI systems is knowing what you’re running. Wiz generates an AI-BOM that automatically discovers and catalogs:

  • models (managed + self-hosted)

  • model endpoints & inference APIs

  • training datasets & snapshots

  • vector DBs, embeddings stores, and RAG pipelines

  • agent tools & MCP servers

  • AI-related secrets and keys

  • AI SaaS usage across cloud accounts

This gives teams full visibility into shadow AI deployments, unmanaged endpoints, and identity paths long before attackers find them.

2. Secure AI agents, endpoints, and pipelines with Wiz Code (ASPM)

Wiz Code delivers Application Security Posture Management, analyzing:

  • model-serving infrastructure

  • code and IaC that deploys AI workloads

  • AI agent tools and capabilities

  • model APIs and authentication

  • data access policies

  • supply-chain dependencies

Through the Wiz Security Graph, Wiz correlates AI-specific issues – like leaked model keys, over-permissive agent roles, exposed inference endpoints, or poisoned data – with cloud context to show which vulnerabilities are actually exploitable.

This prevents silent, high-impact misconfigurations that lead to model abuse, prompt injection amplification, or agent compromise.

3. Prevent AI data exposure with agentless cloud scanning

Wiz’s agentless architecture continuously detects:

  • exposed training datasets

  • public or cross-account model endpoints

  • risky identity paths for AI workloads

  • misconfigured vector databases

  • over-scoped roles for AI pipelines

  • leaked AI secrets and credentials across repos

The platform analyzes cloud identities, network exposure, and data sensitivity to reveal toxic combinations – like an overprivileged agent tied to an exposed model endpoint with access to regulated training data.

4. Detect AI-driven attacks in real time with Wiz Defend + SecOps AI Agent

AI attacks move fast. Wiz Defend protects AI workloads at runtime by detecting:

  • abnormal inference API patterns

  • malicious prompt activity

  • unauthorized model access

  • suspicious agent tool invocation

  • unusual data reads from training stores

  • signs of adversarial manipulation or drift

The Wiz SecOps AI Agent enhances detection and response by autonomously triaging every alert using Wiz’s graph context and IR knowledge base – while providing fully transparent reasoning so teams can trust its decisions. This helps security teams operate at AI speed without sacrificing oversight.

5. Correlate AI risks with cloud posture for unified prioritization

The Wiz Security Graph connects every risk – AI, cloud, identity, and data – into a single contextual model. This allows teams to:

  • surface attack paths involving AI workloads

  • understand blast radius for AI-related misconfigs

  • prioritize remediation based on real exploitability

  • trace issues back to code owners through Wiz Code

This unified view is what prevents AI security from becoming another siloed toolset.

See how Wiz secures AI systems from code to cloud.
Get a demo or Explore Wiz for AI security

Develop AI applications securely

Learn why CISOs at the fastest growing organizations choose Wiz to secure their organization's AI infrastructure.

For information about how Wiz handles your personal data, please see our Privacy Policy.