
What is a Data Poisoning Attack?


Wiz Experts Team
5 minute read

What is data poisoning?

Data poisoning is a kind of cyberattack that targets the training data used to build artificial intelligence (AI) and machine learning (ML) models. Attackers try to slip misleading or incorrect information into the training dataset. This can be done by adding new data, changing existing data, or even deleting some data.

Potential impact of an attack

Systems that depend on data can become considerably less reliable and effective due to data poisoning. The following are some possible effects of these attacks.

  • Biases introduced into decision-making: Malicious data can introduce biases that skew results and decisions based on the poisoned data set. For instance, incorporating inaccurate or biased data into a financial model can result in bad investment choices that damage the organization's financial stability. Similarly, biased data in the medical field may result in inaccurate diagnoses and treatment recommendations, possibly jeopardizing patients' health.

  • Reduced accuracy, precision, and recall: Poisoned data can degrade a predictive model's overall accuracy, precision, and recall. Unreliable outputs and increased error rates may follow, compromising entire systems. In marketing, this could mean targeting the wrong demographic; in cybersecurity, it could mean overlooking real threats. The reduced effectiveness of these models undermines their value and can lead to significant losses.

  • Potential for system failure or exploitation: Data poisoning can cause a system to fail or leave it vulnerable to further attacks. In a type of data poisoning known as a backdoor attack, specific triggers are introduced into the data set; when these triggers are encountered, the system behaves unpredictably, allowing hackers to bypass security measures or manipulate system outputs for malicious purposes.

In critical infrastructure, vulnerabilities introduced via backdoor attacks can have severe consequences. For instance, the LAPSUS$ hacker group's attempts to poison AI model data reportedly combined several tactics, including setting up a backdoor to gain system access.
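To make the backdoor mechanism concrete, here is a minimal, hypothetical Python sketch of how an attacker might stamp a small trigger patch onto a fraction of training images and relabel them with an attacker-chosen class; a model trained on this set learns to associate the trigger with that class. The function name, trigger shape, and poison rate are illustrative assumptions, not a real attack toolchain.

```python
import numpy as np

def poison_with_backdoor(images, labels, target_class, rate=0.05, seed=0):
    """Stamp a small trigger patch onto a fraction of training images
    and relabel them as target_class (illustrative only)."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    for i in idx:
        images[i, -4:, -4:] = 1.0   # 4x4 trigger patch in the corner
        labels[i] = target_class    # attacker-chosen label
    return images, labels

# Example: poison 5% of a toy 28x28 grayscale dataset
X = np.random.rand(1000, 28, 28).astype(np.float32)
y = np.random.randint(0, 10, size=1000)
X_poisoned, y_poisoned = poison_with_backdoor(X, y, target_class=7)
```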

How a data poisoning attack works

Understanding the mechanisms behind data poisoning empowers you to safeguard your organization against such attacks. Only then can you identify associated behavior, observe patterns, and devise the appropriate mitigation plan. 

  1. Injecting false data: Attackers manipulate a data set by adding fictitious or deceptive data points, which results in inaccurate training and predictions. For example, manipulating a recommendation system to include false customer ratings can change how people judge a product's quality.

  2. Modifying existing data: Genuine data points are altered to introduce errors and mislead the system without adding any new data. An example is changing values in a financial transaction database to compromise fraud detection or miscalculate accrued profits and losses (a short code sketch of this technique follows the list).

  3. Deleting data: Removing critical data points creates gaps that lead to poor model generalization. For example, a cybersecurity system may become blind to certain network attacks if data from the attacks is deleted.
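As a hedged illustration of the second mechanism above, the sketch below (assuming a scikit-learn environment; the synthetic dataset and poison rates are arbitrary) flips a fraction of training labels and shows test accuracy degrading as the poison rate grows.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Toy binary classification task standing in for real training data
X, y = make_classification(n_samples=2000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

def flip_labels(y, rate, seed=0):
    """Simulate 'modifying existing data': flip a fraction of labels."""
    rng = np.random.default_rng(seed)
    y = y.copy()
    idx = rng.choice(len(y), size=int(len(y) * rate), replace=False)
    y[idx] = 1 - y[idx]  # invert the binary label
    return y

for rate in (0.0, 0.1, 0.3):
    model = LogisticRegression(max_iter=1000)
    model.fit(X_train, flip_labels(y_train, rate))
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"poison rate {rate:.0%}: test accuracy {acc:.3f}")
```

On this toy task, accuracy typically falls as the flip rate rises, which is exactly the kind of degradation described above.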

Targeted vs. non-targeted attacks

In targeted attacks, malicious actors aim to achieve specific outcomes, such as causing a system to misclassify certain inputs. Backdoor attacks fall into this category: specific triggers cause the system to behave in a predefined way. For instance, a poisoned security camera system might learn to disregard trespassers wearing a specific disguise.

In non-targeted attacks, hackers seek access to any system they can break into and then figure out how to profit from the exploit. These attacks are opportunistic by nature, not directed at a particular server, OS version, or framework. For example, a ransomware kit that scans open-source repositories for exposed secrets like API keys and access tokens will harvest every secret it can find; the threat actors then comb the results for systems they can break into and hold data for ransom.

Real-world examples

Real-world instances of data poisoning highlight the practical dangers of these attacks.

Adversarial attacks on language models

Studies have demonstrated that tampering with a language model's training data can make it produce inaccurate or damaging material. For instance, injecting biased data into a model's training set could cause it to generate politically slanted news articles.

Backdoor attacks on image recognition systems

In one paper, titled “Data Poisoning: A New Threat to Artificial Intelligence,” MIT’s student AI research group LabSix reportedly tricked Google’s object recognition AI into mistaking a 3D-printed turtle for a rifle using only minor pixel-level modifications. Similar attacks could be used to bypass facial recognition systems in security applications.

Poisoning attacks in autonomous vehicles

Contaminated training data for self-driving vehicles can lead to unsafe driving behaviors, such as misinterpreting traffic signs. For instance, modifying data so the model misreads stop signs as yield signs could cause accidents. Because these attacks can have devastating real-world results, they are yet another reason data poisoning should never be taken lightly.

Detection and prevention techniques

Defending against data poisoning requires a comprehensive approach. Combining robust data management with advanced detection techniques can make a big difference in countering threat actors.

Figure: Example visualization of potential data leakage or poisoning of SageMaker datasets imported from a publicly exposed bucket

Robust data validation

Strict validation procedures can stop the introduction of tainted data:

  • Data provenance: Monitoring the provenance and history of data helps locate and remove potentially harmful data sources; sourcing only from vetted, reliable providers reduces the risk of poisoning.

  • Cross-validation: Validating the model on several data subsets uncovers anomalies and inconsistencies, lowering the possibility of overfitting to tainted data and confirming that model performance stays within its expected range (a brief sketch follows this list).
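A rough sketch of the cross-validation idea (the model, dataset, and two-standard-deviation threshold are illustrative assumptions): a fold that scores far below its peers is worth auditing for tainted data.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

# Validate the model on several data subsets (10 folds)
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=10)

# Flag folds that score far below the rest -- a possible sign that
# those subsets contain tainted or inconsistent data
mean, std = scores.mean(), scores.std()
for i, score in enumerate(scores):
    if score < mean - 2 * std:
        print(f"fold {i}: score {score:.3f} is anomalous (mean {mean:.3f})")
```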

Anomaly detection algorithms

Sophisticated algorithms can uncover data anomalies that point to poisoning attempts:

  • Statistical methods: These find anomalies and trends that could point to data manipulation; clustering techniques, for example, can identify data points that deviate sharply from the rest of the distribution.

  • Machine learning-based detection: ML models trained to recognize patterns common to tainted data add another layer of protection, keeping tabs on data quality metrics and on the behavior of models that consume the data (see the sketch after this list).
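As a minimal sketch of ML-based detection, the snippet below uses scikit-learn's IsolationForest on synthetic feature vectors; the contamination rate and the injected outliers are invented for illustration.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Mostly well-behaved feature vectors plus a few injected outliers
rng = np.random.default_rng(0)
clean = rng.normal(0, 1, size=(980, 5))
suspect = rng.normal(6, 1, size=(20, 5))  # simulated poisoned points
X = np.vstack([clean, suspect])

# Isolation Forest labels points it considers anomalous with -1
detector = IsolationForest(contamination=0.02, random_state=0)
flags = detector.fit_predict(X)
print(f"flagged {np.sum(flags == -1)} suspicious training points")
```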

Regular system audits

Periodic system audits help ensure data dependability and identify early indicators of data poisoning:

  • Performance monitoring: Continuously tracking system performance on a validation set makes it possible to spot unusual declines in accuracy, precision, or recall that may be signs of poisoning.

  • Behavioral analysis: Analyzing system behavior on specific test cases or edge cases can reveal vulnerabilities caused by data poisoning, such as data ingested from an unvetted source the organization does not recognize (a simple monitoring sketch follows this list).
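A simple, hypothetical sketch of performance monitoring in plain Python: record validation accuracy after each retrain and alert when the latest value falls well below the running baseline. The threshold and history values are invented for illustration.

```python
# Illustrative tolerance for an acceptable accuracy drop
ACCURACY_DROP_THRESHOLD = 0.05

def check_for_degradation(history):
    """Alert if the latest validation accuracy falls well below the
    running baseline -- a possible symptom of poisoned training data."""
    if len(history) < 2:
        return
    baseline = sum(history[:-1]) / len(history[:-1])
    latest = history[-1]
    if baseline - latest > ACCURACY_DROP_THRESHOLD:
        print(f"ALERT: accuracy fell from ~{baseline:.3f} to {latest:.3f}; "
              "audit the most recent training data.")

# Hypothetical audit log: accuracy after each of four retrains
accuracy_history = [0.93, 0.94, 0.93, 0.82]
check_for_degradation(accuracy_history)
```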

Adversarial training

Adversarial techniques can teach systems how to detect and withstand poisoned data:

  • Adversarial examples: Introducing adversarial examples into training can help the system recognize and resist manipulation attempts (see the sketch after this list).

  • Defensive distillation: Used in deep neural networks, defensive distillation trains a student network on the softened outputs of a teacher network, smoothing the model's decision boundaries so it is harder to manipulate with crafted inputs and strengthening its security posture.
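As a hedged sketch of the adversarial-examples idea (assuming PyTorch; the tiny classifier, random batch, and epsilon are illustrative), the step below augments a clean batch with fast gradient sign method (FGSM) examples so the model also trains on perturbed inputs.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

def fgsm_examples(x, y, epsilon=0.1):
    """Craft adversarial examples with the fast gradient sign method."""
    x = x.clone().detach().requires_grad_(True)
    loss_fn(model(x), y).backward()
    return (x + epsilon * x.grad.sign()).detach()

# One illustrative training step on a random batch: learn from both
# the clean inputs and their adversarial counterparts
x = torch.randn(32, 20)
y = torch.randint(0, 2, (32,))
x_adv = fgsm_examples(x, y)
optimizer.zero_grad()
loss = loss_fn(model(x), y) + loss_fn(model(x_adv), y)
loss.backward()
optimizer.step()
```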

Data integrity is crucial: it remains the primary factor in decision-making across many industries, especially those adopting AI. Maintaining a competitive edge and guaranteeing the reliability and security of data-driven systems both depend on ongoing innovation and cooperation.

Wiz AI Security Posture Management (AI-SPM)

AI-SPM stands for AI Security Posture Management, a set of capabilities designed to secure AI pipelines and accelerate AI adoption while protecting against AI-related risks in cloud environments. Wiz became the first Cloud Native Application Protection Platform (CNAPP) to introduce AI-SPM capabilities by extending its platform to provide native AI security features fully integrated across the Wiz platform. 

Figure: Wiz visualization of potential data poisoning of Vertex AI datasets imported from a publicly exposed Google Cloud Storage bucket

Wiz's AI Security Posture Management (AI-SPM) capabilities offer several features to detect and mitigate data poisoning risks in AI systems:

  1. Full-stack visibility: Wiz's AI-BOM provides comprehensive visibility into AI pipelines, services, technologies, and SDKs without requiring agents. This visibility helps organizations identify potential entry points for data poisoning attacks.

  2. Data security for AI: Wiz extends its Data Security Posture Management (DSPM) capabilities to AI, automatically detecting sensitive training data and identifying risks of data leakage. This helps protect against unauthorized access or manipulation of training data that could lead to poisoning.

  3. Attack path analysis: Wiz's attack path analysis is extended to AI systems, allowing organizations to detect potential attack paths to AI models and training data. This helps identify vulnerabilities that could be exploited for data poisoning.

  4. AI misconfigurations detection: Wiz enforces secure configuration baselines for AI services with built-in rules and AI risk management to detect misconfigurations. Proper configurations can help prevent unauthorized access to training data and models.

  5. Model scanning: Wiz offers model scanning capabilities that can detect potential issues in AI models, including signs of data poisoning or unexpected behaviors resulting from compromised training data.

  6. AI Security Dashboard: Wiz provides an AI security dashboard that offers an overview of top AI security issues, including a prioritized queue of risks. This helps AI developers and security teams quickly identify and address potential data poisoning threats.

Figure: Example of Wiz's AI security dashboard

By combining these capabilities, Wiz's AI-SPM solution enables organizations to proactively identify and mitigate data poisoning risks across their AI infrastructure, from training data to deployed models.

Explore our site to learn more about AI security tools, AI security risks, and other topics, and sign up for a demo to see Wiz in action today. 

Develop AI Applications, Securely

Learn why CISOs at the fastest growing companies choose Wiz to secure their organization's AI infrastructure.

Request a demo
