Generative AI Security: Risks & Best Practices

Équipe d'experts Wiz
Main takeaways from Generative AI Security:
  • Generative AI security is a full-stack discipline that protects models, data pipelines, infrastructure, and interfaces throughout the entire AI lifecycle.

  • Primary risks include model vulnerabilities like data poisoning, sensitive data exposure through training sets or outputs, malicious misuse such as creating deepfakes, and navigating complex compliance frameworks like the EU AI Act.

  • Effective security programs are guided by frameworks like the OWASP Top 10 for LLM Applications and the NIST AI Risk Management Framework (RMF) to establish strong governance.

  • Core best practices include creating a complete AI Bill of Materials (AI-BOM) for visibility, implementing zero-trust access controls, and developing GenAI-specific incident response plans.

  • A unified platform like Wiz AI-SPM is crucial for discovering shadow AI, analyzing attack paths, and using cloud context to prioritize and remediate risks across your AI and cloud environments.

What is generative AI security? 

Generative AI security protects organizations from unique risks created by AI systems that generate content, code, or data. This specialized cybersecurity discipline addresses threats like prompt injection, model theft, and data poisoning that traditional security tools can't detect. Organizations implement GenAI security through technical controls, governance policies, and specialized AI security platforms.

Generative AI creates new content from training data—text, images, code, or videos. Popular examples include ChatGPT for text generation and DALL-E for image creation. These systems introduce security challenges because they process sensitive data and can be manipulated to produce harmful outputs.

GenAI offers massive productivity gains, but only when security risks are properly managed. Unprotected AI systems expose organizations to data breaches and compliance violations, particularly when employees find ways around their organization's restrictions, as one study found a majority of employees have done.

GenAI Security Best Practices Cheat Sheet

This cheat sheet provides a practical overview of the 7 best practices you can adopt to start fortifying your organization’s GenAI security posture.

AI's rapid advancement comes a host of new security challenges. The research team at Wiz have been at the forefront of identifying and analyzing these emerging threats. Their research has uncovered several critical vulnerabilities and risks specific to AI systems and infrastructure, including:

These findings highlight the urgent need for enhanced security measures in AI development and deployment, emphasizing the importance of continuous vigilance in this rapidly evolving field.

Figure 1: Wiz: The first CNAPP to provide AI security for OpenAI customers

What are the main risks associated with GenAI?

GenAI security risks fall into four critical categories:

Model vulnerabilities

Model vulnerabilities target the AI algorithms themselves, exploiting weaknesses in how models process inputs and generate outputs. These attacks can compromise model integrity and steal proprietary algorithms.

  • LLM security risks are often exploited via adversarial attacks, where cybercriminals manipulate input data to mess with the model's output.

  • Data poisoning is a common technique used to breach LLM security, involving the corruption of AI and machine learning (ML) model training data. For example, researchers demonstrated that feeding a model just 50 poisoned images was enough to make Stable Diffusion produce bizarre, malformed outputs.

  • Another dangerous LLM security risk is model theft. Model theft occurs when threat actors breach unsecured GenAI models to use them for malicious purposes or even steal them. An example? TPUXtract, a recently unveiled attack method that can help criminals steal AI models.

Data-related risks

Data-related risks threaten the sensitive information that powers AI systems. Training datasets often contain proprietary business data, customer information, or confidential documents that attackers can extract or expose.

Sensitive data exposure is perhaps the most potent data-related AI security risk. If businesses fail to anonymize their training data, it can be intercepted or exposed. And if businesses fail to secure their APIs or third-party sharing protocols, it could lead to similar disasters.

Don't forget about data breaches, either. If threat actors breach GenAI applications and tools, we're talking about millions in potential damages.

Misuse scenarios

Misuse scenarios occur when attackers weaponize AI systems to create harmful content. This includes generating deepfakes for fraud, creating malicious code, or producing biased outputs that damage reputation or enable discrimination.

Malicious products of GenAI, such as deepfakes, can do more than harm the reputation of individuals and organizations, though. Consider this particularly problematic scenario: criminals using deepfakes to bypass biometric security systems. When threat actors bypass these systems, they can access even deeper vaults of sensitive enterprise data.

Compliance and governance risks

Compliance and governance risks emerge from evolving AI regulations and existing privacy laws. Organizations must navigate frameworks like the EU AI Act while ensuring GenAI systems comply with GDPR, CCPA, and industry-specific requirements.

Spearheading this influx of new AI compliance regulations and frameworks is the EU AI Act. Some regulations (like the EU AI Act) are mandatory, while others are more like guidelines—which is why organizations need to pay close attention to untangle the web of AI compliance.

What are some frameworks and principles that can help secure GenAI?

Security frameworks provide structured approaches to managing GenAI risks. These established guidelines help organizations implement comprehensive AI security programs:

  • OWASP Top 10 for LLM Applications: This list from OWASP acknowledges that the proliferation of LLM applications brings numerous AI security risks. It provides 10 major LLM security risks, including training data poisoning and prompt injection, and offers suggestions and strategies for how to avoid them or keep them under control.

  • Gartner’s AI TRiSM: AI TRiSM is a framework designed to help you stay on top of AI security risks and build a strong AI governance posture. It has four main components: explainability / model monitoring, ModelOps, AI application security, and privacy. By using AI TRiSM, you can cultivate trust among customers and peers, fortify GenAI pipelines, and comply with AI laws and regulations.

Figure 2: Gartner’s AI TRiSM framework (Source: Gartner)
  • NIST AI RMF: The NIST AI RMF provides a step-by-step approach to securing the AI lifecycle. The four main steps of the NIST AI framework are Govern, Map, Measure, and Manage. To better address the unique challenges of modern systems, NIST released a Generative Artificial Intelligence Profile in July 2024. The NIST AI RMF also weaves in ethical and social considerations, which are crucial aspects of GenAI security.

  • FAIR-AIR Approach Playbook: A product of the FAIR Institute, the FAIR-AIR playbook tackles five attack vectors associated with GenAI, including shadow GenAI, managed LLMs, and active cyberattacks. The playbook also has five main steps, starting with contextualizing GenAI risks and ending with making decisions regarding mitigation.

  • Architectural Risk Analysis of LLM: Published by the Berryville Institute for Machine Learning, this document is a comprehensive look into LLM security risks, with a whopping 81 LLM security risks. It's a great resource for everyone from CISOs to policymakers. And don’t worry about getting lost in this long list: The document also magnifies the top 10 LLM security risks you need to look out for.

  • AWS Generative AI Security Scoping Matrix: This unique security resource from AWS breaks down GenAI security into distinct use cases. The five use cases or "scopes" include consumer apps, enterprise apps, pre-trained models, fine-tuned models, and self-trained models. So no matter what kind of GenAI applications you're working with, you'll find specific ways to address AI security risks.

  • MITRE ATLAS: Introduced as a supporting resource to the MITRE ATT&CK framework, MITRE ATLAS is a knowledge base that includes the latest information on attack techniques used against AI applications. It includes 91 attack techniques across 14 different kinds of tactics. Crucially, MITRE ATLAS also suggests detailed mitigation guidelines and strategies for each of these attack types. If you're looking for specific ways to address adversarial AI attacks, MITRE ATLAS is a good bet.

  • Secure AI Framework (SAIF): A Google Cloud initiative, SAIF is a conceptual framework that can help you keep your AI systems out of harm's way. SAIF highlights pressing AI security risks and also includes controls to mitigate them. If you want to understand AI security specific to your organization, consider using SAIF's Risk Self-Assessment Report. 

GenAI security best practices

GenAI security best practices provide actionable steps to protect your AI systems and data. Implement these measures in priority order to build comprehensive protection:

Prioritize your AI bill of materials (AI-BOM)

Example of an AI-BOM filtered for the Azure AI services

Complete AI visibility starts with cataloging every AI system in your organization. An AI Bill of Materials (AI-BOM) identifies all AI models, training datasets, APIs, and tools across your environment. This inventory reveals shadow AI deployments and provides the foundation for risk assessment and security controls.

Implementation example:

  • Document all AI models in use, including vendor models (e.g., OpenAI, Anthropic), custom models, and embedded models in third-party applications

  • Map data flows to understand where training and inference data comes from and how it's processed

  • Use automated discovery tools to identify undocumented AI systems (shadow AI)

Potential Success metrics:

  • 100% inventory of production GenAI systems

  • Reduce unknown AI assets by 90% within the first two inventory cycles

  • Complete data lineage for all training datasets

Key stakeholders:

  • Security Teams: Lead the inventory process

  • Data Science/ML Teams: Provide information about models and data

  • IT: Assist with infrastructure mapping

  • Business Units: Disclose departmental AI use

Implement zero-trust controls

Zero-trust controls assume no AI system or user is inherently trustworthy. This approach implements least privilege access, continuous authentication, and real-time monitoring for all AI interactions. Zero-trust architecture prevents unauthorized access to models and training data while detecting suspicious behavior.

Implementation example:

  • Apply identity-based access controls to all GenAI endpoints (e.g., require SSO for access to model APIs)

  • Implement context-based API rate limiting (e.g., per user, per endpoint, or per model access level).

  • Set up continuous monitoring of user interactions with models to detect anomalies

  • Segment your GenAI environment from other production systems

Potential Success metrics:

  • 100% of GenAI systems accessible only through authenticated and authorized channels

  • 90%+ reduction in privileged account access to training data

  • No major data leakage incidents with regulatory impact; detection and remediation of minor incidents within 24 hours

  • Complete audit logs for all model interactions

Key stakeholders:

  • CISO Office: Strategy and oversight

  • Security Engineering: Implementation of controls

  • ML Operations: Integration with AI pipelines

  • IAM Team: User access management

Secure your GenAI data

Data security protects the sensitive information that powers AI systems. Start by mapping data flows from collection through training to inference. Implement encryption for data at rest, tokenization for sensitive fields, and input sanitization to prevent injection attacks.

Implementation example:

  • Encrypt all training data at rest using AES-256

  • Use regex filtering in combination with machine-learning-based anomaly detection to detect prompt injection.

  • Apply differential privacy techniques to protect sensitive information in training data

  • Create role-based access controls for different datasets based on sensitivity

Potential Success metrics:

  • 100% of sensitive data encrypted or anonymized

  • Zero data leakage incidents from GenAI models

  • Complete validation of all user inputs before processing

  • Comprehensive data protection applied to all stages (training, fine-tuning, inference)

Key stakeholders:

  • Data Security Team: Lead implementation

  • Data Science/ML Teams: Adapt models to work with protected data

  • Privacy Office: Ensure compliance with data protection regulations

  • Development Teams: Implement input validation controls

Untangle your AI compliance requirements

Compliance mapping identifies which regulations apply to your AI systems. It is crucial to create a matrix linking each AI use case to relevant requirements from frameworks like the EU AI Act, especially since violations can result in fines of up to €40 million or 7% of worldwide turnover. This mapping guides the technical controls and documentation required to avoid such penalties.

Implementation example:

  • Create a regulatory mapping matrix specific to your GenAI use cases

  • Implement technical controls required by regulations (e.g., GDPR's right to explanation for AI decisions)

  • Develop a compliance calendar for upcoming AI regulations

  • Establish a data sovereignty framework to ensure local processing when required

Potential Success metrics:

  • 100% documentation of applicable regulations for each GenAI use case

  • Zero compliance violations in quarterly reviews

  • Complete impact assessments for high-risk AI applications

  • Successful demonstration of controls during regulatory audits

Key stakeholders:

  • Legal/Compliance: Primary owners for regulatory mapping and documentation

  • Data Science Teams: Responsible for implementing technical controls

  • CISO Office: Accountable for overall compliance strategy

  • Procurement: Ensuring vendor AI systems meet compliance standards

Kickstart GenAI-specific incident response plans

No matter how strong your AI security measures are, you're still going to face incidents. By developing GenAI-specific incident response plans, especially those with a dash of automation to support your incident response teams, you can catch and contain incidents early.

Implementation example:

  • Create playbooks for common GenAI incidents (data leakage, model manipulation, harmful outputs)

  • Implement automated detection and response for known patterns (e.g., automatic shutdown for prompt injection attempts)

  • Establish clear escalation paths for different types of AI incidents

  • Conduct tabletop exercises simulating GenAI security breaches

Success metrics:

  • Containment of critical GenAI incidents within 30 minutes, with full resolution within defined SLAs

  • 100% of incident responders trained on AI-specific scenarios

  • Quarterly testing and updating of response playbooks

  • Post-incident analysis completed for all AI security events

Key stakeholders:

  • Incident Response Team: Plan development and execution

  • AI/ML Operations: Technical response capabilities

  • Communications Team: Managing external communications during incidents

  • Legal: Addressing compliance implications of incidents

Get visibility into security posture with AI security tools

AI security tools provide specialized protection for AI systems that traditional security solutions can't address. AI Security Posture Management (AI-SPM) platforms offer continuous monitoring, risk assessment, and automated remediation for AI-specific threats like model theft and prompt injection.

Potential Success metrics:

  • Security tools deployed to cover all mission-critical GenAI applications, with continuous monitoring for coverage gaps

  • 90%+ of critical vulnerabilities remediated within 30 days

  • Automated detection of model drift and anomalous behavior

  • Complete visibility into AI attack paths and security posture

Key stakeholders:

  • Security Engineering: Tool selection and implementation

  • DevSecOps: Integration into development pipelines

  • ML Operations: Day-to-day management

  • Risk Management: Use of tool data for risk assessments

How Wiz can help you with GenAI security

Wiz AI-SPM provides comprehensive protection for GenAI systems through full-stack visibility, automated risk detection, and contextual remediation guidance. As organizations increasingly adopt AI – with Wiz's State of AI in the Cloud 2025 finding over 85% using either managed or self-hosted AI services—the pioneer of AI Security Posture Management offers complete AI inventory management, attack path analysis, and integration with existing cloud security workflows.

Figure 3: Wiz AI-SPM provides unparalleled visibility into GenAI security risks

So what does Wiz AI-SPM offer? Full-stack visibility into GenAI pipelines? Check. Continuous detection of GenAI risks and misconfigurations? Check. Analyses of AI attack paths? Check. A light on shadow AI? Check. Wiz is the pioneer of AI-SPM, so we’ve always been one step ahead of AI security risks.

Get a demo now to see how Wiz can secure GenAI in your organization.

Accelerate AI Innovation

Securely Learn why CISOs at the fastest growing companies choose Wiz to secure their organization's AI infrastructure.

Pour plus d’informations sur la façon dont Wiz traite vos données personnelles, veuillez consulter notre Politique de confidentialité.

Frequently asked questions about generative AI security