AI Security Posture Assessment

Get visibility into your AI pipelines, detect pipeline misconfigurations, and uncover attack paths to your AI services, allowing you to securely introduce AI into your environment.


What is AI-SPM?

AI-SPM (AI security posture management) is a new and critical component of enterprise cybersecurity that secures AI models, pipelines, data, and services. In short, AI-SPM helps organizations safely and securely weave AI into their cloud environments.

According to our State of AI in the cloud 2024 report, more than 70% of enterprises use managed AI services such as Azure AI services, Azure OpenAI Service, Azure ML studio, Amazon SageMaker, and Google Cloud’s Vertex AI. Furthermore, around 53% of enterprises use OpenAI or Azure OpenAI software development kits (SDKs), and others use SDKs like Hugging Face’s Transformers, XGBoost, LangChain, Vertex AI SDK, Streamlit, and tiktoken.

AI services and SDKs are susceptible to critical security risks and threats, making AI-SPM an urgent priority. McKinsey reports that the adoption of generative AI (GenAI) could add as much as $4.4 trillion to the global economy, making AI a strategic necessity for most enterprises. However, 91% of mid-market enterprises feel underprepared to adopt AI responsibly.

Because of these trends, the AI coverage gaps in traditional cybersecurity and cloud security solutions are more glaring than ever. Adopting AI without a robust AI security solution can endanger even the most resilient enterprises. The only way for businesses to adopt AI technologies safely and efficiently is to deploy a comprehensive, purpose-built AI security solution. Let’s take a closer look at AI-SPM.

Why is AI-SPM necessary?

As we’ve seen, the proliferation of GenAI and its integration with mission-critical infrastructure introduces a plethora of security risks that fall outside the visibility and capabilities of most security platforms.

According to Gartner, the four biggest GenAI risks include:

  1. Privacy and data security: To function accurately and efficiently, AI applications require access to large volumes of domain-specific datasets. Threat actors can target these GenAI tools, databases, and application programming interfaces (APIs) to exfiltrate sensitive proprietary data. Furthermore, internal negligence and hidden misconfigurations can expose AI data without an enterprise’s knowledge.

  2. Enhanced attack efficiency: Unfortunately for enterprises, cybercriminals are also adopting GenAI applications to scale and automate their attacks. AI-powered cyberattacks such as smart malware, inference attacks, jailbreaking, prompt injection, and model poisoning are becoming more common than ever before, and businesses can expect relentless attacks on their AI infrastructure.

  3. Misinformation: Merely adopting GenAI and large language models (LLMs) doesn't guarantee measurable benefits. The success of GenAI applications depends on the quality of their output. Adopting AI introduces the risk of AI hallucinations, which occur when AI applications invent information due to insufficient training data. And if threat actors manipulate or corrupt training data, GenAI applications might output incorrect or dangerous information.

  4. Fraud and identity risks: With AI capabilities, threat actors can now create deepfakes and fake biometric data to gain access to an enterprise’s AI infrastructure and applications. With fake biometrics, cybercriminals can easily infiltrate SDKs and GenAI APIs to escalate attacks, exfiltrate data, or gain a stronger foothold in enterprise cloud environments.

Any of the above risks could result in data breaches, compliance violations, reputational damage, and major financial setbacks. To understand the scale of damage that AI risks pose, take a look at our research on how Microsoft AI researchers accidentally exposed 38TB of data.

Wiz Research’s discovery of the large-scale repercussions caused by a single misconfigured token

In another recent example of potent AI risks, security teams found more than 100 malicious AI models on Hugging Face, a machine learning (ML) platform. Although some of these models carrying malicious payloads could have been security research experiments, their public availability puts enterprises at risk.

The bottom line? Adopting AI-SPM is non-negotiable. Enterprises need a comprehensive AI security solution to ensure proactive risk management, visibility, and discoverability across their AI stack. Failure to secure AI models can undo all the benefits of AI adoption—and even completely dismantle an enterprise’s IT ecosystem.

Key features and capabilities of AI-SPM

In this section, we’ll explore the key features and capabilities of a robust AI-SPM solution.

AI inventory management

Example AI inventory dashboard

AI-SPM solutions can comprehensively inventory all of an enterprise’s AI services and resources, which helps cloud security teams understand which AI assets their enterprise is responsible for and each asset’s corresponding security risks. Inventorying AI assets also provides enhanced visibility and discoverability.

Full-stack visibility

Example of the visibility an AI-SPM tool should offer

Regardless of which self-hosted or managed AI services, technologies, and SDKs you use, an AI-SPM solution must ensure complete visibility. Ideally, your AI-SPM solution should guarantee visibility without the need for agents. (An agentless approach to AI security is important because it enables comprehensive coverage without performance compromises.)

Training data security

High-quality training data is crucial for AI applications’ performance and accuracy. Therefore, AI-SPM solutions must extend existing data security capabilities to cover AI training data. It’s just as crucial that an AI-SPM solution can address attack paths that lead to training data and remediate exposed or poisoned training data.

Example detection of a fine-tuned model trained on a dataset containing secret data that grants permissions to an AWS IAM user
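To make the idea concrete, here is a minimal, hypothetical sketch of the kind of check involved: scanning fine-tuning records for secret-like strings (such as AWS access key IDs) before they reach a model. The pattern list and function names are illustrative only; production scanners use far larger rule sets and entropy analysis.

```python
import re

# Illustrative patterns only; real secret scanners cover many more formats.
SECRET_PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key_header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def scan_training_records(records):
    """Yield (record_index, secret_type) for every secret-like string found,
    so the dataset can be cleaned before fine-tuning."""
    for i, text in enumerate(records):
        for name, pattern in SECRET_PATTERNS.items():
            if pattern.search(text):
                yield i, name

dataset = [
    "User asked how to rotate credentials.",
    "Config dump: aws_access_key_id=AKIAIOSFODNN7EXAMPLE",
]
print(list(scan_training_records(dataset)))  # [(1, 'aws_access_key_id')]
```

A real AI-SPM tool would run checks like this across every data store feeding a training pipeline, not just an in-memory list.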

Real-world example: Researchers from Microsoft, the University of California, and the University of Virginia designed and implemented an AI poisoning attack called Trojan Puzzle. The Trojan Puzzle attack included training AI assistants to generate malicious code, and there’s no doubt that cybercriminals are designing similar weapons to use against enterprises’ GenAI applications and infrastructure.

Attack path analysis

An illustration of Wiz’s DSPM for AI capabilities

By analyzing AI models and pipelines with business, cloud, and workload contexts, an optimal AI-SPM solution provides a comprehensive view of attack paths within AI environments. The best AI-SPM solutions address attack paths early, not after they mature into large-scale AI security risks. To identify and analyze attack paths more comprehensively and accurately, AI-SPM solutions should also include AI model scanning.
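At its core, attack path analysis is a graph problem: model which resources can reach which others, then search for routes from an exposure point to a sensitive AI asset. The toy sketch below (all node names and edges are invented for illustration) finds every simple path from the internet to a training data bucket:

```python
from collections import deque

# Edge u -> v means "an attacker positioned at u can reach v".
EDGES = {
    "internet": ["public-endpoint"],
    "public-endpoint": ["inference-vm"],
    "inference-vm": ["model-registry", "training-bucket"],
    "model-registry": [],
    "training-bucket": [],
}

def attack_paths(start, target):
    """Breadth-first search returning every simple (cycle-free) path
    from an exposure point to a sensitive AI asset."""
    paths, queue = [], deque([[start]])
    while queue:
        path = queue.popleft()
        if path[-1] == target:
            paths.append(path)
            continue
        for nxt in EDGES.get(path[-1], []):
            if nxt not in path:  # keep paths simple
                queue.append(path + [nxt])
    return paths

print(attack_paths("internet", "training-bucket"))
# [['internet', 'public-endpoint', 'inference-vm', 'training-bucket']]
```

Real solutions enrich each edge with cloud, identity, and workload context so that severing a single link (for example, removing the endpoint's public exposure) demonstrably breaks the path.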

Built-in AI configuration rules

AI-SPM solutions should allow businesses to establish fundamental AI security baselines and controls. By cross-referencing a business’s AI configuration rules with AI services in real time, an AI-SPM solution can proactively detect misconfigurations such as exposed IP addresses and endpoints.
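The mechanics of such a baseline can be sketched as a small rule engine: each rule encodes one expectation about a service's configuration, and evaluation reports which rules a given service violates. The rule IDs, config keys, and schema below are hypothetical, not a real product API.

```python
# A minimal sketch of built-in AI configuration rules. All names are invented.
AI_CONFIG_RULES = [
    {
        "id": "AI-001",
        "description": "AI inference endpoint should not be publicly accessible",
        "check": lambda cfg: not cfg.get("publicly_accessible", False),
    },
    {
        "id": "AI-002",
        "description": "API authentication must be enabled on the endpoint",
        "check": lambda cfg: cfg.get("auth_enabled", False),
    },
]

def evaluate(service_config: dict) -> list:
    """Cross-reference a service configuration against the rule baseline
    and return the IDs of the rules it violates."""
    return [r["id"] for r in AI_CONFIG_RULES if not r["check"](service_config)]

endpoint = {"name": "demo-llm-endpoint", "publicly_accessible": True, "auth_enabled": False}
print(evaluate(endpoint))  # ['AI-001', 'AI-002']
```

An AI-SPM platform runs this kind of evaluation continuously against live cloud configurations rather than static dictionaries.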

Tools for developers and data scientists

AI-SPM solutions have to be dev-friendly. That’s why the ability to triage AI security risks is one of the most important capabilities an AI-SPM tool can offer, especially for developers and data scientists. By offering risk triaging, an AI-SPM solution ensures that developers and data scientists have a contextualized and prioritized view of risks across their AI pipelines.

Other dev-friendly capabilities and tools include project-based workflows and role-based access control (RBAC), which let the AI-SPM solution route security vulnerabilities and alerts to the relevant teams. Alerting is critical: Timely alerts facilitate swift and proactive remediation of AI-related security issues. Another benefit of AI-SPM is giving teams a personalized and prioritized view of vulnerabilities in their AI-incorporating projects, which can nurture a security culture focused on clarity and accountability.

AI pipeline misuse detection 

Example detection of an EC2 instance that is hosting PyTorch models that would execute malicious code if loaded

In addition to proactively pruning the AI attack surface and minimizing risks, AI-SPM solutions can detect if threat actors are hijacking an enterprise’s AI pipeline or if a user, either internal or external, is misusing an AI model. By providing customizable threat-detection rules to enforce across AI services and pipelines, AI-SPM can cover a wide range of potential misuse scenarios.
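One concrete detection technique behind findings like the EC2 example above: pickle-serialized model files (the default PyTorch format) can execute arbitrary code when loaded, so scanners statically inspect the pickle opcode stream for imports of dangerous modules without ever unpickling it. The sketch below uses Python's standard `pickletools` and handles only protocol-0 `GLOBAL`/`INST` opcodes; real scanners also resolve `STACK_GLOBAL` and maintain much larger blocklists.

```python
import pickle
import pickletools

# Modules whose appearance in a model file is a red flag: loading such a
# pickle would let the payload run arbitrary code via the REDUCE opcode.
DANGEROUS_MODULES = {"os", "posix", "nt", "subprocess", "builtins", "__builtin__", "runpy", "socket"}

def scan_pickle(data: bytes) -> list:
    """Statically scan a pickle stream (never unpickles it) and return
    any dangerous module imports it references."""
    findings = []
    for opcode, arg, _pos in pickletools.genops(data):
        # GLOBAL/INST carry "module name" as a space-separated string argument.
        if opcode.name in ("GLOBAL", "INST") and arg:
            module = str(arg).split(" ", 1)[0].split(".")[0]
            if module in DANGEROUS_MODULES:
                findings.append(str(arg))
    return findings

# A benign pickle (a plain list) triggers no findings...
assert scan_pickle(pickle.dumps([1, 2, 3])) == []
# ...while a handcrafted stream that would call os.system on load is flagged.
malicious = b"cos\nsystem\n(S'echo pwned'\ntR."
print(scan_pickle(malicious))  # ['os system']
```

This is also why many teams prefer formats like safetensors for model distribution: they carry no executable payload to scan for in the first place.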

DSPM vs. CSPM vs. ASPM vs. AI-SPM

In this section, we’ll highlight three critical security solutions that are similar to AI-SPM and explain how and why AI-SPM can round out a cybersecurity stack.

A DSPM (data security posture management) solution protects enterprise data, including personally identifiable information (PII), protected health information (PHI), payment card data (PCI), and secrets, across public and private buckets, serverless functions, hosted database servers, cloud-managed SQL databases, and other mission-critical platforms.

A CSPM (cloud security posture management) solution provides visibility, context, and remediation capabilities to prioritize and address cloud misconfigurations in real time.

An ASPM (application security posture management) solution provides a holistic set of tools and capabilities to secure custom applications as well as the entirety of the software development life cycle (SDLC).

As highlighted in the previous sections, an AI-SPM solution provides dedicated security capabilities for unique AI security threats and risks. AI-SPM addresses a critical deficiency in many of these other security solutions: the ability to comprehensively secure AI models and assets. For instance, AI-SPM extends DSPM visibility into AI training data, protects cloud-based GenAI models with techniques like tenant isolation, and addresses unique AI risks across the SDLC that traditional ASPM solutions may not cover.

AI-SPM addresses security risks that no other solution comprehensively tackles. In the contemporary threat landscape, no cybersecurity solution is complete without a powerful and holistic AI-SPM component. If businesses want to accelerate their AI adoption journey and evade the deluge of AI-related security threats, they must commission a cutting-edge AI-SPM solution.

Wiz's approach to AI-SPM

To gain a deep understanding of the AI services and risks in your environments, you need a world-class AI-SPM solution. When it comes to AI-SPM, Wiz is a trailblazer. Wiz was the first to coin the term AI-SPM and weave AI-SPM capabilities into its CNAPP solution. By choosing Wiz’s AI-SPM solution, you know you’re getting cutting-edge technology.

Wiz’s AI-SPM solutions provide full-stack visibility into AI pipelines, misconfigurations, data, and attack paths. With the protection of Wiz, you can adopt AI services and technologies for your mission-critical applications without any fear of internal or external AI security complications.

Orange builds many Generative AI services using OpenAI. Wiz’s support for Azure OpenAI Service gives us significantly improved visibility into our AI pipelines and allows us to proactively identify and mitigate the risks facing our AI development teams.

Steve Jarrett, Chief AI Officer, Orange

Wiz is also a founding member of the Coalition for Secure AI (CoSAI), an open-source initiative designed to give all practitioners and developers the best practices and tools they need to create Secure-by-Design AI systems.

Looking to set AI trends just like Wiz? All you need is a top-of-the-line AI-SPM solution, and Wiz has you covered. Get a demo now to see how our AI-SPM capabilities can help you strengthen and secure everything AI.
